S-values calculated from a tomographic head/brain model for brain imaging
NASA Astrophysics Data System (ADS)
Chao, Tsi-chian; Xu, X. George
2004-11-01
A tomographic head/brain model was developed from the Visible Human images and used to calculate S-values for brain imaging procedures. This model contains 15 segmented sub-regions including caudate nucleus, cerebellum, cerebral cortex, cerebral white matter, corpus callosum, eyes, lateral ventricles, lenses, lentiform nucleus, optic chiasma, optic nerve, pons and middle cerebellar peduncle, skull CSF, thalamus and thyroid. S-values for C-11, O-15, F-18, Tc-99m and I-123 have been calculated using this model and a Monte Carlo code, EGS4. Comparison of the calculated S-values with those calculated from the MIRD (1999) stylized head/brain model shows significant differences. In many cases, the stylized head/brain model resulted in smaller S-values (as much as 88%), suggesting that the doses to a specific patient similar to the Visible Man could have been underestimated using the existing clinical dosimetry.
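The S-values above follow the standard MIRD definition: absorbed dose to a target region per unit cumulated activity in a source region, summed over the radionuclide's emissions. A minimal sketch of that definition in Python; the emission energies, absorbed fractions, and region mass below are purely illustrative assumptions, not values from the paper:

```python
# MIRD S-value: S(target <- source) = sum_i [ Delta_i * phi_i ] / m_target
# Delta_i: mean energy emitted per decay for emission i (J per decay)
# phi_i:   absorbed fraction for that emission (target <- source)
# m_target: target-region mass (kg); result is Gy per (Bq*s)

def s_value(emissions_j, absorbed_fractions, target_mass_kg):
    """Absorbed dose to the target per unit cumulated activity (Gy / Bq s)."""
    return sum(delta * phi
               for delta, phi in zip(emissions_j, absorbed_fractions)) / target_mass_kg

# Hypothetical two-emission nuclide (assumed numbers for illustration only)
deltas = [8.0e-14, 2.4e-14]   # J per decay
phis   = [0.35, 0.60]         # absorbed fractions
mass   = 0.140                # kg, a brain-subregion-scale mass (assumed)
print(s_value(deltas, phis, mass))
```

A Monte Carlo code such as EGS4 is what supplies the absorbed fractions φ for a voxelized anatomy; the formula itself is just this weighted sum.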
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Calculation of fuel economy values for... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later Model...
Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten
2011-05-01
To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 at the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%.
Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20 which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
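The LKB model used above for lung NTCP combines a generalized equivalent uniform dose (gEUD) with a probit dose-response curve. A minimal sketch, with a hypothetical DVH and illustrative parameter values for TD50, m, and n (these are assumptions for demonstration, not the two parameter sets compared in the study):

```python
import math

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose from a (dose, fractional-volume) DVH."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def ntcp_lkb(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: normal CDF of (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical lung DVH: dose-bin centers (Gy) and fractional volumes summing to 1
doses   = [5.0, 15.0, 25.0, 45.0]
volumes = [0.4, 0.3, 0.2, 0.1]
print(ntcp_lkb(doses, volumes, td50=30.8, m=0.37, n=0.99))
```

With n near 1 the gEUD approaches the mean lung dose, which is why pneumonitis NTCP tracks mean-dose-like quantities; the sensitivity to the parameter set reported in the abstract enters through TD50, m, and n.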
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.
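Experimentally, a Φ value is the ratio of the mutation-induced change in the folding activation free energy to the mutation-induced change in overall stability. A minimal sketch of that kinetic definition (the rate constants and stability change below are hypothetical; the paper's statistical-mechanical calculation of Φ is more involved):

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def phi_value(kf_wt, kf_mut, ddg_eq, temp_k=298.0):
    """Phi = ddG(transition state) / ddG(native state).
    kf_wt, kf_mut: folding rate constants of wild type and mutant.
    ddg_eq: mutation-induced destabilization of the native state (kcal/mol)."""
    ddg_ts = -R * temp_k * math.log(kf_mut / kf_wt)  # shift of the folding barrier
    return ddg_ts / ddg_eq

# Hypothetical mutant: folding rate drops 10-fold, stability loss of 2.0 kcal/mol
print(phi_value(kf_wt=100.0, kf_mut=10.0, ddg_eq=2.0))
```

Φ near 1 indicates the mutated residue is as structured in the transition state as in the native state; Φ near 0 indicates it is still unfolded there, which is the structural information the analysis extracts.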
Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako
2016-11-01
Iodine intake by adults in farming districts in Northeastern Japan was evaluated by two methods: (1) calculation based on government-approved food composition tables and (2) instrumental measurement. The correlation between the two resulting values and a regression model for the calibration of the calculated values are presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. In cases where the value for iodine content was not available in the FCT, it was assumed to be zero for that food item (calculated values). Iodine content was also measured by ICP-MS (measured values). The calculated and measured values yielded geometric means (GM) of 336 and 279 μg/day, respectively. There was no statistically significant (p > 0.05) difference between the calculated and measured values. The correlation coefficient was 0.646 (p < 0.05). Given this high correlation coefficient, a simple regression line can be applied to estimate the measured value from the calculated value. A survey of the literature suggests that the values in this study were similar to values that have been reported to date for Japan, and higher than those for other countries in Asia. The iodine intake of Japanese adults was 336 μg/day (GM, calculated) and 279 μg/day (GM, measured). The two values correlated well enough, with a correlation coefficient of 0.646, that a regression model (Y = 130.8 + 1.9479X, where X and Y are the measured and calculated values, respectively) could be used to calibrate the calculated values.
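The reported regression expresses the FCT-calculated intake Y in terms of the ICP-MS-measured intake X, so calibrating a calculated value amounts to inverting the fitted line. A minimal sketch (the inversion step is our illustration of how such a calibration would be applied, not a formula given in the abstract):

```python
def measured_from_calculated(y_calc):
    """Invert the reported regression Y = 130.8 + 1.9479 * X,
    where Y is the FCT-calculated iodine intake and X the
    ICP-MS-measured intake, both in ug/day."""
    return (y_calc - 130.8) / 1.9479

# Round-trip check: plugging the estimate back into the fitted line
# recovers the calculated value.
x_est = measured_from_calculated(336.0)
print(x_est, 130.8 + 1.9479 * x_est)
```

Note that a regression fitted on individual subjects need not map one geometric mean onto the other, so the inversion is meaningful per subject rather than for population summaries.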
Modeling of HF propagation at high latitudes on the basis of IRI
NASA Astrophysics Data System (ADS)
Blagoveshchensky, D. V.; Maltseva, O. A.; Anishin, M. M.; Rogov, D. D.; Sergeeva, M. A.
2016-02-01
The paper presents the results of a comparison between modeling calculations and oblique-sounding ionograms for high-latitude HF radio paths of the Arctic and Antarctic Research Institute (AARI), carried out for February 13-14, 2014 (quiet conditions). The International Reference Ionosphere 2012 model of the ionosphere (IRI-2012) was used for the study. The comparison shows that, without adaptation to current diagnostics, the IRI model does not reflect the real state of the high-latitude ionosphere even under quiet conditions. It was found that, in general, the observed maximum usable frequency (MUF) values exceeded those obtained from the model. Adaptation of the model to current diagnostics brings the simulated MUF values significantly closer to the observed MUF. The following parameters were used for the study: critical frequencies foF2 measured by ionosondes located near the considered paths, frequencies calculated on the basis of observed TEC values, and median values of the equivalent slab thickness of the ionosphere. The relative error of the MUF calculation, averaged over all cases for one hop, was 23.6% for the initial IRI model. This error decreased by 4% for calculations based on observed TEC and by 6% for adaptation to foF2. The higher the latitude of the studied radio path, the greater the difference between the observed and simulated MUF values. It was concluded that a principal cause of this difference was the deviation of the calculated maximum ionospheric height values (hmF2) from the observed hmF2. An additional model update using hmF2 values obtained from the Tromso station led to a better match between the calculated and observed MUF values for all radio paths. The analysis of experimental data showed that non-predicted events (such as traveling ionospheric disturbances, M- and N-modes, lateral modes, triplets, and unusual scatter effects) sometimes occurred on high-latitude paths even under quiet conditions.
Stock price prediction using geometric Brownian motion
NASA Astrophysics Data System (ADS)
Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM
2018-03-01
Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction is made, the expected stock price formulation and the 95% confidence level are determined. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the value of the return, followed by estimating the values of volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and calculating the 95% confidence level. Based on this research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as demonstrated by a forecast MAPE value ≤ 20%.
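The step sequence above (log returns, volatility and drift estimates, then a simulated forecast that would be scored by MAPE) can be sketched as follows; the price history and random seed are illustrative assumptions, not data from the study:

```python
import math
import random

def gbm_forecast(prices, n_days, seed=42):
    """Forecast future prices with geometric Brownian motion.
    Daily drift and volatility are estimated from historical log returns."""
    returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    sigma = math.sqrt(var)        # daily volatility estimate
    mu = mean + 0.5 * var         # GBM drift, since E[log return] = mu - sigma^2/2
    rng = random.Random(seed)
    path, s = [], prices[-1]
    for _ in range(n_days):
        z = rng.gauss(0.0, 1.0)   # one standard normal shock per day
        s *= math.exp((mu - 0.5 * sigma ** 2) + sigma * z)
        path.append(s)
    return path

def mape(actual, forecast):
    """Mean absolute percentage error; values <= 20% are read as accurate here."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical daily closing prices (illustrative numbers only)
hist = [100.0, 101.2, 100.5, 102.3, 103.1, 102.8, 104.0]
print(gbm_forecast(hist, n_days=5))
```

In practice many simulated paths would be averaged to obtain the expected price, with the 2.5th and 97.5th percentiles giving the 95% confidence band the abstract refers to.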
The Easy Way of Finding Parameters in IBM (EWofFP-IBM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turkan, Nureddin
E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated, together with B(E2) and B(M1) values, by using the PHINT and/or NP-BOS codes. Correct calculations of the energies must be obtained to produce such results, and correct parameter values are in turn needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. Here, the big problem is to find the best-fitted parameter values of the model. By using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for {sup 102-110}Pd and {sup 102-110}Ru isotopes were first obtained, and then the energies were calculated. In the end, it was seen that the calculated results are in good agreement with the experimental ones. In addition, it was shown that the energy values obtained by using the EWofFP-IBM are clearly better than the previous theoretical data.
Development of MATLAB Scripts for the Calculation of Thermal Manikin Regional Resistance Values
2016-01-01
USARIEM Technical Note TN16-1: Development of MATLAB® Scripts for the Calculation of Thermal Manikin Regional Resistance Values. The scripts are intended for use by thermal manikin and modeling personnel. Steps to operate the scripts as well as the underlying calculations are outlined in detail.
40 CFR 600.209-85 - Calculation of fuel economy values for labeling.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Calculation of fuel economy values for... (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later Model Year Automobiles...
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... economy data from tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel... city, highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using...
NASA Astrophysics Data System (ADS)
De Meij, A.; Vinuesa, J.-F.; Maupas, V.
2018-05-01
The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed, in which the microphysics, cumulus parameterization schemes, and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model shows the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the sensitivity of the calculated GHI values to changes in the microphysics, cumulus parameterization, and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, sizes, and lifetimes of the cloud droplets, whereas the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on the GHI calculations.
Interpretation of nitric oxide profile observed in January 1992 over Kiruna
NASA Astrophysics Data System (ADS)
Kondo, Y.; Kawa, S. R.; Lary, D.; Sugita, T.; Douglass, Anne R.; Lutman, E.; Koike, M.; Deshler, T.
1996-05-01
NO mixing ratios measured from Kiruna (68°N, 20°E), Sweden, on January 22, 1992, revealed values much smaller than those observed at midlatitude near equinox and had a sharper vertical gradient around 25 km. Location of the measurements was close to the terminator and near the edge of the polar vortex, which is highly distorted from concentric flow by strong planetary wave activities. These conditions necessitate accurate calculation, properly taking into account the transport and photochemical processes, in order to quantitatively explain the observed NO profile. A three-dimensional chemistry and transport model (CTM) and a trajectory model (TM) were used to interpret the profile observations within their larger spatial, temporal, and chemical context. The NOy profile calculated by the CTM is in good agreement with that observed on January 31, 1992. In addition, model NOy profiles show small variabilities depending on latitudes, and they change little between January 22 and 31. The TM uses the observed NOy values. The NO values calculated by the CTM and TM agree with observations up to 27 km. Between 20 and 27 km the NO values calculated by the trajectory model including only gas phase chemistry are much larger than those including heterogeneous chemistry, indicating that NO mixing ratios were reduced significantly by heterogeneous chemistry on sulfuric acid aerosols. Very little sunlight to generate NOx from HNO3 was available, also causing the very low NO values. The good agreement between the observed and modeled NO profiles indicates that models can reproduce the photochemical and transport processes in the region where NO values have a sharp horizontal gradient. Moreover, CTM and TM model results show that even when the NOy gradients are weak, the model NO depends upon accurate calculation of the transport and insolation for several days.
NASA Astrophysics Data System (ADS)
Okuyama, Tadahiro
The Kuhn-Tucker model, which has been studied in recent years, is a benefit valuation technique using revealed-preference data; its distinguishing feature is that it treats various patterns of corner solutions flexibly. It is widely known in benefit calculation using revealed-preference data that the value of a benefit changes depending on the functional form. However, there are few studies that examine the relationship between utility functions and benefit values in the Kuhn-Tucker model. The purpose of this study is to analyze the influence of the functional form on the value of a benefit. Six types of utility functions are employed for the benefit calculations. Data on the recreational activity at 26 beaches of Miyagi Prefecture were employed. The calculation results indicated that the functional forms of Phaneuf and Siderelis (2003) and Whitehead et al. (2010) are useful for benefit calculations.
NASA Technical Reports Server (NTRS)
Kurucz, R. L.; Peytremann, E.
1975-01-01
The gf values for 265,587 atomic lines selected from the line data used to calculate line-blanketed model atmospheres are tabulated. These data are especially useful for line identification and spectral synthesis in solar and stellar spectra. The gf values are calculated semiempirically by using scaled Thomas-Fermi-Dirac radial wavefunctions and eigenvectors found through least-squares fits to observed energy levels. Included in the calculation are the first five or six stages of ionization for sequences up through nickel. Published gf values are included for elements heavier than nickel. The tabulation is restricted to lines with wavelengths less than 10 micrometers.
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. In certain cases the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to try to formulate a better modelling method that ensures model quality. The research results are as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can significantly change the model. Hence a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
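The interval idea described above can be made concrete for simple linear regression. A minimal sketch of a 95% prediction interval for a new observation (the trip-production data and the t critical value of 2.447 for 6 degrees of freedom are illustrative assumptions):

```python
import math

def prediction_interval(xs, ys, x0, t_crit):
    """95% prediction interval for a new observation at x0 under simple
    linear regression; t_crit is the two-sided t critical value for n - 2 d.f."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(sse / (n - 2))  # residual standard error
    half = t_crit * s * math.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / sxx)
    yhat = b0 + b1 * x0
    return yhat - half, yhat + half

# Hypothetical trip-production data: household size vs trips per day
xs = [1, 2, 2, 3, 4, 4, 5, 6]
ys = [2.1, 3.0, 3.4, 4.2, 5.1, 4.8, 6.2, 7.0]
print(prediction_interval(xs, ys, x0=3.5, t_crit=2.447))
```

A model with an excellent R2 but a wide prediction interval at typical x0 values would fail the proposed quality measure, which is exactly the point the paragraph makes about R2 alone.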
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for a model type. 600.207-93 Section 600.207-93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model...
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for a model type. 600.207-86 Section 600.207-86 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model...
Ni, Y.; Ma, Q.; Ellis, G.S.; Dai, J.; Katz, B.; Zhang, S.; Tang, Y.
2011-01-01
Based on quantum chemistry calculations for normal octane homolytic cracking, a kinetic hydrogen isotope fractionation model for methane, ethane, and propane formation is proposed. The activation energy differences between D-substituted and non-substituted methane, ethane, and propane are 318.6, 281.7, and 280.2 cal/mol, respectively. In order to determine the effect of the entropy contribution for hydrogen isotopic substitution, a transition state for ethane bond rupture was determined based on density functional theory (DFT) calculations. The kinetic isotope effect (KIE) associated with bond rupture in D- and H-substituted ethane results in a frequency factor ratio of 1.07. Based on the proposed mathematical model of hydrogen isotope fractionation, one can potentially quantify natural gas thermal maturity from measured hydrogen isotope values. Calculated gas maturity values determined by the proposed mathematical model using δD values in ethane from several basins in the world are in close agreement with similar predictions based on the δ13C composition of ethane. However, gas maturity values calculated from field data of methane and propane using both hydrogen and carbon kinetic isotopic models do not agree as closely. It is possible that δD values in methane may be affected by microbial mixing and that propane values might be more susceptible to hydrogen exchange with water or to analytical errors. Although the model used in this study is quite preliminary, the results demonstrate that kinetic isotope fractionation effects in hydrogen may be useful in quantitative models of natural gas generation, and that δD values in ethane might be more suitable for modeling than comparable values in methane and propane. © 2011 Elsevier Ltd.
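In Arrhenius form, the activation-energy difference and frequency factor ratio quoted above combine into a kinetic isotope effect of roughly exp(ΔEa/RT) scaled by the frequency factor ratio. A minimal sketch using the ethane numbers from the abstract; the temperature chosen and the direction in which the 1.07 ratio is applied are our assumptions for illustration:

```python
import math

R = 1.987  # gas constant, cal mol^-1 K^-1

def kie(delta_ea_cal, a_ratio, temp_k):
    """Arrhenius-form kinetic isotope effect k_H / k_D:
    (A_H / A_D) * exp((Ea_D - Ea_H) / (R * T))."""
    return a_ratio * math.exp(delta_ea_cal / (R * temp_k))

# Ethane: 281.7 cal/mol activation-energy difference and a frequency factor
# ratio of 1.07 (both from the abstract); 473 K is an assumed gas-window
# temperature for illustration.
print(kie(281.7, 1.07, 473.0))
```

The KIE decreases with temperature, which is the physical basis for reading thermal maturity off measured δD fractionation.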
Generation of calibrated tungsten target x-ray spectra: modified TBC model.
Costa, Paulo R; Nersissian, Denise Y; Salvador, Fernanda C; Rio, Patrícia B; Caldas, Linda V E
2007-01-01
In spite of recent advances in the experimental detection of x-ray spectra, theoretical or semi-empirical approaches for determining realistic x-ray spectra in the range of diagnostic energies are important tools for planning experiments, estimating radiation doses in patients, and formulating radiation shielding models. The TBC model is one of the most useful approaches since it allows for straightforward computer implementation, and it is able to accurately reproduce the spectra generated by tungsten target x-ray tubes. However, as originally presented, the TBC model fails in situations where the determination of x-ray spectra produced by an arbitrary waveform or the calculation of realistic values of air kerma for a specific x-ray system is desired. In the present work, the authors revisited the assumptions used in the original TBC paper and proposed a complementary formulation that takes the waveform into account and represents the calculated spectra in a dosimetric quantity. The performance of the proposed model was evaluated by comparing values of air kerma and first and second half value layers from calculated and measured spectra obtained using different voltages and filtrations. For the output, the difference between experimental and calculated data was less than 5.2%. The first and second half value layers presented differences of 23.8% and 25.5% in the worst case. The model was more accurate in calculating these data for lower voltage values. Comparisons were also performed with spectral data measured using a CZT detector. A further test evaluated the model for a waveform distinct from a constant potential. In all cases the model results can be considered a good representation of the measured data.
The results from the modifications to the TBC model introduced in the present work reinforce the value of the TBC model for quantitative evaluations in radiation physics.
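Half value layers like those used to validate the model are defined by attenuating the spectrum-weighted kerma to one half (first HVL) and one quarter (first plus second HVL) of its unfiltered value. A minimal numerical sketch with a toy two-bin spectrum and assumed attenuation coefficients; beam hardening makes the second HVL larger than the first:

```python
import math

def transmitted_kerma(weights, mus, t):
    """Kerma-weighted transmission of a discrete spectrum through t cm of filter."""
    return sum(w * math.exp(-mu * t) for w, mu in zip(weights, mus))

def hvl(weights, mus, fraction=0.5, hi=10.0):
    """Filter thickness (cm) reducing transmitted kerma to `fraction` of the
    unattenuated value, found by bisection (transmission is monotone in t)."""
    k0 = transmitted_kerma(weights, mus, 0.0)
    lo_t, hi_t = 0.0, hi
    for _ in range(60):
        mid = 0.5 * (lo_t + hi_t)
        if transmitted_kerma(weights, mus, mid) / k0 > fraction:
            lo_t = mid
        else:
            hi_t = mid
    return 0.5 * (lo_t + hi_t)

# Toy two-bin "spectrum" with assumed aluminium attenuation coefficients
w  = [0.6, 0.4]   # kerma weights of the two energy bins (assumed)
mu = [1.5, 0.5]   # linear attenuation coefficients in Al, cm^-1 (assumed)
first  = hvl(w, mu, 0.5)
second = hvl(w, mu, 0.25) - first
print(first, second)
```

With a realistic simulated spectrum in place of the toy bins, this is the computation that produces the first and second HVL values compared against measurement in the paper.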
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... may use fuel economy data from tests conducted on these vehicle configuration(s) at high altitude to...) Calculate the city, highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests...
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2013 CFR
2013-07-01
... tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel economy for the... combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural...
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2012 CFR
2012-07-01
... tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel economy for the... combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural...
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
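The idea of maximizing the LOD score over several fixed parameter values can be illustrated in a stripped-down form, with the recombination fraction as the only genetic parameter. This is an illustration only, not the paper's simulation code: penetrance and the phenocopy rate, which the paper varies, are omitted here.

```python
import math

def lod(theta, recombinants, nonrecombinants):
    """LOD score for a single recombination fraction theta, given
    counts of recombinant and non-recombinant meioses."""
    r, n = recombinants, nonrecombinants
    like = (theta ** r) * ((1.0 - theta) ** n)
    null = 0.5 ** (r + n)
    return math.log10(like / null)

def max_lod(recombinants, nonrecombinants, thetas):
    """Maximum of the LOD scores over a fixed set of parameter values,
    the statistic that requires the higher critical value."""
    return max(lod(t, recombinants, nonrecombinants) for t in thetas)
```

With 2 recombinants out of 10 meioses, the grid maximum lands on theta = 0.2, the maximum-likelihood value.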
KDEP: A resource for calculating particle deposition in the respiratory tract
Klumpp, John A.; Bertelli, Luiz
2017-08-01
This study presents KDEP, the authors' open-source implementation of the ICRP lung deposition model. KDEP, which is freely available to the public, can be used to calculate lung deposition values under a variety of conditions using the ICRP methodology. The paper describes how KDEP implements the model and discusses some key points of the implementation. The published lung deposition values for intakes by workers were reproduced, and new deposition values were calculated for intakes by members of the public. KDEP can be obtained for free at github.com or by emailing the authors directly.
NASA Astrophysics Data System (ADS)
Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej
2017-04-01
The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and tooth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures have been applied for determining air paleotemperatures from the measured δ18O of phosphates. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least-squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, back-calculating temperatures, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than the causal models, with 12-65% lower mean absolute errors. Moreover, least-squares (LSM) regression is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. Direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016.
Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
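The difference between the two regression directions can be demonstrated on synthetic data: the transposed (predictive) fit regresses temperature directly on δ18O, while the inverted (causal) fit regresses δ18O on temperature and then solves for temperature. A sketch in Python, with the calibration slope, intercept and noise level invented for illustration; by construction the predictive fit can never have a larger in-sample mean squared error, which mirrors the paper's finding for mean absolute error.

```python
import random

def ols(x, y):
    """Ordinary least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Synthetic calibration data (all numbers invented for illustration).
random.seed(1)
T = [random.uniform(0.0, 25.0) for _ in range(200)]
d18O = [-12.0 + 0.5 * t + random.gauss(0.0, 1.0) for t in T]

# Transposed (predictive) fit: regress T directly on delta18O.
a_t, b_t = ols(d18O, T)
pred_trans = [a_t + b_t * d for d in d18O]

# Inverted (causal) fit: regress delta18O on T, then solve for T.
a_i, b_i = ols(T, d18O)
pred_inv = [(d - a_i) / b_i for d in d18O]

mse_trans = sum((p - t) ** 2 for p, t in zip(pred_trans, T)) / len(T)
mse_inv = sum((p - t) ** 2 for p, t in zip(pred_inv, T)) / len(T)
```

Because ordinary least squares minimizes the in-sample squared error among all linear predictors of T from δ18O, and the inverted estimate is one such predictor, mse_trans is guaranteed not to exceed mse_inv.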
Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin
2018-01-01
Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young's moduli (YM) were calculated for easy comparison. For diastole phase, begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa, p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). 2G models overcome limitations of the 1G models and may provide better material parameter estimation and stress/strain calculations. PMID:29399004
[The numerical Hatze-model: also qualified for calculations on children?].
Holley, Stephanie; Adamec, Jiri; Praxl, Norbert; Schönpflug, Markus; Graw, Matthias
2005-01-01
The aim of this study was to find out whether the Hatze model, which was specifically designed for adults, is also suitable for calculations on children. With this program it is possible to calculate various parameters of the human body. After data collection and analysis of the results according to Hatze, it becomes evident that the model provides good results only for the calculation of total body mass. For the body segments, there are significant under- and overestimations, and the same applies to the calculation of mean body density. There is indeed a significant gender dimorphism, indicating that girls have a higher fraction of body fat than boys; however, the values are far below those described in the literature. Due to the formula used, the values of the centres of gravity are linear and congruent on both sides of the body. Interpretation of the results is difficult, as there are no valid reference values. Furthermore, the program is not able to take the characteristic shapes and proportions of children into account; for this reason, 88% of the children are classified as either pregnant or obese. In summary, the study shows that the present model should not be used for calculations on children, and human models have to be designed specifically for children.
Mathematical modelling of risk reduction in reinsurance
NASA Astrophysics Data System (ADS)
Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.
2017-01-01
The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of yield values falling below a certain level. The uncertainty in the return values arises from the use of expert evaluations and preliminary calculations, which yield expected return values and the corresponding risk levels. The proposed method allows the implementation of computationally simple schemes and algorithms for numerical calculation of the structure of the efficient portfolios of reinsurance contracts of a given insurance company.
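The mean-risk trade-off described above can be illustrated with the classical minimum-variance formulation. This is a simplified stand-in for the reinsurance model: expected returns and the covariance of yields are assumed known, and contract weights may be negative.

```python
import numpy as np

def efficient_weights(mu, cov, target_return):
    """Minimum-variance weights achieving a target expected return,
    with weights summing to 1 (classical Lagrangian solution).
    mu: expected returns; cov: covariance matrix of returns."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    det = a * c - b * b
    lam = (c - b * target_return) / det
    gam = (a * target_return - b) / det
    return inv @ (lam * ones + gam * mu)
```

Solving for a target return between the two contracts' expected returns yields weights that satisfy both constraints exactly.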
Wen, Xin-Xin; Xu, Chao; Zong, Chun-Lin; Feng, Ya-Fei; Ma, Xiang-Yu; Wang, Fa-Qi; Yan, Ya-Bo; Lei, Wei
2016-07-01
Micro-finite element (μFE) models have been widely used to assess the biomechanical properties of trabecular bone. How to choose a proper sample volume of trabecular bone, one that predicts the real biomechanical properties while reducing calculation time, is an open question. Therefore, the purpose of this study was to investigate the relationship between different sample volumes and the apparent elastic modulus (E) calculated from μFE models. Five human lumbar vertebral bodies (L1-L5) were scanned by micro-CT. Cubic concentric samples of different lengths were constructed as the experimental groups, and the largest possible volumes of interest (VOI) were constructed as the control group. A direct voxel-to-element approach was used to generate μFE models, and steel layers were added to the superior and inferior surfaces to mimic axial compression tests. A 1% axial strain was prescribed to the top surface of the model to obtain the E values. ANOVA tests were performed to compare the E values from the different VOIs against those of the control group. Nonlinear function curve fitting was performed to study the relationship between volumes and E values. Larger cubic VOIs included more nodes and elements, and required more CPU time for calculation. E values showed a descending tendency as the length of the cubic VOI decreased. When the volume of the VOI was smaller than 7.34 mm³, E values were significantly different from those of the control group. The fitted function showed that E values approached an asymptotic value with increasing VOI length. Our study demonstrated that the apparent elastic modulus calculated from μFE models is affected by the sample volume. A sample volume no smaller than 7.34 mm³ was efficient and time-saving for the calculation of E. Copyright © 2016 Elsevier Ltd. All rights reserved.
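The asymptotic relationship between VOI size and apparent modulus can be captured with a simple convergence fit. Assuming, purely for illustration, that E(L) = E_inf - c/L (the paper's actual fit function is not specified here, and the data below are invented to mimic the reported saturating trend), the asymptote follows from a linear regression in 1/L:

```python
import numpy as np

# Hypothetical apparent moduli E (MPa) from cubic VOIs of side L (mm).
L = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
E = np.array([520.0, 610.0, 655.0, 680.0, 700.0, 710.0, 718.0])

# E(L) = E_inf - c / L is linear in 1/L, so ordinary least squares
# on x = 1/L recovers both the asymptote and the decay term.
x = 1.0 / L
slope, e_inf = np.polyfit(x, E, 1)   # slope is -c, intercept is E_inf
c = -slope
```

The fitted intercept estimates the modulus an arbitrarily large VOI would reach, which is why it exceeds every measured value.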
Calculations of Hubbard U from first-principles
NASA Astrophysics Data System (ADS)
Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.
2006-09-01
The Hubbard U of the 3d transition metal series as well as SrVO3 , YTiO3 , Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but for some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparison with the constrained local density approximation (LDA) method and found some discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results; the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
A battery power model for the EUVE spacecraft
NASA Technical Reports Server (NTRS)
Yen, Wen L.; Littlefield, Ronald G.; Mclean, David R.; Tuchman, Alan; Broseghini, Todd A.; Page, Brenda J.
1993-01-01
This paper describes a battery power model that has been developed to simulate and predict the behavior of the 50 ampere-hour nickel-cadmium battery that supports the Extreme Ultraviolet Explorer (EUVE) spacecraft in its low Earth orbit. First, for given orbit, attitude, solar array panel and spacecraft load data, the model calculates minute-by-minute values of the net power available for charging the battery over a user-specified time period (usually about two weeks). Next, the model is used to calculate minute-by-minute values of the battery voltage, current and state of charge for the time period. The model's calculations are explained for its three phases: sunrise charging phase, constant voltage phase, and discharge phase. A comparison of predicted model values for voltage, current and state of charge with telemetry data for a complete charge-discharge cycle shows good correlation. This C-based computer model will be used by the EUVE Flight Operations Team for various 'what-if' scheduling analyses.
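The minute-by-minute bookkeeping such a model performs can be sketched as a simple energy balance. The charge efficiency, bus voltage and power levels below are illustrative assumptions, not EUVE flight values, and the real model's three-phase charge logic is reduced to a charge cap:

```python
def simulate_orbit(net_power_w, capacity_ah=50.0, bus_voltage=28.0,
                   soc0=1.0, charge_eff=0.85):
    """Minute-by-minute battery state of charge over one orbit.

    net_power_w holds per-minute net power: positive while the solar
    array charges the battery, negative in eclipse.
    """
    capacity_wh = capacity_ah * bus_voltage
    soc, history = soc0, []
    for p in net_power_w:
        dwh = p / 60.0                    # one-minute energy step (Wh)
        soc += (charge_eff * dwh if dwh > 0 else dwh) / capacity_wh
        soc = min(soc, 1.0)               # cannot exceed full charge
        history.append(soc)
    return history
```

Running it over an eclipse followed by a sunlit arc shows the expected discharge dip and recovery toward full charge.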
40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Sample calculation of fuel economy... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Calculating Fuel Economy and Carbon-Related Exhaust Emission Values for 1977 and Later Model...
An Improved Shock Model for Bare and Covered Explosives
NASA Astrophysics Data System (ADS)
Scholtes, Gert; Bouma, Richard
2017-06-01
TNO developed a toolbox to estimate the probability of a violent event on a ship or other platform when the munition bunker is hit by, e.g., a bullet or a fragment from a missile attack. To obtain proper statistical output, several million calculations are needed for a reliable estimate. Because millions of different scenarios have to be calculated, hydrocode calculations cannot be used for this type of application; a fast yet accurate engineering solution is needed. At this moment the Haskins and Cook model is used for this purpose. To obtain a better estimate for covered explosives and munitions, TNO has developed a new model that combines the high-pressure shock wave model of Haskins and Cook with the expanding shock wave model of Green. This combined model gives a better fit to the experimental values for explosive response calculations, using the same critical energy fluence values for covered as well as bare explosives. In this paper the theory is explained and results of the calculations for several bare and covered explosives are presented and compared with experimental values from the literature for Composition B, Composition B-3 and PBX-9404.
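The critical-energy-fluence idea underlying such engineering models reduces to a go/no-go check per scenario, which is what makes millions of evaluations cheap. A minimal sketch, where the P²t/(ρ₀U) fluence form (Walker-Wasley) and all parameter values are assumptions for illustration, not TNO's exact formulation:

```python
def energy_fluence(p_pa, duration_s, rho0, shock_speed):
    """Energy fluence P^2 * t / (rho0 * U) delivered to an explosive
    by a square pressure pulse of amplitude P and duration t."""
    return p_pa ** 2 * duration_s / (rho0 * shock_speed)

def reacts(p_pa, duration_s, rho0, shock_speed, e_crit):
    """True if the pulse exceeds the critical energy fluence."""
    return energy_fluence(p_pa, duration_s, rho0, shock_speed) >= e_crit
```

A stronger pulse of the same duration delivers quadratically more fluence, so the criterion separates the two cases cleanly.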
LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W
2008-01-01
Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pronounced L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density through the adiabatic invariants, including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster than the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting of inner magnetospheric space weather are limited partly by the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy.
Reanalysis applications, which reconstruct past conditions in the inner magnetosphere, are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. With a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data becomes feasible.
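The surrogate-model idea is independent of the neural network details: fit a cheap approximator to precomputed outputs of the expensive model and accept a marginal error. A toy Python sketch with a polynomial surrogate standing in for the trained network; the "expensive" function here is invented, whereas real L* values come from field-line integration:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a slow drift-shell integration (toy function)."""
    return 6.6 + 0.4 * np.sin(x) + 0.05 * x ** 2

# Fit a cheap polynomial surrogate to precomputed training samples.
x_train = np.linspace(-3.0, 3.0, 200)
coeffs = np.polyfit(x_train, expensive_model(x_train), deg=9)
surrogate = np.poly1d(coeffs)

# Validate on points not used for training.
x_test = np.linspace(-2.5, 2.5, 50)
rel_err = np.max(np.abs(surrogate(x_test) - expensive_model(x_test))
                 / expensive_model(x_test))
```

Evaluating the polynomial costs a handful of multiplications per point, mirroring the several-orders-of-magnitude speedup the paper reports, while the relative error stays well under 1%.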
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions if necessary. Prior, or direct, information on estimated parameters can also be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values.
UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and Fortran 90, which efficiently performs numerical calculations.
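The core of the regression UCODE performs, a Gauss-Newton iteration with finite-difference sensitivities, can be sketched in a few lines. This is a bare-bones illustration without UCODE's damping, convergence tests, or prior-information terms:

```python
import numpy as np

def gauss_newton(model, p0, obs, weights, steps=20, fd=1e-6):
    """Weighted nonlinear least squares by Gauss-Newton.

    model maps a parameter vector to simulated equivalents; the
    sensitivity (Jacobian) matrix is built by forward differences.
    """
    p = np.asarray(p0, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    for _ in range(steps):
        r = w * (obs - model(p))
        J = np.empty((len(r), len(p)))    # weighted sensitivities
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = fd * max(1.0, abs(p[j]))
            J[:, j] = w * (model(p + dp) - model(p)) / dp[j]
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p
```

For a model that is linear in its parameters the iteration recovers the exact least-squares solution in a single step, which makes a convenient check.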
Tree value system: description and assumptions.
D.G. Briggs
1989-01-01
TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...
The Lα (λ = 121.6 nm) solar plage contrasts calculations.
NASA Astrophysics Data System (ADS)
Bruevich, E. A.
1991-06-01
The results of calculations of Lα plage contrasts based on experimental data are presented. A three-component model of the Lα solar flux, using "Prognoz-10" and SME daily smoothed values of the Lα solar flux, is applied. The values of contrast are discussed and compared with experimental values based on "Skylab" data.
NASA Astrophysics Data System (ADS)
Krissinel, Boris
2018-03-01
The paper reports the results of calculations of the center-to-limb intensity of optically thin line emission in the EUV and FUV wavelength ranges. The calculations employ a multicomponent model for the quiescent solar corona. The model includes a collection of loops of various sizes, spicules, and free (inter-loop) matter. Theoretical intensity values are found from the probabilities of encountering parts of loops along the line of sight, relative to the probability of absence of the other coronal components. The model uses 12 loops with sizes from 3200 to 210000 km, with different values of the rarefaction index and of the pressure at the loop base and apex. The temperature at loop apices is 1 400 000 K. The calculations utilize the CHIANTI database. The comparison between theoretical and observed emission intensity values for coronal and transition region lines obtained by the SUMER, CDS, and EIS telescopes shows quite satisfactory agreement, particularly for the solar disk center. For the data acquired above the limb, the larger discrepancies found in the analysis are attributed to errors in the EIS measurements.
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
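The "first singular value that approaches zero" rule can be automated once "approaches zero" is given a numeric meaning. A small sketch, where the relative threshold is an assumption (the paper reads the value off the plot by inspection):

```python
import numpy as np

def pick_regularization(singular_values, drop=1e-3):
    """Choose a regularization parameter from a singular-value plot:
    the first singular value, scanning from largest to smallest, that
    falls below `drop` times the largest one."""
    s = np.sort(np.asarray(singular_values, dtype=float))[::-1]
    small = s[s < drop * s[0]]
    return small[0] if small.size else s[-1]
```

For a spectrum with a clear gap, the function returns the first value past the gap regardless of input ordering.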
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal to metal and metal to rock friction coefficient for analysis of waste package and drip shield damage to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations.
The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the Total System Performance Assessment for the License Application (TSPA-LA). The results from this scientific analysis also address project requirements related to parameter uncertainty, as specified in the acceptance criteria in ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]). This document was prepared under the direction of ''Technical Work Plan for: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 170528]), which directed the work identified in work package ARTM05. This document was prepared under procedure AP-SIII.9Q, ''Scientific Analyses''. There are no specific known limitations to this analysis.
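The index-sampling step described above can be sketched generically: draw, for each realization, the indices that select a ground-motion time history, a fracture pattern, and a rock-properties category. Uniform sampling is an assumption here; the actual analysis used GoldSim with project-specific distribution types and ranges.

```python
import random

def sample_indices(n_realizations, n_time_histories,
                   n_fracture_patterns, n_property_categories, seed=42):
    """Draw (time-history, fracture-pattern, properties-category)
    index triples for each realization, reproducibly via the seed.
    The indices are translated into actual parameter values by the
    downstream analyses."""
    rng = random.Random(seed)
    return [(rng.randrange(n_time_histories),
             rng.randrange(n_fracture_patterns),
             rng.randrange(n_property_categories))
            for _ in range(n_realizations)]
```

Every draw stays within the valid index range for its category, so the downstream lookup can never fail.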
VHDL-AMS modelling and simulation of a planar electrostatic micromotor
NASA Astrophysics Data System (ADS)
Endemaño, A.; Fourniols, J. Y.; Camon, H.; Marchese, A.; Muratet, S.; Bony, F.; Dunnigan, M.; Desmulliez, M. P. Y.; Overton, G.
2003-09-01
System-level simulation results of a planar electrostatic micromotor, based on analytical models of the static and dynamic torque behaviours, are presented. A planar variable capacitance (VC) electrostatic micromotor designed, fabricated and tested at LAAS (Toulouse) in 1995 is simulated using the high-level language VHDL-AMS (VHSIC (very high speed integrated circuits) hardware description language-analog mixed signal). The analytical torque model is obtained by first calculating the overlaps and capacitances between different electrodes based on a conformal mapping transformation. Capacitance values on the order of 10^-16 F and torque values on the order of 10^-11 N m have been calculated, in agreement with previous measurements and simulations for this type of motor. A dynamic model has been developed for the motor by calculating the inertia coefficient and estimating the friction coefficient based on values calculated previously for other similar devices. Starting voltage results obtained from experimental measurement are in good agreement with our proposed simulation model. Simulation results of starting voltage values, step response, switching response and continuous operation of the micromotor, based on the dynamic model of the torque, are also presented. Four VHDL-AMS blocks were created, validated and simulated for power supply, excitation control, micromotor torque creation and micromotor dynamics. These blocks can be considered as the initial phase towards the creation of intellectual property (IP) blocks for microsystems in general and electrostatic micromotors in particular.
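The link between the capacitance model and the torque model is the standard variable-capacitance relation T = ½V²·dC/dθ. A Python sketch with an invented angular capacitance profile (only its 10⁻¹⁶ F order of magnitude is taken from the paper; the real profile comes from the conformal mapping):

```python
import math

def electrostatic_torque(theta, voltage, capacitance_fn, dtheta=1e-6):
    """Torque of a variable-capacitance motor, T = 0.5 * V^2 * dC/dtheta,
    with the derivative taken by central difference."""
    dc = (capacitance_fn(theta + dtheta)
          - capacitance_fn(theta - dtheta)) / (2.0 * dtheta)
    return 0.5 * voltage ** 2 * dc

# Illustrative capacitance profile; the angular shape is invented.
def cap(theta):
    return 1e-16 * (1.0 + 0.5 * math.cos(3.0 * theta))

torque = electrostatic_torque(0.4, 100.0, cap)
```

The numerical derivative matches the analytic one closely, and the resulting torque sits below the 10⁻¹¹ N m scale reported for this class of motor.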
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostou, T; Papadimitroulas, P; Kagadis, GC
2014-06-15
Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed in GATE to determine the dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, of 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models introduced a difference of 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target organs were calculated for each isotope. The source organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0-30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies.
The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in total mass), and thus accurate definition of the organ mass is a crucial parameter for self-absorbed S-value calculations. Our goal is to extend the study to accurate estimations in small animal imaging, as it is known that there is large variety in the anatomy of the organs.
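The strong mass dependence reported above is easiest to see in the limiting case of nonpenetrating emissions, where the self-absorbed S value is simply emitted energy over target mass. This is a first-order approximation only; real S values, like those tabulated in the study, also include photon cross-dose between organs.

```python
def self_dose_s_value(energy_per_decay_mev, organ_mass_kg):
    """Self-absorbed S value (Gy per decay) for nonpenetrating
    emissions, assuming all emitted energy is absorbed locally:
    S = E / m."""
    mev_to_joule = 1.602e-13
    return energy_per_decay_mev * mev_to_joule / organ_mass_kg

# An ~18% smaller organ raises the self-dose S value by ~22%:
s_big = self_dose_s_value(0.1, 1.00e-3)
s_small = self_dose_s_value(0.1, 0.82e-3)
```

The inverse-mass scaling means that the ~18% organ-mass differences between the two phantoms translate directly into S-value differences of the magnitude the abstract reports.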
40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations fo...
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of vehicle-specific 5-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations fo...
Polarizability calculations on water, hydrogen, oxygen, and carbon dioxide
NASA Technical Reports Server (NTRS)
Nir, S.; Adams, S.; Rein, R.
1973-01-01
A semiclassical model of damped oscillators is used as a basis for the calculation of the dispersion of the refractive index, polarizability, and dielectric permeability in water, hydrogen, and oxygen in liquid and gaseous states, and in gaseous carbon dioxide. The absorption coefficient and the imaginary part of the refractive index are also calculated at corresponding wavelengths. A good agreement is obtained between the observed and calculated values of refractive indices, and between those of absorption coefficients in the region of absorption bands. The calculated values of oscillator strengths and damping factors are also discussed. The value of the polarizability of liquid water was about 2.8 times that of previous calculations.
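The damped-oscillator (Lorentz) dispersion model described above can be sketched numerically. A single-oscillator example; the plasma frequency, resonance frequency, strength, and damping below are made up for illustration and are not fitted to water, hydrogen, oxygen, or CO2:

```python
import numpy as np

# Lorentz damped-oscillator dispersion:
#   eps(w) = 1 + f * wp^2 / (w0^2 - w^2 - i*g*w),  n + i*kappa = sqrt(eps)
# Single oscillator with illustrative (not fitted) parameters.
wp, w0 = 2.0e16, 1.5e16     # plasma and resonance angular frequencies, rad/s
f, g   = 1.0, 1.0e14        # oscillator strength and damping factor

w = np.linspace(0.5e16, 2.5e16, 1000)
eps = 1.0 + f * wp**2 / (w0**2 - w**2 - 1j * g * w)
n_complex = np.sqrt(eps)
n, kappa = n_complex.real, n_complex.imag   # refractive index, extinction coefficient

c = 2.998e8                                 # speed of light, m/s
alpha = 2.0 * w * kappa / c                 # absorption coefficient, 1/m
```

As in the paper, the imaginary part (and hence the absorption coefficient) peaks in the region of the absorption band, near the resonance frequency.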
Influence of structural parameters of deep groove ball bearings on vibration
NASA Astrophysics Data System (ADS)
Yu, Guangwei; Wu, Rui; Xia, Wei
2018-04-01
Taking the 6201 bearing as the research object, a four-degree-of-freedom dynamic model is established and solved in MATLAB with the Runge-Kutta method to obtain vibration characteristics of deep groove ball bearings such as displacement, velocity and acceleration. Comparing the theoretical frequency of the rolling elements passing the outer ring with the simulated value from the model shows good consistency, and the measured values from experiments also agree with the simulated values. Using the mathematical model, the effect of structural parameters on vibration is obtained. The method is shown to be feasible, and the results can serve as references for the design, manufacturing and testing of deep groove ball bearings.
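The theoretical outer-ring pass frequency mentioned above follows from standard bearing kinematics. A sketch with approximate 6201-type geometry; the ball count, diameters, and shaft speed below are assumptions for illustration, not the paper's values:

```python
import math

# Theoretical outer-race ball-pass frequency (BPFO) of a ball bearing:
#   f_BPFO = (Nb / 2) * f_shaft * (1 - (d / D) * cos(alpha))
# Geometry below is approximate 6201-type data, used for illustration only.
Nb      = 7          # number of balls (assumed)
d       = 5.95e-3    # ball diameter, m (assumed)
D       = 22.45e-3   # pitch diameter, m (assumed)
alpha   = 0.0        # contact angle, rad (nominally zero for a deep groove bearing)
f_shaft = 30.0       # shaft rotation frequency, Hz (1800 rpm, assumed)

f_bpfo = (Nb / 2.0) * f_shaft * (1.0 - (d / D) * math.cos(alpha))
print(f"BPFO = {f_bpfo:.1f} Hz")
```

Comparing this kinematic value against the frequency content of the simulated acceleration is the consistency check the abstract describes.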
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inverse simulation has been widely applied in groundwater modeling. Compared with traditional forward modeling, inverse modeling leaves more room for study. Zonation and cell-by-cell inversion are the conventional methods; the pilot-point method lies between them. Traditional inverse modeling often uses software to divide the model into several zones so that only a few parameters need to be inverted; however, the resulting distribution is usually too simple, and the simulation deviates. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it greatly increases computational complexity and requires a large quantity of survey data for geostatistical simulation of the area. The pilot-point method instead distributes a set of points throughout the model domains for parameter estimation; property values are assigned to model cells by kriging, preserving parameter heterogeneity within geological units. This reduces the geostatistical data requirements of the simulation area and bridges the gap between the two methods. Pilot points can save calculation time and improve the goodness of fit, and they reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, pilot points are applied to a field whose structural heterogeneity and hydraulic parameters were unknown, and the inversion results of the zonation and pilot-point methods are compared to explore the characteristics of pilot points in groundwater inverse modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6.
Kriging is defined to obtain the field values (hydraulic conductivity) over the model domain from their values at measurement and pilot-point locations; pilot points are then assigned to the interpolated field, which has been divided into four zones, and a range of disturbance values is added to the inversion targets to calculate hydraulic conductivity. Third, through inversion calculation (PEST), the interpolated field is adjusted to minimize an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inverse modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot-point method are more realistic: better fitting of parameters and more stable numerical simulation (stable residual distribution). Compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, guaranteeing the relative independence and authenticity of the parameter estimates. However, it costs more calculation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
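The kriging step that spreads pilot-point values over model cells can be sketched as ordinary kriging with an exponential covariance. The pilot-point locations, log-conductivity values, and variogram parameters below are invented for illustration:

```python
import numpy as np

# Minimal ordinary-kriging sketch: interpolate log10(K) from a few pilot points
# to any model cell. Locations, values, and the variogram are made up.

def cov(h, sill=1.0, rng=500.0):
    """Exponential covariance model with practical range `rng` (m)."""
    return sill * np.exp(-3.0 * h / rng)

pts  = np.array([[100.0, 100.0], [400.0, 150.0], [250.0, 400.0]])  # pilot points, m
logk = np.array([-4.0, -5.0, -4.5])                                # log10 K at pilot points

def krige(x, y):
    d_pp = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # point-to-point
    d_p0 = np.linalg.norm(pts - np.array([x, y]), axis=1)             # point-to-target
    n = len(pts)
    A = np.ones((n + 1, n + 1))          # kriging system with Lagrange multiplier row
    A[:n, :n] = cov(d_pp)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(d_p0)
    w = np.linalg.solve(A, b)[:n]        # ordinary-kriging weights (sum to 1)
    return float(w @ logk)
```

Ordinary kriging honors the pilot-point values exactly at their locations, which is why PEST can treat the pilot-point values themselves as the adjustable parameters.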
Presents the results of the degradation kinetics project and describes a general approach for calculating and selecting representative half-life values from soil and aquatic transformation studies for risk assessment and exposure modeling purposes.
40 CFR 600.208-77 - Sample calculation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.208-77 Sample calculation...
A rough set-based measurement model study on high-speed railway safety operation.
Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun
2018-01-01
To address the safety problems of high-speed railway operation and management, a new method is urgently needed, built on rough set theory and uncertainty measurement theory. Such a method should carefully consider every factor of high-speed railway operation that underlies the measurement indexes of safe operation. After analyzing in detail the factors that influence high-speed railway operation safety, a rough measurement model is constructed to describe the operation process. On this basis, the paper organizes the safety influence factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment and environment, and provides a reasonable and effective theoretical method for the multiple-attribute measurement of safety in high-speed railway operation. Analyzing the operation data of 10 pivotal railway lines in China, the paper calculates the operation safety value with both the rough set-based measurement model and a value function model (a model for calculating the safety value). The results show that the safety-value curve obtained with the proposed method has smaller error and greater stability than that of the value function method, verifying its feasibility and effectiveness.
NASA Astrophysics Data System (ADS)
Widyawan, A.; Pasaribu, U. S.; Henintyas, Permana, D.
2015-12-01
Nowadays some firms, including insurers, consider customer-centric services better than product-centric ones in terms of marketing. Insurance firms try to attract as many new customers as possible while retaining existing ones, which makes the Customer Lifetime Value (CLV) very important. CLV can place customers into different segments and expresses the present value of a firm's relationship with its customers. An insurance customer's decision depends on the most recent service received: if the service is bad now, the customer will not renew the contract even though the service was very good earlier. For this situation, a suitable mathematical model for describing customer relationships and calculating their lifetime value is the Markov chain; an additional advantage of Markov chain modeling is its high degree of flexibility. In 2000, Pfeifer and Carraway showed that Markov chain modeling can be applied to the customer-retention situation, which requires only two states: present customers and former ones. This paper calculates customer lifetime value in an insurance firm under two distinct interest-rate assumptions: a constant interest rate and uniformly distributed interest rates. The results show that loyal customers and customers who increase their contract value have the highest CLV.
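The two-state retention chain from Pfeifer and Carraway, with a constant interest rate, can be sketched directly. The retention probability, margin, discount rate, and horizon below are illustrative numbers, not the paper's data:

```python
import numpy as np

# Two-state customer-retention Markov chain (after Pfeifer & Carraway, 2000):
# states = [current customer, former customer]. All numbers are illustrative.
p_retain = 0.8                            # probability a customer renews each period
margin   = 100.0                          # net contribution while a customer
d        = 0.1                            # constant per-period interest (discount) rate
horizon  = 50                             # periods to sum

P = np.array([[p_retain, 1 - p_retain],   # transition matrix
              [0.0,      1.0]])           # 'former' is absorbing in this simple version
R = np.array([margin, 0.0])               # reward per state

state = np.array([1.0, 0.0])              # start as a current customer
clv = 0.0
for t in range(horizon):
    clv += (state @ R) / (1 + d) ** t     # discounted expected reward at period t
    state = state @ P

print(round(clv, 2))
```

The uniform-interest-rate variant in the paper replaces the fixed discount factor with an expectation over the rate distribution.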
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angelini, G.; Lanza, E.; Rozza Dionigi, A.
1983-05-01
The measurement of cerebral blood flow (CBF) by extracranial detection of the radioactivity of 133Xe injected into an internal carotid artery has proved of considerable value for investigating cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the 133Xe clearance curves, including exponential analysis (two-component model), the initial-slope method, and the stochastic method. The different methods of curve analysis were compared to evaluate their fit to the theoretical model. The initial-slope and stochastic methods, compared with the biexponential model, underestimate CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing the resulting CBF values with those obtained from the whole curve; CBF values calculated with the shortened procedure are overestimated by 17%. Although the ''10 min'' CBF values correlate with the CBF calculated from the whole curve, they are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the corresponding compartment, suggesting that these two parameters correspond to different biological entities.
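The initial-slope and stochastic (height-over-area) estimates compared above can be illustrated on a synthetic biexponential clearance curve. The partition coefficient and both rate constants below are invented, not the paper's data:

```python
import numpy as np

# Synthetic 133Xe clearance curve C(t) = A1*exp(-k1*t) + A2*exp(-k2*t), with
# two classical flow estimates. All parameter values are illustrative only.
lam    = 1.0                   # tissue/blood partition coefficient (assumed)
A1, k1 = 0.7, 0.8              # fast (grey-matter-like) component, 1/min
A2, k2 = 0.3, 0.15             # slow (white-matter-like) component, 1/min

t = np.linspace(0.0, 10.0, 601)                  # 10-min recording, min
C = A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

# Initial-slope estimate: CBF ~ 100 * lam * (-d ln C / dt) at t = 0, ml/100 g/min
slope0 = (np.log(C[1]) - np.log(C[0])) / (t[1] - t[0])
cbf_initial_slope = -100.0 * lam * slope0

# Stochastic (height/area) estimate over the recorded window
area = float(np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(t)))   # trapezoid rule
cbf_stochastic = 100.0 * lam * C[0] / area
```

Truncating the recording at 10 min shrinks the area under the curve, which is one way a shortened procedure biases the stochastic estimate.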
Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buck, Edgar C.; Jerden, James L.; Ebert, William L.
The primary purpose of this report is to describe the strategy for coupling three process-level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included, but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations, and then represent the steady-state [H2O2] in terms of an “effective instantaneous or conditional” generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as the nature of the fuel surface changes. The Gcond values will be calculated with the RM either after several iterations or when the concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with the RM indicate that significant changes in the G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate used fuel degradation rates for a wide range of disposal environments, providing the source-term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM).
The RM calculates the concentration of species generated at any specific time and location from the surface of the fuel. Several options being considered for coupling the RM and MPM are described in the report; the options have different advantages and disadvantages based on the extent of coding required and the ease of use of the final product.
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2011 CFR
2011-07-01
... vehicle configuration 5-cycle fuel economy values as determined in § 600.207-08 for low-altitude tests. (1... economy data from tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel... city and highway fuel economy values from the tests performed using gasoline or diesel test fuel. (ii...
Code of Federal Regulations, 2011 CFR
2011-07-01
... emission data from tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel... values from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as..., highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed...
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. 
Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. 
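The modified Gauss-Newton iteration with perturbation sensitivities described above can be sketched on a toy problem. The exponential "process model", starting values, and damping scheme below are stand-ins chosen for illustration, not UCODE_2005 itself:

```python
import numpy as np

# Weighted Gauss-Newton with forward-difference (perturbation) sensitivities,
# minimizing sum_i w_i * (obs_i - sim_i(p))^2. The model is a toy stand-in
# for an external process model run through its input/output files.

def model(p, x):
    return p[0] * np.exp(-p[1] * x)            # toy "process model"

x   = np.linspace(0.0, 4.0, 20)
obs = model(np.array([2.0, 0.5]), x)           # synthetic, noise-free observations
w   = np.ones_like(obs)                        # observation weights

def sse(q):
    return float(np.sum(w * (obs - model(q, x)) ** 2))

p = np.array([1.0, 1.0])                       # starting parameter values
for _ in range(25):
    r = obs - model(p, x)                      # residual vector
    J = np.empty((x.size, p.size))             # forward-difference Jacobian
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6 * max(abs(p[j]), 1.0)     # perturbation size
        J[:, j] = (model(p + dp, x) - model(p, x)) / dp[j]
    # normal equations: (J^T W J) delta = J^T W r
    delta = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
    step = 1.0                                 # crude damping by step halving
    while sse(p + step * delta) > sse(p) and step > 1e-6:
        step *= 0.5
    p = p + step * delta
```

The step-halving line is a simplistic stand-in for the Marquardt-style modifications real codes use; the forward-difference Jacobian mirrors the "less accurate perturbation technique" the report mentions.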
The programs con
Sharma, Ity; Kaminski, George A.
2012-01-01
We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192
Applicability of ASHRAE clear-sky model based on solar-radiation measurements in Saudi Arabia
NASA Astrophysics Data System (ADS)
Abouhashish, Mohamed
2017-06-01
The constants of the ASHRAE clear-sky model predict high values of the hourly beam radiation and very low values of the hourly diffuse radiation when used for locations in Saudi Arabia. Eight measurement stations at different locations are used to derive new clearness factors for the model. The procedure compares monthly direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) between the measured and calculated values. Two factors, CNb and CNd, are obtained for every month to adjust the calculated clear-sky radiation for the effects of local weather conditions. A simple and practical simulation model for solar geometry is built on the Microsoft Visual Basic platform; it simulates the solar angles and radiation components according to the ASHRAE model. Comparison of the calculated data with the first year of measurements indicates that the attenuation of site clearness varies across the locations and from month to month, with the clearest skies in the northern and northwestern parts of the Kingdom, especially during the summer months.
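The adjusted clear-sky relations described above can be sketched as follows. A, B, and C are representative ASHRAE monthly constants (shown here for June), while the CNb/CNd clearness factors and the solar altitude are made-up illustrative values, not the study's fitted results:

```python
import math

# ASHRAE clear-sky model with separate beam and diffuse clearness factors:
#   DNI = CNb * A * exp(-B / sin(beta)),   DHI = CNd * C * DNI
# A, B, C: representative ASHRAE June constants; CNb, CNd: hypothetical.
A, B, C  = 1088.0, 0.205, 0.134   # W/m^2, dimensionless, dimensionless
CNb, CNd = 0.95, 1.30             # hypothetical monthly clearness factors
beta     = math.radians(60.0)     # solar altitude angle (illustrative)

dni = CNb * A * math.exp(-B / math.sin(beta))   # adjusted direct normal irradiance
dhi = CNd * C * dni                             # adjusted diffuse horizontal irradiance
```

Fitting CNb below 1 and CNd above 1 per month is exactly how the study corrects the model's over-predicted beam and under-predicted diffuse components.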
FDTD calculations of SAR for child voxel models in different postures between 10 MHz and 3 GHz.
Findlay, R P; Lee, A-K; Dimbylow, P J
2009-08-01
Calculations of specific energy absorption rate (SAR) have been performed on the rescaled NORMAN 7-y-old voxel model and the Electronics and Telecommunications Research Institute (ETRI) child 7-y-old voxel model in the standing arms-down, arms-up and sitting postures. These calculations were for plane-wave exposure under isolated and grounded conditions between 10 MHz and 3 GHz. There was little difference at each resonant frequency between the whole-body averaged SAR values calculated for the NORMAN and ETRI 7-y-old models in each of the postures studied; however, compared with the arms-down posture, raising the arms increased the SAR by up to 25%. The electric field values required to produce the International Commission on Non-Ionizing Radiation Protection and Institute of Electrical and Electronics Engineers public basic restrictions were calculated and compared with the reference levels for the different child models and postures. These showed that, under certain worst-case exposure conditions, the reference levels may not be conservative.
NASA Technical Reports Server (NTRS)
Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.
1995-01-01
The quantum yield of photosynthesis, phi (mol C/mol photons), was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways of calculating the photons absorbed (AP) by phytoplankton, needed to compute phi, are presented. The first is based on a simple nonspectral model; the second on a nonlinear regression using measured PAR values with depth; and the third is derived from remote sensing measurements. We show that the phi results from the nonlinear regression method and from remote sensing agree well with each other and are consistent with values reported in other studies. In deep waters, however, the simple nonspectral model may yield quantum-yield values much higher than theoretically possible.
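The simple nonspectral route to phi can be sketched directly: attenuate surface PAR with depth, estimate absorbed photons from a phytoplankton absorption coefficient, and divide carbon fixation by absorbed photons. Every number below is illustrative, not MLML data:

```python
import math

# Nonspectral quantum-yield sketch: phi = carbon fixed / photons absorbed,
# with AP(z) ~ a_phy * PAR(z) and PAR(z) = PAR(0) * exp(-Kd * z).
# All values are illustrative assumptions, not cruise data.
par0  = 1500.0e-6    # surface PAR, mol photons m^-2 s^-1 (assumed)
kd    = 0.12         # diffuse attenuation coefficient, 1/m (assumed)
a_phy = 0.02         # phytoplankton absorption coefficient, 1/m (assumed)
z     = 20.0         # incubation depth, m (assumed)
prod  = 2.0e-9       # carbon fixation rate, mol C m^-3 s^-1 (assumed)

par_z = par0 * math.exp(-kd * z)     # nonspectral PAR at depth
ap    = a_phy * par_z                # photons absorbed, mol photons m^-3 s^-1
phi   = prod / ap                    # quantum yield, mol C / mol photons
```

Because the nonspectral model ignores the spectral shift of underwater light with depth, AP is misestimated at depth, which is how phi can exceed its theoretical maximum of 0.125 mol C/mol photons there.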
EPR, optical and superposition model study of Mn2+ doped L+ glutamic acid
NASA Astrophysics Data System (ADS)
Kripal, Ram; Singh, Manju
2015-12-01
An electron paramagnetic resonance (EPR) study of a Mn2+ doped L+ glutamic acid single crystal is performed at room temperature. Four interstitial sites are observed, and the spin Hamiltonian parameters are calculated from a large number of resonant lines at various angular positions of the external magnetic field. An optical absorption study is also carried out at room temperature. The energy values for the different orbital levels are calculated, and the observed bands are assigned as transitions from the 6A1g(s) ground state to various excited states. From these assigned bands, the Racah inter-electronic repulsion parameters B = 869 cm-1 and C = 2080 cm-1 and the cubic crystal-field splitting parameter Dq = 730 cm-1 are calculated. The zero-field splitting (ZFS) parameters D and E are calculated by perturbation formulae, using crystal-field parameters obtained from the superposition model. The calculated values of the ZFS parameters are in good agreement with the experimental values obtained by EPR.
Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.
2013-01-01
Gas holdup time (tM) is a basic parameter in isothermal gas chromatography (GC). The determination and evaluation of tM and of the retention behaviors of n-alkanes under isothermal GC conditions have been studied extensively since the 1950s, but the problem remains unresolved. The difference equation (DE) model [J. Chromatogr. A 1260:215–223] describes the retention behaviors of n-alkanes excluding tM, while the quadratic equation (QE) model [J. Chromatogr. A 1260:224–231], which includes tM, is suitable for applications. In the present study, tM values calculated with the QE model, referred to as tMT, were evaluated and compared with three other typical nonlinear models. The QE model gives an accurate estimation of tM in isothermal GC. The tMT values are highly accurate, stable, and easy to calculate and use, and there is only one tMT value for each GC condition. Proper classification of tM values can clarify their disagreement and facilitate the standardization of GC retention data, for which tMT values are promising reference tM values. PMID:23726077
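For contrast with the QE model (whose equations are in the cited papers), the classical three-alkane estimate of tM assumes retention times of the form t_n = tM + a·b^n for equally spaced carbon numbers, from which tM follows in closed form. The retention times below are invented for illustration:

```python
# Classical three-point gas holdup time estimate from the retention times of three
# n-alkanes with equally spaced carbon numbers, assuming t_n = tM + a*b^n:
#   tM = (t1*t3 - t2^2) / (t1 + t3 - 2*t2)
# This is the textbook baseline, NOT the paper's QE model; times are made up.
t1, t2, t3 = 1.80, 2.40, 3.60   # retention times, min (illustrative)

tm = (t1 * t3 - t2 ** 2) / (t1 + t3 - 2.0 * t2)
```

With these numbers the adjusted retention times (t_n - tM) form an exact geometric series, which is the linearity assumption the nonlinear models in the study relax.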
DART model for thermal conductivity of U3Si2 aluminum dispersion fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J.; Snelgrove, J.L.; Hofman, G.L.
1995-09-01
This paper describes the primary physical models that form the basis of the DART model for calculating irradiation-induced changes in the thermal conductivity of aluminium dispersion fuel. DART calculations of fuel swelling, pore closure, and thermal conductivity are compared with measured values.
Model for economic evaluation of high energy gas fracturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engi, D.
1984-05-01
The HEGF/NPV model has been developed and adapted for interactive microcomputer calculation of the economic consequences of reservoir stimulation by high energy gas fracturing (HEGF) in naturally fractured formations. It combines three individual models: a model of the stimulated reservoir, a model of the gas flow in this reservoir, and a model of the discounted expected net cash flow (net present value, or NPV) associated with the enhanced gas production. Nominal values of the input parameters, based on observed data and reasonable estimates, are used to calculate the initial expected increase in the average daily rate of production resulting from the Meigs County HEGF stimulation experiment; agreement with the observed initial increase in rate is good. On the basis of this calculation, production from the Meigs County well is not expected to be profitable, although the HEGF/NPV model probably provides conservative results. Furthermore, analyses of the sensitivity of the expected NPV to variations in the values of certain reservoir parameters suggest that the use of HEGF stimulation in somewhat more favorable formations is potentially profitable. 6 references, 4 figures, 3 tables.
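The NPV core of such a model is a discounted sum of incremental cash flows minus the stimulation cost. A sketch with a simple exponential production decline; every figure below is hypothetical, not the Meigs County data:

```python
# Discounted net-present-value sketch in the spirit of an HEGF/NPV evaluation:
#   NPV = sum_t dCF_t / (1 + r)^t - stimulation cost
# All figures are hypothetical placeholders.
r     = 0.10            # annual discount rate (assumed)
cost  = 250_000.0       # stimulation cost, $ (assumed)
years = 10
q0    = 60_000.0        # first-year incremental revenue, $/yr (assumed)
decl  = 0.25            # annual decline fraction of incremental production (assumed)

npv = -cost
for t in range(1, years + 1):
    cash = q0 * (1.0 - decl) ** (t - 1)   # declining incremental cash flow
    npv += cash / (1.0 + r) ** t
```

A sensitivity analysis like the paper's amounts to re-running this sum over ranges of the reservoir-driven inputs (here q0 and decl) and checking where NPV turns positive.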
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. 
Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
Assessing the Value of Information of Geophysical Data For Groundwater Management
NASA Astrophysics Data System (ADS)
Trainor, W. J.; Caers, J. K.; Mukerji, T.; Auken, E.; Knight, R. J.
2008-12-01
Effective groundwater management requires hydrogeologic models informed by various data sources. The long-term goal of our research is to develop methodologies that quantify the value of information (VOI) of geophysical data for water managers. We present an initial sensitivity study assessing the reliability of airborne electromagnetic (EM) data for detecting channel orientation. The reliability results are used to calculate VOI for decisions about artificial recharge to mitigate seawater intrusion. To demonstrate how a hydrogeologic problem can be framed in decision-analysis terms, a hypothetical example is built in which water managers are considering artificial recharge to remediate seawater intrusion. Is the cost of recharge justified given the large uncertainty in subsurface heterogeneity, which may interfere with successful recharge? Thus the decision is whether recharge should be performed and, if so, where the recharge wells should be located. The decision is difficult because of the large uncertainty in the aquifer heterogeneity that influences flow. The expected value of all possible outcomes of the decision without gathering additional EM information is the prior value, VPRIOR. The value of information is the expected gain in value after including the relevant new information, i.e., the difference between the value after a free experiment (VFE) and the prior value: VOI = VFE - VPRIOR. Airborne EM has been used to detect confining clay layers and flow barriers. However, geophysical information rarely identifies the subsurface perfectly; many challenges affect data quality and the resulting models (interpretation uncertainty). To evaluate how well airborne EM data detect the orientation of subsurface channel systems, 125 alternative binary fluvial lithology models are generated, each belonging to one of three subsurface scenarios: northwest, southwest, or mixed channel orientation.
Using rock-property relations, the lithology models are converted into electrical resistivity models for EM forward modeling to generate time-domain EM data. Noise is added to the late times of the EM data to better represent typical airborne acquisition. Inversions are performed to obtain 125 inverted resistivity images. From the images, we calculate the angle of maximum spatial correlation at every cell and compare it with the truth, the original lithology model. These synthetic models serve as a proxy to estimate misclassification probabilities of channel orientation from actual EM data. The misclassification probabilities are then used in the VOI calculations. Results are presented demonstrating how the reliability measure and the pumping schedule can impact VOI. Lastly, reliability and VOI are calculated and compared for land-based EM data, which have different spatial sampling and resolution than airborne data.
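The decision-analysis framing above can be sketched numerically. The following is a minimal, hypothetical example of the VOI = VFE - VPRIOR calculation: all priors, reliabilities (the misclassification matrix), and payoff values are invented for illustration and are not the study's numbers.

```python
import numpy as np

priors = np.array([0.4, 0.4, 0.2])          # P(scenario): NW, SW, mixed (assumed)
# value[d, s] = outcome value of decision d under scenario s (illustrative $M)
# d=0: no recharge, d=1: recharge at a fixed location
value = np.array([[0.0, 0.0, 0.0],
                  [5.0, -2.0, 1.0]])

# Prior value: pick the decision that maximizes expected value with no new data.
v_prior = max(value @ priors)

# Reliability matrix R[i, s] = P(interpret scenario i | true scenario s),
# standing in for the rates estimated from the 125 synthetic EM inversions.
R = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.8, 0.2],
              [0.1, 0.1, 0.6]])

# Value with a free (costless) experiment: observe the interpretation,
# update the scenario probabilities by Bayes' rule, then decide.
v_free = 0.0
for i in range(3):
    p_msg = R[i] @ priors                    # P(interpretation i)
    posterior = R[i] * priors / p_msg        # Bayes update
    v_free += p_msg * max(value @ posterior)

voi = v_free - v_prior
print(round(v_prior, 3), round(v_free, 3), round(voi, 3))
```

With a perfectly reliable survey (R = identity), VFE rises and VOI grows; with an uninformative one (uniform R columns), VOI collapses to zero, which is the sensitivity the study explores.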
Corrigendum to “Thermophysical properties of U3Si2 to 1773 K”
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler
2016-12-01
An error was discovered by the authors in the calculation of thermal diffusivity in “Thermophysical properties of U3Si2 to 1773 K”. The error was caused by operator error in the entry of parameters used to fit the temperature-rise-versus-time model necessary to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%–28% larger, along with the corresponding calculated Lorenz values.
40 CFR 600.209-95 - Calculation of fuel economy values for labeling.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Multiply the city model type fuel economy calculated from the tests performed using gasoline or diesel test... (B) Multiply the city model type fuel economy calculated from the tests performed using alcohol or natural gas test fuel as determined in § 600.207 (b)(5)(ii) by 0.90, rounding the product to the nearest...
NASA Astrophysics Data System (ADS)
Atıcı, Ramazan; Sağır, Selçuk
2017-07-01
In the present work, the relationship with the quasi-biennial oscillation (QBO) of the difference (ΔfoE = foEmea - foEIRI) between critical frequency (foE) values of the ionospheric E-region measured at the Darwin and Cocos Island stations and those calculated by the IRI-2012 ionospheric model is statistically investigated. A multiple regression model is used as the statistical tool. Dummy variables ("DummyWest" and "DummyEast", representing westerly and easterly QBO values, respectively) are added to the model in order to see the effect of westerly and easterly QBO. The calculations show that about 50-52% of the change in ΔfoE can be explained by the QBO at both stations. The relationship between QBO and ΔfoE is negative at both stations. A change of 1 m/s across the whole QBO set leads to a decrease in ΔfoE of 0.008 MHz at the Cocos Island station and 0.017 MHz at the Darwin station. The direction of the QBO has an effect on ΔfoE at the Darwin station, but no effect at the Cocos Island station. The differences in foE are thought to arise because not all parameters affecting the critical frequency are included in the IRI model. Thus, the QBO, which is not included in the IRI model, can have an effect on foE, and more accurate results could be obtained by the IRI model if the QBO were included in its calculations.
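The regression setup described above (a continuous QBO wind term plus phase dummy variables) can be sketched as follows. The data are synthetic, and the coefficients used to generate them are chosen only to mimic the sign and order of the reported slope; nothing here reproduces the station data or fitted model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
qbo = rng.uniform(-30.0, 30.0, 200)            # QBO zonal wind, m/s (synthetic)
dummy_west = (qbo > 0).astype(float)           # westerly-phase indicator
dfoe = (0.3 - 0.017 * qbo + 0.1 * dummy_west
        + rng.normal(0.0, 0.05, 200))          # synthetic ΔfoE, MHz

# Design matrix: intercept, QBO wind, westerly dummy. The easterly phase
# is the baseline, which avoids collinearity with the intercept column.
X = np.column_stack([np.ones_like(qbo), qbo, dummy_west])
beta, *_ = np.linalg.lstsq(X, dfoe, rcond=None)
print(beta)  # [intercept, slope per m/s, westerly offset]; slope ≈ -0.017
```

Ordinary least squares recovers the negative slope, i.e., the decrease in ΔfoE per 1 m/s of QBO wind, directly as the coefficient on the wind column.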
NASA Technical Reports Server (NTRS)
Newman, P. A.; Schoeberl, M. R.; Plumb, R. A.
1986-01-01
Calculations of the two-dimensional, species-independent mixing coefficients for two-dimensional chemical models for the troposphere and stratosphere are performed using quasi-geostrophic potential vorticity fluxes and gradients from 4 years of National Meteorological Center data for the four seasons in both hemispheres. Results show that the horizontal mixing coefficient values for the winter lower stratosphere are broadly consistent with those currently employed in two-dimensional models, but the horizontal mixing coefficient values in the northern winter upper stratosphere are much larger than those usually used.
Development of a nuclear technique for monitoring water levels in pressurized vessels
NASA Technical Reports Server (NTRS)
Singh, J. J.; Davis, W. T.; Mall, G. H.
1983-01-01
A new technique for monitoring water levels in pressurized stainless steel cylinders was developed. It is based on differences in the attenuation coefficients of water and air for Cs-137 (662 keV) gamma rays. Experimentally observed gamma-ray counting rates with and without water in a model reservoir cylinder were compared with corresponding calculated values for two different gamma-ray detection threshold energies. The calculated values include the effects of multiple scattering and the attendant gamma-ray energy reductions. The agreement between the measured and calculated values is reasonably good. Computer programs for calculating angular and spectral distributions of scattered radiation in various media are included.
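The attenuation contrast the technique exploits can be sketched with the narrow-beam law I = I0·exp(-μx). The linear attenuation coefficients below are textbook-order values for 662 keV (water ≈ 0.0086 mm⁻¹; air roughly three orders of magnitude smaller); the cylinder geometry, scatter buildup, and detector thresholds treated in the paper are not modeled here.

```python
import math

MU_WATER = 8.6e-3   # 1/mm at ~662 keV (approximate literature value)
MU_AIR   = 1.0e-5   # 1/mm, approximate

def transmission(mu_per_mm: float, path_mm: float) -> float:
    """Fraction of primary photons surviving a straight path (no buildup)."""
    return math.exp(-mu_per_mm * path_mm)

path = 100.0  # mm of fluid along the beam
t_water = transmission(MU_WATER, path)
t_air = transmission(MU_AIR, path)
print(f"water: {t_water:.3f}  air: {t_air:.3f}")
```

A water-filled path attenuates the beam by more than half, while an air-filled path is essentially transparent, which is why the counting rate cleanly flags the water level.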
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, W. B.; Militzer, B.
In anticipation of new observational results for Jupiter's axial moment of inertia and gravitational zonal harmonic coefficients from the forthcoming Juno orbiter, we present a number of preliminary Jupiter interior models. We combine results from ab initio computer simulations of hydrogen–helium mixtures, including immiscibility calculations, with a new nonperturbative calculation of Jupiter's zonal harmonic coefficients, to derive a self-consistent model for the planet's external gravity and moment of inertia. We assume helium rain modified the interior temperature and composition profiles. Our calculation predicts zonal harmonic values to which measurements can be compared. Although some models fit the observed (pre-Juno) second- and fourth-order zonal harmonics to within their error bars, our preferred reference model predicts a fourth-order zonal harmonic whose absolute value lies above the pre-Juno error bars. This model has a dense core of about 12 Earth masses and a hydrogen–helium-rich envelope with approximately three times solar metallicity.
NASA Astrophysics Data System (ADS)
Trojková, Darina; Judas, Libor; Trojek, Tomáš
2014-11-01
Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and compared with the values obtained using previously published parameter values. χ² statistics were calculated as a check of the goodness of the optimization.
Martian Radiation Environment: Model Calculations and Recent Measurements with "MARIE"
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Cucinotta, F. A.; zeitlin, C. J.; Cleghorn, T. F.
2004-01-01
The Galactic Cosmic Ray spectra in Mars orbit were generated with the recently expanded HZETRN (High Z and Energy Transport) and QMSFRG (Quantum Multiple-Scattering theory of nuclear Fragmentation) model calculations. These model calculations are compared with the first eighteen months of measured data from the MARIE (Martian Radiation Environment Experiment) instrument onboard the 2001 Mars Odyssey spacecraft, currently in Martian orbit. The dose rates observed by the MARIE instrument are within 10% of the model-calculated predictions. Model calculations are compared with the MARIE measurements of dose and dose-equivalent values, along with the available particle flux distributions. The model-calculated particle flux includes the GCR elemental composition of atomic numbers Z = 1-28 and mass numbers A = 1-58. Particle flux calculations specific to the current MARIE mapping period are reviewed and presented.
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values.
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
Quantitative Study on Corrosion of Steel Strands Based on Self-Magnetic Flux Leakage.
Xia, Runchuan; Zhou, Jianting; Zhang, Hong; Liao, Leng; Zhao, Ruiqiang; Zhang, Zeyu
2018-05-02
This paper proposes a new computing method to quantitatively and non-destructively determine the corrosion of steel strands by analyzing the self-magnetic flux leakage (SMFL) signals from them. The magnetic dipole model and three growth models (logistic, exponential, and linear) were used to theoretically analyze the characteristic value of the SMFL. An experimental study of corrosion detection with a magnetic sensor was then carried out. The setup of the magnetic scanning device and the signal collection method are also introduced. The results show that the logistic growth model is the optimal model for calculating the magnetic field, with good fitting performance. Combined with the experimental data analysis, the amplitudes of the calculated values (BxL(x,z) curves) agree with the measured values in general. This method offers significant application prospects for evaluating the corrosion and residual bearing capacity of steel strands.
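The model-selection step above (picking the best of the logistic, exponential, and linear growth laws by fit quality) can be sketched as follows. The "measurements" are synthetic logistic-shaped data standing in for the SMFL characteristic values, so the comparison is illustrative only.

```python
import numpy as np

x = np.linspace(0, 10, 50)
K, r, x0 = 1.0, 1.2, 5.0
y = K / (1 + np.exp(-r * (x - x0)))          # synthetic S-shaped "data"

def sse(pred):
    """Sum of squared errors against the measured curve."""
    return float(np.sum((pred - y) ** 2))

candidates = {
    "logistic": K / (1 + np.exp(-r * (x - x0))),
    "exponential": y[0] * np.exp(np.log(y[-1] / y[0]) / 10.0 * x),
    "linear": np.polyval(np.polyfit(x, y, 1), x),
}
best = min(candidates, key=lambda name: sse(candidates[name]))
print(best)  # the logistic model reproduces the S-shaped data exactly here
```

In practice each candidate's parameters would be fitted to the measured characteristic values first; the ranking by residual error is the part the study's "good fitting effects" conclusion rests on.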
NASA Astrophysics Data System (ADS)
Perrot, Y.; Degoul, F.; Auzeloux, P.; Bonnet, M.; Cachin, F.; Chezal, J. M.; Donnarieix, D.; Labarre, P.; Moins, N.; Papon, J.; Rbah-Vidal, L.; Vidal, A.; Miot-Noirault, E.; Maigne, L.
2014-05-01
The GATE Monte Carlo simulation platform based on the Geant4 toolkit is under constant improvement for dosimetric calculations. In this study, we explore its use for the dosimetry of the preclinical targeted radiotherapy of melanoma using a new specific melanin-targeting radiotracer labeled with iodine 131. Calculated absorbed fractions and S values for spheres and murine models (digital and CT-scan-based mouse phantoms) are compared between GATE and EGSnrc Monte Carlo codes considering monoenergetic electrons and the detailed energy spectrum of iodine 131. The behavior of Geant4 standard and low energy models is also tested. Following the different authors’ guidelines concerning the parameterization of electron physics models, this study demonstrates an agreement of 1.2% and 1.5% with EGSnrc, respectively, for the calculation of S values for small spheres and mouse phantoms. S values calculated with GATE are then used to compute the dose distribution in organs of interest using the activity distribution in mouse phantoms. This study gives the dosimetric data required for the translation of the new treatment to the clinic.
Convenient models of the atmosphere: optics and solar radiation
NASA Astrophysics Data System (ADS)
Alexander, Ginsburg; Victor, Frolkis; Irina, Melnikova; Sergey, Novikov; Dmitriy, Samulenkov; Maxim, Sapunov
2017-11-01
Simple optical models of clear and cloudy atmospheres are proposed. Four versions of atmospheric aerosol content are considered: a complete lack of aerosols in the atmosphere, a low background concentration (500 cm-3), a high concentration (2000 cm-3) and a very high content of particles (5000 cm-3). In the cloud scenario, an external-mixture model is assumed. The values of optical thickness and single scattering albedo for 13 wavelengths are calculated in the short-wavelength range 0.28-0.90 µm, with the molecular absorption bands simulated with a triangle function. A comparison of the proposed optical parameters with results of various measurements and retrievals (lidar measurements, sampling, processed radiation measurements) is presented. For a cloudy atmosphere, single-layer and two-layer models are proposed. It is found that the cloud optical parameters assuming an "external mixture" agree with values retrieved from airborne observations. The hemispherical fluxes of reflected and transmitted solar radiation and the radiative divergence are calculated with the Delta-Eddington approach. The calculation is done for surface albedo values of 0, 0.5 and 0.9, and for the spectral values of a sandy surface. Four values of the solar zenith angle are taken: 0°, 30°, 40° and 60°. The obtained values are compared with data from airborne radiative observations. Estimates of the local instantaneous radiative forcing of atmospheric aerosols and clouds for the considered models are presented together with the heating rates.
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments for membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact that such variability in input values on the outcome from interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions the standard error in contact angle values must be⩽2.5°, while that for the zeta potential values must be⩽7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
Detailed Modeling and Analysis of the CPFM Dataset
NASA Technical Reports Server (NTRS)
Swartz, William H.; Lloyd, Steven A.; DeMajistre, Robert
2004-01-01
A quantitative understanding of photolysis rate coefficients (or "j-values") is essential to determining the photochemical reaction rates that define ozone loss and other crucial processes in the atmosphere. j-Values can be calculated with radiative transfer models, derived from actinic flux observations, or inferred from trace gas measurements. The principal objective of this study is to cross-validate j-values from the Composition and Photodissociative Flux Measurement (CPFM) instrument during the Photochemistry of Ozone Loss in the Arctic Region In Summer (POLARIS) and SAGE III Ozone Loss and Validation Experiment (SOLVE) field campaigns with model calculations and other measurements, and to use this detailed analysis to improve our ability to determine j-values. Another objective is to analyze the spectral flux from the CPFM (not just the j-values) and, using a multi-wavelength/multi-species spectral fitting technique, determine atmospheric composition.
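The connection between spectral actinic flux and a j-value can be sketched as a wavelength integral, j = ∫ σ(λ)·φ(λ)·F(λ) dλ, with σ the absorption cross section, φ the quantum yield, and F the actinic flux. All three spectra below are made-up smooth functions, not CPFM data or the properties of any real molecule.

```python
import numpy as np

wl = np.linspace(290.0, 420.0, 131)                   # wavelength, nm
sigma = 1e-19 * np.exp(-((wl - 320.0) / 30.0) ** 2)   # cm^2 (hypothetical band)
phi = np.clip((400.0 - wl) / 100.0, 0.0, 1.0)         # quantum yield (hypothetical)
flux = 1e14 * (wl / 300.0)         # actinic flux, photons cm^-2 s^-1 nm^-1

# Trapezoidal quadrature of the triple product over wavelength.
integrand = sigma * phi * flux
j = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wl)))
print(f"j ≈ {j:.2e} s^-1")
```

Deriving j from measured spectral flux, as done with the CPFM, amounts to substituting the observed F(λ) into this integral with laboratory σ and φ data.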
NASA Astrophysics Data System (ADS)
Zhou, Zhifang; Lin, Mu; Guo, Qiaona; Chen, Meng
2018-05-01
The hydrogeological characteristics of structural planes differ from those of the associated bedrock. The permeability, and therefore hydraulic conductivity (K), of a structural plane can be significantly different at different scales. The interlayer staggered zones in the Emeishan Basalt of the early Late Permian were studied; this formation is located in the Baihetan hydropower project area in the Jinsha River Basin, China. The seepage flow distributions of a solid model and two generalized models (A and B) were computed using COMSOL. The K values of the interlayer staggered zones for all three models were calculated by both simulation and analytical methods. The results show that the calculated K values of the generalized models can reflect the variation trend of permeability in each section of the solid model, and that the approximate analytical calculation of K can be used in the generalized models in place of simulation. Further studies are needed to investigate permeability variation in the interlayer staggered zones at different scales, considering the scaling variation in each section of an interlayer staggered zone. The permeability of each section of an interlayer staggered zone shows a certain degree of dispersivity at small scales; however, the permeability values tend to converge to a similar value as the scale of each section increases. The regularity of each section of the interlayer staggered zones at different scales can provide a scientific basis for the reasonable selection of engineering options.
Wartime Medical Requirements Models: A Comparison of MPM, MEPES, and LPX-MED.
1996-10-01
theater-level models: • Medical Planning Module (MPM) • Medical Planning and Execution System (MEPES) • External Logistics Processor-Medical Module ... current plan is to modify LPX-MED to include a requirements calculator, there is no plan to link the requirements calculation module and the ... simulation module. We believe the simulation module (i.e., today's LPX-MED) needs reasonable starting values, which a calculator model can provide
Code of Federal Regulations, 2012 CFR
2012-07-01
... city and highway fuel economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) If 5-cycle testing was performed on the alcohol or natural gas test fuel, calculate the city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural...
Code of Federal Regulations, 2014 CFR
2014-07-01
... city and highway fuel economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) If 5-cycle testing was performed on the alcohol or natural gas test fuel, calculate the city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural...
Code of Federal Regulations, 2013 CFR
2013-07-01
... city and highway fuel economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) If 5-cycle testing was performed on the alcohol or natural gas test fuel, calculate the city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural...
Analytical effective tensor for flow-through composites
Sviercoski, Rosangela De Fatima [Los Alamos, NM
2012-06-19
A machine, method and computer-usable medium for modeling the average flow of a substance through a composite material. The modeling includes an analytical calculation of an effective tensor K.sup.a suitable for use with a variety of media. The analytical calculation corresponds to an approximation to the tensor K, and proceeds by first computing the diagonal values and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle in a defined Cartesian system, then using this angle in a rotation formula to compute the off-diagonal values and determine their sign.
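The final step described above, recovering off-diagonal terms from the diagonal values and an orientation angle, can be sketched as the standard 2-D tensor rotation K' = R·K·Rᵀ. The principal values and angle below are illustrative, not taken from the patent.

```python
import numpy as np

def rotate_tensor(k1: float, k2: float, theta: float) -> np.ndarray:
    """Rotate the principal-frame tensor diag(k1, k2) by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([k1, k2]) @ R.T

# Hypothetical principal conductivities and heterogeneity angle of 30 degrees.
K = rotate_tensor(2.0, 0.5, np.pi / 6)
print(np.round(K, 4))
```

The off-diagonal entry works out to (k1 - k2)·cosθ·sinθ, so its sign follows directly from the orientation angle, which matches the "determine their sign" step in the abstract.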
Thermal history of Bakken shale in Williston basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosnold, W.D. Jr.; Lefever, R.D.; Crashell, J.J.
1989-12-01
Stratigraphic and thermal conductivity data were combined to analyze the thermostratigraphy of the Williston basin. The present thermostratigraphy is characterized by geothermal gradients of the order of 60 mK/m in the Cenozoic and Mesozoic units, and 30 mK/m in the Paleozoic units. The differences in geothermal gradients are due to differences in thermal conductivities between the shale-dominated Mesozoic and Cenozoic units and the carbonate-dominated Paleozoic units. Subsidence and compaction rates were calculated for the basin and were used to determine models for time vs. depth and time vs. thermal conductivity relationships for the basin. The time/depth and time/conductivity relationships include factors accounting for thermal conductivity changes due to compaction, cementation, and temperature. The thermal history of the Bakken shale, a primary oil source rock in the Williston basin, was determined using four different models, and values for Lopatin's time-temperature index (TTI) were calculated for each model. The first model uses a geothermal gradient calculated from bottom-hole temperature data, the second uses present-day thermostratigraphy, the third uses the thermostratigraphic relationship determined in this analysis, and the fourth modifies the third by including assumed variations in continental heat flow. The thermal histories and the calculated TTI values differ markedly among the models, with TTI values differing by a factor of about two between some models.
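Lopatin's TTI, used above to compare the four thermal-history models, weights the time spent in each temperature interval by a factor that doubles every 10 °C. The sketch below is a continuous variant of that rule (n = 0 centered on the 100-110 °C interval), applied to an invented burial history; none of the numbers are from the study.

```python
def tti(history):
    """Lopatin-style time-temperature index.

    history: list of (temperature_C, duration_Myr) segments; each segment
    contributes duration * 2**n, where n indexes 10-degree intervals
    relative to the 100-110 C window.
    """
    total = 0.0
    for temp_c, dt_myr in history:
        n = (temp_c - 105.0) / 10.0
        total += dt_myr * 2.0 ** n
    return total

# Hypothetical thermal history of a source rock: (temperature C, Myr spent).
history = [(60.0, 100.0), (90.0, 50.0), (110.0, 30.0), (130.0, 20.0)]
print(round(tti(history), 1))
```

Because of the doubling rule, a short stay at high temperature outweighs a long residence at shallow depth, which is why the models' differing deep-burial temperatures produce TTI values differing by a factor of about two.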
An extension of ASM2d including pH calculation.
Serralta, J; Ferrer, J; Borrás, L; Seco, A
2004-11-01
This paper presents an extension of the Activated Sludge Model No. 2d (ASM2d) that includes a chemical model able to calculate the pH value in biological processes. The chemical model adds to ASM2d the complete set of chemical species affecting the pH value while describing non-equilibrium biochemical processes. It considers a system formed by one aqueous phase, in which the biochemical processes take place, and one gaseous phase, and is based on the assumptions of instantaneous chemical equilibrium in the liquid phase and kinetically governed mass transport between the liquid and gas phases. The ASM2d extension comprises the addition of every component affecting the pH value and an ion balance for the calculation of the pH value and the dissociation species. The significant pH variations observed in a sequencing batch reactor operated for enhanced biological phosphorus removal were used to verify the capability of the extended model to predict the dynamics of pH jointly with the concentrations of acetic acid and phosphate. A pH inhibition function for polyphosphate-accumulating bacteria has also been included in the model to simulate the observed behaviour. Experimental data obtained in four different experiments (with different sludge retention times and influent phosphorus concentrations) were accurately reproduced.
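The ion-balance idea at the core of the extension can be sketched by solving a charge balance for [H+] numerically. The example below handles just one weak acid (acetate, Ka = 10^-4.76) plus water autoionization and finds the root by bisection in log space; the full model in the paper tracks many more species and couples them to the biokinetics.

```python
import math

KA = 10 ** -4.76   # acetic acid dissociation constant
KW = 1e-14         # water ion product
C_T = 0.01         # total acetate, mol/L (hypothetical)

def charge_imbalance(h: float) -> float:
    """Cations minus anions at a trial [H+]; zero at the true pH."""
    oh = KW / h
    acetate = C_T * KA / (KA + h)   # [Ac-] from the dissociation equilibrium
    return h - oh - acetate

# Bisection on [H+] between 1e-14 and 1 mol/L, bisecting in log space.
lo, hi = 1e-14, 1.0
for _ in range(100):
    mid = math.sqrt(lo * hi)
    if charge_imbalance(mid) > 0:
        hi = mid
    else:
        lo = mid

ph = -math.log10(math.sqrt(lo * hi))
print(round(ph, 2))
```

The extended ASM2d does the analogous thing at every time step: given the current totals of all acid-base systems, it solves the ion balance for [H+], so pH emerges from the state of the biochemical model rather than being imposed.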
Crack propagation modelling for high strength steel welded structural details
NASA Astrophysics Data System (ADS)
Mecséri, B. J.; Kövesdi, B.
2017-05-01
Nowadays the barrier to applying HSS (High Strength Steel) material in bridge structures is its low fatigue strength relative to yield strength. This paper focuses on the fatigue behaviour of a structural detail (a gusset plate connection) made from NSS and HSS material, which is frequently used in bridges in Hungary. An experimental research program was carried out at the Budapest University of Technology and Economics to investigate the fatigue lifetime of this structural detail type using identical test specimens made from S235 and S420 steel grades. The main aim of the experimental research program is to study the differences in crack propagation and fatigue lifetime between normal and high strength steel structures. Based on the observed fatigue crack pattern, the main direction and velocity of the crack propagation are determined. In parallel with the tests, a finite element model (FEM) capable of handling crack propagation is developed. Using the strain data measured in the tests and the values calculated from the FE model, approximations of the Paris-law material parameters are calculated step by step, and the calculated values are evaluated. The same material properties are determined for both NSS and HSS specimens, and the differences are discussed. In the current paper, the results of the experiments, the calculation method for the material parameters, and the calculated values are presented.
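The Paris law whose parameters the study fits relates crack growth rate to the stress-intensity range: da/dN = C·(ΔK)^m with ΔK = Y·Δσ·√(πa). The sketch below integrates it with a coarse explicit step; C, m, the geometry factor Y, the stress range, and the crack lengths are all illustrative values, not the fitted parameters for the S235/S420 specimens.

```python
import math

C, m = 3e-13, 3.0        # Paris constants (mm/cycle, MPa*sqrt(mm) units; assumed)
Y = 1.12                 # geometry factor, taken as constant for simplicity
dsigma = 100.0           # stress range, MPa (assumed)

a, a_final = 1.0, 25.0   # crack length grows from 1 mm to 25 mm
n = 0
while a < a_final:
    dk = Y * dsigma * math.sqrt(math.pi * a)   # stress-intensity range
    a += C * dk ** m * 1000.0                  # explicit step of 1000 cycles
    n += 1000

print(f"~{n} cycles from 1 mm to 25 mm")
```

Fitting C and m in the study runs this logic in reverse: measured crack lengths versus cycle counts give da/dN points, and a fit of log(da/dN) against log(ΔK) yields m as the slope and C as the intercept.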
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available in which formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to the physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. The dermal penetration rate was estimated as the ratio of the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals. Critical fluxes defined in the literature were recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis to the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of the calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone. Both lipophilic and polar pathways of permeation exist across the stratum corneum.
It is feasible to predict the skin notation on the basis of the LFER and other published models; of the 112 chemicals, 94 (84%) should have the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified as having significant absorption and 65 (58%) as having the potential for dermal toxicity. We found major differences between alternative published analytical models in their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
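The LFER pipeline described above, an Abraham solvation equation for log Kp followed by flux = Kp·Csat, can be sketched as follows. The equation coefficients and the solute descriptors below are placeholders chosen only to have plausible signs and magnitudes; they are not the fitted skin-permeation coefficients used in the study.

```python
# Illustrative Abraham solvation-equation coefficients (NOT the published fit):
# log Kp = c + e*E + s*S + a*A + b*B + v*V
COEF = {"c": -5.4, "e": -0.1, "s": -0.5, "a": -0.5, "b": -3.0, "v": 2.3}

def log_kp(E: float, S: float, A: float, B: float, V: float) -> float:
    """Log permeability coefficient (cm/s) from the five Abraham descriptors."""
    return (COEF["c"] + COEF["e"] * E + COEF["s"] * S
            + COEF["a"] * A + COEF["b"] * B + COEF["v"] * V)

def flux(kp_cm_per_s: float, c_sat_mg_per_cm3: float) -> float:
    """Steady-state dermal flux, mg cm^-2 s^-1, as Kp times saturation concentration."""
    return kp_cm_per_s * c_sat_mg_per_cm3

# Hypothetical descriptors for a small polar solute:
lk = log_kp(E=0.8, S=0.9, A=0.3, B=0.6, V=0.9)
j_flux = flux(10 ** lk, c_sat_mg_per_cm3=50.0)
print(lk, j_flux)
```

A skin notation would then be assigned by comparing the computed flux against a critical flux from the literature, which is the final step the abstract describes.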
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
Effects of damping on mode shapes, volume 2
NASA Technical Reports Server (NTRS)
Gates, R. M.; Merchant, D. H.; Arnquist, J. L.
1977-01-01
Displacement, velocity, and acceleration admittances were calculated for a realistic NASTRAN structural model of space shuttle for three conditions: liftoff, maximum dynamic pressure and end of solid rocket booster burn. The realistic model of the orbiter, external tank, and solid rocket motors included the representation of structural joint transmissibilities by finite stiffness and damping elements. Data values for the finite damping elements were assigned to duplicate overall low-frequency modal damping values taken from tests of similar vehicles. For comparison with the calculated admittances, position and rate gains were computed for a conventional shuttle model for the liftoff condition. Dynamic characteristics and admittances for the space shuttle model are presented.
NASA Astrophysics Data System (ADS)
Tarao, Hiroo; Hayashi, Noriyuki; Hamamoto, Isao; Isaka, Katsuo
A numerical method, newly developed here, is used to calculate internal body resistances in a voxelized biological model. Using this method, the internal resistances of an anatomical human model were calculated for two current paths: 1400 Ω for hand to foot, and 1500 Ω for hand to hand. They are compared with experimental values (500-600 Ω for hand to foot and 500-700 Ω for hand to hand), leading to the conclusion that the numerical values of the internal resistance are two to three times higher than the experimental ones. While there is a discrepancy in absolute values between the calculated and measured results, the profiles of their relative values along the current paths show good agreement. This implies that factors such as the anisotropy of muscle conductivity and the difference between in vivo and in vitro conductivities need to be considered. In fact, when those factors are taken into account, the calculated results approach the experimental ones.
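The basic quantity being accumulated along a current path can be sketched as a 1-D series sum over voxels, R = Σ Δx/(σᵢ·A). The conductivities, voxel size, and cross-sectional area below are illustrative stand-ins, not the anatomical model's values or the paper's method.

```python
def path_resistance(sigmas, dx_m: float, area_m2: float) -> float:
    """Series resistance (ohms) of a voxel chain with conductivities sigmas (S/m)."""
    return sum(dx_m / (s * area_m2) for s in sigmas)

# A 0.5 m path of 2 mm voxels alternating between muscle-like (0.35 S/m)
# and fat-like (0.04 S/m) conductivities, through an assumed 50 cm^2 section.
sigmas = [0.35 if i % 2 == 0 else 0.04 for i in range(250)]
r = path_resistance(sigmas, dx_m=0.002, area_m2=0.005)
print(round(r), "ohms")
```

Even this toy chain shows why the low-conductivity segments dominate the total, and why assumed tissue conductivities (in vivo versus in vitro) move the calculated resistance so strongly.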
Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts
Wang, Xianlong; Wang, Chengfei; Zhao, Hui
2012-01-01
Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, single-crystal X-ray diffraction determined structures were used to build the isolated molecule models for calculating the chemical shifts. The results were compared with those obtained using the calculated geometries at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ by tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The results indicate that the popular theoretical models are adequate for calculating the chemical shifts, while obtaining an accurate molecular geometry is more critical. PMID:23203134
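The two statistics the evaluation rests on, the root-mean-square difference and the linear correlation between calculated and experimental shifts, can be sketched directly. The shift values below are made up for illustration; only the formulas correspond to the abstract.

```python
# Hedged sketch of the evaluation statistics: RMS deviation and Pearson
# correlation between calculated and experimental 27Al chemical shifts.
# The shift lists are invented placeholders, not data from the study.
import math

def rms_diff(calc, expt):
    return math.sqrt(sum((c - e) ** 2 for c, e in zip(calc, expt)) / len(calc))

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

calc = [3.1, 8.2, 62.5, 71.0, 0.4]   # hypothetical computed shifts, ppm
expt = [0.0, 4.9, 59.8, 66.2, -1.1]  # hypothetical experimental shifts, ppm

print(f"RMS difference: {rms_diff(calc, expt):.1f} ppm")
print(f"Pearson r: {pearson_r(calc, expt):.3f}")
```

A high Pearson r with a nonzero RMS difference is exactly the situation the abstract describes: strong linear correlation despite systematic offsets of several ppm.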
Virial Coefficients for the Liquid Argon
NASA Astrophysics Data System (ADS)
Korth, Micheal; Kim, Saesun
2014-03-01
We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that leads to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model of Aziz. From values of the pair correlation function at various densities, we were able to find virial coefficients of liquid argon. The low-order coefficients are in good agreement with theoretical hard-sphere coefficients, but appropriate data for argon against which these results might be compared are difficult to find.
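For context, the lowest-order virial coefficient can be obtained by a standard textbook route that does not need the pair correlation function: B₂(T) = -2π ∫ (e^(-u(r)/kT) - 1) r² dr. The sketch below evaluates this for a Lennard-Jones 12-6 potential in reduced units; it is a generic illustration, not the Shinomoto/Aziz procedure used in the work above.

```python
# Hedged sketch: second virial coefficient from a pair potential,
# B2(T) = -2*pi * integral of (exp(-u(r)/kT) - 1) * r^2 dr,
# here with a Lennard-Jones 12-6 potential in reduced units
# (epsilon = sigma = kB = 1), integrated by a simple Riemann sum.
import math

def u_lj(r):
    return 4.0 * (r**-12 - r**-6)

def b2(t_red, r_max=20.0, n=20000):
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        total += (math.exp(-u_lj(r) / t_red) - 1.0) * r * r * dr
    return -2.0 * math.pi * total

print(f"B2*(T*=1)  = {b2(1.0):.2f}")   # negative: attraction dominates
print(f"B2*(T*=10) = {b2(10.0):.2f}")  # positive: repulsion dominates
```

The sign change between low and high reduced temperature (the Boyle point lies near T* ≈ 3.4 for Lennard-Jones) is the qualitative behavior any virial calculation for argon should reproduce.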
Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.
Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos
2014-01-01
Measurement of the air exchange rate provides critical information in energy and indoor-air quality studies. Continuous measurement of ventilation rates is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, in which the air exchange rate is calculated from routine indoor and outdoor measurements of a common pollutant such as SO2, and the uncertainties induced in the calculations are analytically determined. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h(-1). The proposed method was further evaluated by applying a mass balance numerical model to calculate the indoor aerosol number concentrations, using the previously calculated ventilation rate, the measured outdoor number concentrations, and the particle loss rates as input values. The model results for the indoor concentrations were found to compare well with the experimentally measured values.
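The underlying tracer balance is simple: for a nonreactive pollutant, dCin/dt = λ(Cout - Cin), so λ can be estimated from successive indoor readings and the outdoor level. The sketch below is a minimal finite-difference version of that idea under the assumption of a constant outdoor concentration and no indoor sources or sinks; the readings are invented.

```python
# Hedged sketch of the tracer mass balance: dCin/dt = lam * (Cout - Cin).
# lam (the air exchange rate, 1/h) is estimated from two indoor readings,
# assuming Cout is constant over the interval and the tracer is nonreactive.

def air_exchange_rate(c_in_t0, c_in_t1, c_out, dt_h):
    """Finite-difference estimate of lam, evaluated at the midpoint Cin."""
    c_in_mid = 0.5 * (c_in_t0 + c_in_t1)
    return (c_in_t1 - c_in_t0) / dt_h / (c_out - c_in_mid)

# Hypothetical SO2 readings (ug/m3), 1 h apart, outdoor level 20 ug/m3.
lam = air_exchange_rate(5.0, 10.0, 20.0, 1.0)
print(f"estimated air exchange rate: {lam:.2f} per hour")
```

The same balance, extended with a first-order loss term for deposition, is what yields the particle loss rate mentioned in the abstract.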
Fine-particle pH for Beijing winter haze as inferred from different thermodynamic equilibrium models
NASA Astrophysics Data System (ADS)
Song, Shaojie; Gao, Meng; Xu, Weiqi; Shao, Jingyuan; Shi, Guoliang; Wang, Shuxiao; Wang, Yuxuan; Sun, Yele; McElroy, Michael B.
2018-05-01
pH is an important property of aerosol particles but is difficult to measure directly. Several studies have estimated the pH values for fine particles in northern China winter haze using thermodynamic models (i.e., E-AIM and ISORROPIA) and ambient measurements. The reported pH values differ widely, ranging from close to 0 (highly acidic) to as high as 7 (neutral). In order to understand the reason for this discrepancy, we calculated pH values using these models with different assumptions with regard to model inputs and particle phase states. We find that the large discrepancy is due primarily to differences in the model assumptions adopted in previous studies. Calculations using only aerosol-phase composition as inputs (i.e., reverse mode) are sensitive to the measurement errors of ionic species, and inferred pH values exhibit a bimodal distribution, with peaks between -2 and 2 and between 7 and 10, depending on whether anions or cations are in excess. Calculations using total (gas plus aerosol phase) measurements as inputs (i.e., forward mode) are affected much less by these measurement errors. In future studies, the reverse mode should be avoided whereas the forward mode should be used. Forward-mode calculations in this and previous studies collectively indicate a moderately acidic condition (pH from about 4 to about 5) for fine particles in northern China winter haze, indicating further that ammonia plays an important role in determining this property. The assumed particle phase state, either stable (solid plus liquid) or metastable (only liquid), does not significantly impact pH predictions. The unrealistic pH values of about 7 in a few previous studies (using the standard ISORROPIA model and stable state assumption) resulted from coding errors in the model, which have been identified and fixed in this study.
A table of semiempirical gf values. Part 2. Wavelengths: 272.3395 nm to 599.3892 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurucz, R.L.; Peytremann, E.
1975-02-14
The gf values for 265,587 atomic lines selected from the line data used to calculate line-blanketed model atmospheres are tabulated. These data are especially useful for line identification and spectral synthesis in solar and stellar spectra. The gf values are calculated semiempirically by using scaled Thomas-Fermi-Dirac radial wave functions and eigenvectors found through least-squares fits to observed energy levels. Included in the calculation are the first five or six stages of ionization for sequences up through nickel. Published gf values are included for elements heavier than nickel. The tabulation is restricted to lines with wavelengths less than 10 micrometers. (auth)
40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample calculation of fuel economy values for labeling. 600.211-08 Section 600.211-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model...
40 CFR 600.209-95 - Calculation of fuel economy values for labeling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for labeling. 600.209-95 Section 600.209-95 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year...
40 CFR 600.209-85 - Calculation of fuel economy values for labeling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for labeling. 600.209-85 Section 600.209-85 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year...
40 CFR 600.210-08 - Calculation of fuel economy values for labeling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of fuel economy values for labeling. 600.210-08 Section 600.210-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year...
NASA Astrophysics Data System (ADS)
Madhavi Latha, T.; Peddi Naidu, P.; Madhusudhana Rao, D. N.; Indira Devi, M.
2012-11-01
Electron density profiles from the International Reference Ionosphere (IRI) 2001 and 2007 models have been utilized in evaluating the D-region conductivity parameter in Earth-ionosphere waveguide calculations. The day-to-night shift in the reflection height of very low frequency (VLF) waves has been calculated using D-region conductivities derived from the IRI models, and the results are compared with those obtained from phase variation measurements of VLF transmissions from Rugby (England) made at Visakhapatnam (India). The values derived from the models are found to be much lower than those obtained from the experimental measurements. The values derived from the IRI models are in good agreement with those obtained from the exponential conductivity model.
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.
Hedin, Emma; Bäck, Anna
2013-09-06
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
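The LKB model referenced above has a compact closed form: NTCP = Φ(t) with t = (gEUD - TD50)/(m·TD50), where gEUD = (Σ vᵢ Dᵢ^(1/n))^n is computed from a differential dose-volume histogram. The sketch below implements that formula; the DVH and the parameter values (TD50, m, n) are placeholders, not the published parameter sets the study adjusts.

```python
# Hedged sketch of the Lyman-Kutcher-Burman (LKB) NTCP model:
# gEUD = (sum v_i * D_i**(1/n))**n from a differential DVH,
# t = (gEUD - TD50) / (m * TD50), NTCP = standard normal CDF of t.
# Parameter values are illustrative placeholders.
import math

def geud(dvh, n):
    """dvh: list of (dose_Gy, fractional_volume); volumes sum to 1."""
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def ntcp_lkb(dvh, td50, m, n):
    t = (geud(dvh, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

dvh = [(5.0, 0.4), (15.0, 0.4), (40.0, 0.2)]    # toy lung DVH
print(f"NTCP = {ntcp_lkb(dvh, td50=30.0, m=0.35, n=1.0):.3f}")
```

With n = 1, gEUD reduces to the mean dose, which makes clear why a small algorithm-dependent shift in the DVH can move NTCP by several percentage points on the steep part of the dose-response curve.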
Sławuta, P; Glińska-Suchocka, K; Cekiera, A
2015-01-01
Apart from the Henderson-Hasselbalch (HH) equation, the acid-base balance (ABB) of an organism is also described by the Stewart model, which assumes that proper insight into the ABB of the organism is given by an analysis of: pCO2, the difference between the concentrations of strong cations and anions in the blood serum (SID), and the total concentration of nonvolatile weak acids (Acid total). The notion of the anion gap (AG), or the apparent deficit of ions, is closely related to the acid-base balance described by the HH equation. Its value mainly comprises negatively charged proteins, phosphates, and sulphates in blood. In human medicine, a modified anion gap is used which, by including the concentration of the blood protein buffer, is in fact a combination of the apparent ion deficits derived from the classic model and from the Stewart model. In brachycephalic dogs, respiratory acidosis often occurs, caused by an overgrowth of the soft palate that obstructs free air flow and raises pCO2, the partial pressure of carbon dioxide. The aim of the present paper was to answer the question whether, in the case of systemic respiratory acidosis, changes in the concentrations of buffering ions can also be seen. The study was carried out on 60 adult dogs of boxer breed in which, on the basis of endoscopic examination, a severe overgrowth of the soft palate requiring surgical correction was found. For each dog, the value of the anion gap before and after the palate correction procedure was calculated as AG = ([Na+] + [K+]) - ([Cl-] + [HCO3-]), all concentrations in mmol/l, and the value of the modified anion gap as AGm = calculated AG + 2.5 x (albumins(r) - albumins(d)).
The AG values calculated for the dogs before and after the procedure fell within the reference limits and did not differ significantly, whereas the AGm values calculated before and after the procedure differed significantly. 1) On the basis of the AGm values obtained, it should be stated that despite the finding of respiratory acidosis in the examined dogs, changes in ion concentrations can also be seen which, according to the Stewart theory, compensate metabolic ABB disorders. 2) Although all the values used to calculate AGm were within the reference limits, the AGm values in dogs before and after the soft palate correction procedure differed significantly, which demonstrates the high sensitivity and usefulness of the AGm calculation as a diagnostic method.
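The two formulas used in the study translate directly into code. The sketch below implements them as given; the electrolyte and albumin values are illustrative, not from the study's dogs.

```python
# Hedged sketch of the two calculations used in the study:
# classic anion gap AG = (Na+ + K+) - (Cl- + HCO3-), electrolytes in mmol/l,
# and modified gap AGm = AG + 2.5 * (reference albumin - determined albumin).
# Sample values are invented for illustration.

def anion_gap(na, k, cl, hco3):
    return (na + k) - (cl + hco3)

def modified_anion_gap(ag, albumin_ref, albumin_meas):
    return ag + 2.5 * (albumin_ref - albumin_meas)

ag = anion_gap(na=145.0, k=4.0, cl=110.0, hco3=22.0)
agm = modified_anion_gap(ag, albumin_ref=3.5, albumin_meas=2.7)
print(f"AG  = {ag:.1f} mmol/l")
print(f"AGm = {agm:.1f} mmol/l")
```

The example shows how a modest albumin deficit shifts AGm upward even when the uncorrected AG sits inside the reference range, which is the sensitivity the authors highlight.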
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values-- that is value of the physical and chemical constants that govern reactivity. Although empirical structure activity relationships have been developed t...
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
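The physical relation behind such a transmissivity calculation can be sketched with the standard hydrogeologic identities: hydraulic conductivity K = kρg/μ and transmissivity T = Kb for thickness b. This is a generic illustration of those relations, not the documented program; the permeability and thickness are invented, and the viscosity is a rough value for water near 20 °C.

```python
# Hedged sketch: transmissivity from permeability, thickness, and fluid
# properties, via K = k * rho * g / mu and T = K * b. Values illustrative.

RHO = 998.0      # water density, kg/m^3
G = 9.81         # gravity, m/s^2
MU_20C = 1.0e-3  # dynamic viscosity of water near 20 C, Pa*s

def transmissivity(permeability_m2, thickness_m, mu=MU_20C):
    k_hydraulic = permeability_m2 * RHO * G / mu   # hydraulic conductivity, m/s
    return k_hydraulic * thickness_m               # transmissivity, m^2/s

t = transmissivity(permeability_m2=1.0e-12, thickness_m=50.0)
print(f"transmissivity: {t:.2e} m^2/s")
```

Because μ depends strongly on temperature and dissolved solids, passing a site-specific viscosity into `mu` is the natural hook for the temperature dependence the abstract mentions.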
Bozkurt, Hayriye; D'Souza, Doris H; Davidson, P Michael
2014-05-01
Hepatitis A virus (HAV) is a food-borne enteric virus responsible for outbreaks of hepatitis associated with shellfish consumption. The objectives of this study were to determine the thermal inactivation behavior of HAV in blue mussels, to compare the first-order and Weibull models to describe the data, to calculate Arrhenius activation energy for each model, and to evaluate model efficiency by using selected statistical criteria. The times required to reduce the population by 1 log cycle (D-values) calculated from the first-order model (50 to 72°C) ranged from 1.07 to 54.17 min for HAV. Using the Weibull model, the times required to destroy 1 log unit (tD = 1) of HAV at the same temperatures were 1.57 to 37.91 min. At 72°C, the treatment times required to achieve a 6-log reduction were 7.49 min for the first-order model and 8.47 min for the Weibull model. The z-values (changes in temperature required for a 90% change in the log D-values) calculated for HAV were 15.88 ± 3.97°C (R(2), 0.94) with the Weibull model and 12.97 ± 0.59°C (R(2), 0.93) with the first-order model. The calculated activation energies for the first-order model and the Weibull model were 165 and 153 kJ/mol, respectively. The results revealed that the Weibull model was more appropriate for representing the thermal inactivation behavior of HAV in blue mussels. Correct understanding of the thermal inactivation behavior of HAV could allow precise determination of the thermal process conditions to prevent food-borne viral outbreaks associated with the consumption of contaminated mussels.
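The two survival models being compared have simple closed forms for the time to a given log reduction: log10(N/N0) = -t/D for first-order kinetics, and log10(N/N0) = -(t/δ)^p in the common Mafart parameterization of the Weibull model. The sketch below uses invented parameters, not the fitted HAV values reported above.

```python
# Hedged sketch of the two inactivation models compared in the study:
# first-order: log10(N/N0) = -t/D        -> t = n * D for an n-log reduction
# Weibull:     log10(N/N0) = -(t/delta)**p -> t = delta * n**(1/p)
# Parameter values are illustrative.

def time_first_order(log_reduction, d_value):
    return log_reduction * d_value

def time_weibull(log_reduction, delta, p):
    return delta * log_reduction ** (1.0 / p)

# Toy parameters at one temperature:
print(f"first-order, 6-log: {time_first_order(6, 1.25):.2f} min")
print(f"Weibull, 6-log:     {time_weibull(6, 1.1, 1.4):.2f} min")
```

Note how the shape parameter p makes the Weibull prediction diverge from the first-order one as the target log reduction grows, which is why the two models can agree on 1-log times yet disagree on the 6-log process time.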
Amoush, Ahmad; Wilkinson, Douglas A.
2015-01-01
This work is a comparative study of the dosimetry calculated by Plaque Simulator, a treatment planning system for eye plaque brachytherapy, to the dosimetry calculated using Monte Carlo simulation for an Eye Physics model EP917 eye plaque. Monte Carlo (MC) simulation using MCNPX 2.7 was used to calculate the central axis dose in water for an EP917 eye plaque fully loaded with 17 IsoAid Advantage 125I seeds. In addition, the dosimetry parameters Λ, gL(r), and F(r,θ) were calculated for the IsoAid Advantage model IAI‐125 125I seed and benchmarked against published data. Bebig Plaque Simulator (PS) v5.74 was used to calculate the central axis dose based on the AAPM Updated Task Group 43 (TG‐43U1) dose formalism. The calculated central axis dose from MC and PS was then compared. When the MC dosimetry parameters for the IsoAid Advantage 125I seed were compared with the consensus values, Λ agreed with the consensus value to within 2.3%. However, much larger differences were found between MC calculated gL(r) and F(r,θ) and the consensus values. The differences between MC‐calculated dosimetry parameters are much smaller when compared with recently published data. The differences between the calculated central axis absolute dose from MC and PS ranged from 5% to 10% for distances between 1 and 12 mm from the outer scleral surface. When the dosimetry parameters for the 125I seed from this study were used in PS, the calculated absolute central axis dose differences were reduced by 2.3% from depths of 4 to 12 mm from the outer scleral surface. We conclude that PS adequately models the central dose profile of this plaque using its defaults for the IsoAid model IAI‐125 at distances of 1 to 7 mm from the outer scleral surface. However, improved dose accuracy can be obtained by using updated dosimetry parameters for the IsoAid model IAI‐125 125I seed. PACS number: 87.55.K‐ PMID:26699577
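The TG-43U1 1D (point-source) formalism underlying both calculations is Ḋ(r) = S_K · Λ · (r₀/r)² · g_L(r) · φ_an(r), with r₀ = 1 cm. The sketch below wires that formula together; the toy radial and anisotropy functions and all numerical values are placeholders, not consensus data for the IsoAid IAI-125 seed.

```python
# Hedged sketch of the TG-43U1 1D point-source dose-rate formalism:
# Ddot(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an(r), r0 = 1 cm.
# The stand-in functions g and phi and all numbers are illustrative.

R0 = 1.0  # reference distance, cm

def dose_rate(r_cm, air_kerma_strength, dose_rate_constant,
              radial_fn, anisotropy_fn):
    geometry = (R0 / r_cm) ** 2          # point-source geometry function ratio
    return (air_kerma_strength * dose_rate_constant * geometry
            * radial_fn(r_cm) * anisotropy_fn(r_cm))

# Toy fits standing in for tabulated g_L(r) and phi_an(r):
g = lambda r: max(0.0, 1.0 - 0.12 * (r - R0))
phi = lambda r: 0.95

d = dose_rate(2.0, air_kerma_strength=5.0, dose_rate_constant=0.965,
              radial_fn=g, anisotropy_fn=phi)
print(f"dose rate at 2 cm: {d:.3f} cGy/h")
```

Swapping updated tabulations into `radial_fn` and `anisotropy_fn` is exactly the "updated dosimetry parameters" refinement the abstract recommends for the planning system.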
NASA Astrophysics Data System (ADS)
Wang, Wenjing; Qiu, Rui; Ren, Li; Liu, Huan; Wu, Zhen; Li, Chunyan; Li, Junli
2017-09-01
Mean glandular dose (MGD) is determined not only by the compressed breast thickness (CBT) and the glandular content, but also by the distribution of glandular tissues in the breast. Depth dose inside the breast in mammography has received wide attention, as glandular dose decreases rapidly with increasing depth. In this study, an experiment using thermoluminescent dosimeters (TLDs) was carried out to validate Monte Carlo simulations of mammography. Percent depth doses (PDDs) at different depths were measured inside simple breast phantoms of different thicknesses. The experimental values were well consistent with the values calculated by Geant4. Then a detailed breast model with a CBT of 4 cm and a glandular content of 50%, constructed in previous work, was used to study the effects of the distribution of glandular tissues in the breast with Geant4. The breast model was reversed in the direction of compression to obtain a reverse model with a different distribution of glandular tissues. Depth dose distributions and glandular tissue dose conversion coefficients were calculated. The conversion coefficients were about 10% larger when the breast model was reversed, because the glandular tissues in the reverse model are concentrated in the upper part of the model.
Characterization and Measurements from the Infrared Grazing Angle Reflectometer
2012-06-14
18 3. List of sample scatter pattern fitting values. All values were taken from Ngan's paper "Experimental Analysis of BRDF Models - Supplemental" [1... using a BRDF model, and the absorptance can be modeled using a Fresnel absorptance. After defining both of these values, we can calculate the power seen... BRDF model of the face of the detector. This paper will examine the case of a flat detector with some index of refraction n. This air-detector
Calculating the nutrient composition of recipes with computers.
Powers, P M; Hoover, L W
1989-02-01
The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
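The difference between two of the named methods can be made concrete with a toy recipe. The sketch below contrasts a summing-raw calculation with a retention-factor calculation; it is a simplified interpretation of those method names, and the ingredient data and retention factors are invented.

```python
# Hedged sketch contrasting two recipe-calculation methods named above.
# Summing (raw): nutrient totals are the sums over raw ingredients.
# Retention factor: each summed nutrient is multiplied by a cooking
# retention factor. Ingredient data and factors are illustrative.

def summing_raw(ingredients):
    """ingredients: list of dicts mapping nutrient name -> amount."""
    return {n: sum(i[n] for i in ingredients) for n in ingredients[0]}

def retention_factor(ingredients, retention):
    raw = summing_raw(ingredients)
    return {n: raw[n] * retention.get(n, 1.0) for n in raw}

pork    = {"thiamin_mg": 1.2, "protein_g": 40.0}
noodles = {"thiamin_mg": 0.3, "protein_g": 12.0}
retention = {"thiamin_mg": 0.70}   # thiamin is heat-sensitive (toy factor)

print(summing_raw([pork, noodles]))
print(retention_factor([pork, noodles], retention))
```

The two methods diverge only for nutrients with retention factors below 1.0, which mirrors the finding that the summing-raw model overstates some nutrients relative to the other methods.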
Evaluation of confidence intervals for a steady-state leaky aquifer model
Christensen, S.; Cooley, R.L.
1999-01-01
The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
Temperature, thermal stresses, and residual creep stresses were studied by comparing laboratory values measured on a built-up titanium structure with values calculated from finite-element models. Several such models were used to examine the relationship between computational thermal stresses and thermal stresses measured on a built-up structure. Element suitability, element density, and computational temperature discrepancies were studied to determine their impact on measured and calculated thermal stress. The optimum number of elements is established from a balance between element density and suitable safety margins, such that the answer is acceptably safe yet is economical from a computational viewpoint. It is noted that situations exist where relatively small excursions of calculated temperatures from measured values result in far more than proportional increases in thermal stress values. Measured residual stresses due to creep significantly exceeded the values computed by the piecewise linear elastic strain analogy approach. The most important element in the computation is the correct definition of the creep law. Computational methodology advances in predicting residual stresses due to creep require significantly more viscoelastic material characterization.
NASA Astrophysics Data System (ADS)
Loeffler, U.; Weible, H.
1981-08-01
The final energy demand of the Federal Republic of Germany was calculated. The MEDEE-2 model describes, for a given region, the final energy consumption of the domestic, service, industry, and transportation sectors as a function of a given distribution of production among individual industrial sectors, of energy-specific values, and of population development. The input data, consisting of constants and variables, and the procedure by which projections of the input data for individual sectors are made are discussed. The results of the calculations are presented and compared. The sensitivity of individual results to variations of the input values is analyzed.
NASA Astrophysics Data System (ADS)
Li, Dongna; Li, Xudong; Dai, Jianfeng
2018-06-01
In this paper, two transient models, a viscoelastic model and a linear elastic model, are established to analyze the curing deformation of thermosetting resin composites, and are solved with the COMSOL Multiphysics software. The two models account for the complicated coupling between physical and chemical changes during the curing process of the composites and for the time-variant material performance parameters. The two proposed models are then applied to a three-dimensional composite laminate structure, and a simple and convenient local coordinate system method is used to calculate the development of residual stresses, curing shrinkage, and curing deformation of the laminate. The results show that the temperature, degree of cure (DOC), and residual stresses during the curing process are consistent with studies in the literature, so the curing shrinkage and curing deformation obtained on this basis have referential value. Comparison of the two numerical results indicates that the residual stress and deformation calculated by the viscoelastic model are closer to the reference values than those of the linear elastic model.
Elaboration of the α-model derived from the BCS theory of superconductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, David C.
2013-10-14
The single-band α-model of superconductivity (Padamsee et al 1973 J. Low Temp. Phys. 12 387) is a popular model that was adapted from the single-band Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity mainly to allow fits to electronic heat capacity versus temperature T data that deviate from the BCS prediction. The model assumes that the normalized superconducting order parameter Δ(T)/Δ(0) and therefore the normalized London penetration depth λL(T)/λL(0) are the same as in BCS theory, calculated using the BCS value αBCS ≈ 1.764 of α ≡ Δ(0)/kBTc, where kB is Boltzmann's constant and Tc is the superconducting transition temperature. On the other hand, to calculate the electronic free energy, entropy, heat capacity and thermodynamic critical field versus T, the α-model takes α to be an adjustable parameter. Here we write the BCS equations and limiting behaviors for the superconducting state thermodynamic properties explicitly in terms of α, as needed for calculations within the α-model, and present plots of the results versus T and α that are compared with the respective BCS predictions. Mechanisms such as gap anisotropy and strong coupling that can cause deviations of the thermodynamics from the BCS predictions, especially the heat capacity jump at Tc, are considered. Extensions of the α-model that have appeared in the literature, such as the two-band model, are also discussed. Tables of values of Δ(T)/Δ(0), the normalized London parameter Λ(T)/Λ(0) and λL(T)/λL(0) calculated from the BCS theory using α = αBCS are provided, which are the same in the α-model by assumption. Tables of values of the entropy, heat capacity and thermodynamic critical field versus T for seven values of α, including αBCS, are also presented.
Model for screened, charge-regulated electrostatics of an eye lens protein: Bovine gammaB-crystallin
Wahle, Christopher W.; Martini, K. Michael; Hollenbeck, Dawn M.; Langner, Andreas; Ross, David S.; Hamilton, John F.; Thurston, George M.
2018-01-01
We model screened, site-specific charge regulation of the eye lens protein bovine gammaB-crystallin (γB) and study the probability distributions of its proton occupancy patterns. Using a simplified dielectric model, we solve the linearized Poisson-Boltzmann equation to calculate a 54 × 54 work-of-charging matrix, each entry being the modeled voltage at a given titratable site, due to an elementary charge at another site. The matrix quantifies interactions within patches of sites, including γB charge pairs. We model intrinsic pK values that would occur hypothetically in the absence of other charges, with use of experimental data on the dependence of pK values on aqueous solution conditions, the dielectric model, and literature values. We use Monte Carlo simulations to calculate a model grand-canonical partition function that incorporates both the work-of-charging and the intrinsic pK values for isolated γB molecules and we calculate the probabilities of leading proton occupancy configurations, for 4 < pH < 8 and Debye screening lengths from 6 to 20 Å. We select the interior dielectric value to model γB titration data. At pH 7.1 and Debye length 6.0 Å, on a given γB molecule the predicted top occupancy pattern is present nearly 20% of the time, and 90% of the time one or another of the first 100 patterns will be present. Many of these occupancy patterns differ in net charge sign as well as in surface voltage profile. We illustrate how charge pattern probabilities deviate from the multinomial distribution that would result from use of effective pK values alone and estimate the extents to which γB charge pattern distributions broaden at lower pH and narrow as ionic strength is lowered. These results suggest that for accurate modeling of orientation-dependent γB-γB interactions, consideration of numerous pairs of proton occupancy patterns will be needed. PMID:29346981
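The sampling scheme described above can be sketched on a toy system: Metropolis Monte Carlo over occupancy patterns sᵢ ∈ {0,1}, with an energy (in kT units) combining intrinsic pK terms, ln(10)·(pH - pKᵢ)·sᵢ, and a pairwise work-of-charging coupling ½ Σ Wᵢⱼ sᵢ sⱼ. The 4-site pK values and coupling matrix below are invented for illustration, not the 54-site γB data.

```python
# Hedged sketch: Metropolis sampling of proton occupancy patterns on a toy
# 4-site system. Energy in kT: E(s) = sum_i ln(10)*(pH - pK_i)*s_i
#                                   + 0.5 * sum_{i!=j} W_ij * s_i * s_j.
# pK values and the symmetric coupling matrix W are invented.
import math, random

PK = [4.5, 6.2, 7.0, 9.8]           # intrinsic pK values (toy)
W = [[0.0, 0.8, 0.2, 0.1],          # work-of-charging couplings, kT (toy)
     [0.8, 0.0, 0.5, 0.2],
     [0.2, 0.5, 0.0, 0.6],
     [0.1, 0.2, 0.6, 0.0]]

def energy(s, ph):
    e = sum(math.log(10) * (ph - pk) * si for pk, si in zip(PK, s))
    e += 0.5 * sum(W[i][j] * s[i] * s[j]
                   for i in range(len(s)) for j in range(len(s)) if i != j)
    return e

def sample_patterns(ph, steps=20000, seed=1):
    rng = random.Random(seed)
    s = [0] * len(PK)
    counts = {}
    for _ in range(steps):
        i = rng.randrange(len(s))       # propose flipping one site
        s2 = list(s); s2[i] = 1 - s2[i]
        # Metropolis acceptance: min(1, exp(-(E_new - E_old)))
        if rng.random() < math.exp(min(0.0, energy(s, ph) - energy(s2, ph))):
            s = s2
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts

top = max(sample_patterns(7.1).items(), key=lambda kv: kv[1])
print("most frequent occupancy pattern:", top[0])
```

Tallying visit counts per pattern is a direct, if simplistic, stand-in for the pattern probabilities the study extracts from its grand-canonical simulations; couplings in W are what push the distribution away from the multinomial form that independent effective pK values would give.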
Wahle, Christopher W; Martini, K Michael; Hollenbeck, Dawn M; Langner, Andreas; Ross, David S; Hamilton, John F; Thurston, George M
2017-09-01
We model screened, site-specific charge regulation of the eye lens protein bovine gammaB-crystallin (γB) and study the probability distributions of its proton occupancy patterns. Using a simplified dielectric model, we solve the linearized Poisson-Boltzmann equation to calculate a 54×54 work-of-charging matrix, each entry being the modeled voltage at a given titratable site, due to an elementary charge at another site. The matrix quantifies interactions within patches of sites, including γB charge pairs. We model intrinsic pK values that would occur hypothetically in the absence of other charges, with use of experimental data on the dependence of pK values on aqueous solution conditions, the dielectric model, and literature values. We use Monte Carlo simulations to calculate a model grand-canonical partition function that incorporates both the work-of-charging and the intrinsic pK values for isolated γB molecules and we calculate the probabilities of leading proton occupancy configurations, for 4
Model for screened, charge-regulated electrostatics of an eye lens protein: Bovine gammaB-crystallin
NASA Astrophysics Data System (ADS)
Wahle, Christopher W.; Martini, K. Michael; Hollenbeck, Dawn M.; Langner, Andreas; Ross, David S.; Hamilton, John F.; Thurston, George M.
2017-09-01
We model screened, site-specific charge regulation of the eye lens protein bovine gammaB-crystallin (γ B ) and study the probability distributions of its proton occupancy patterns. Using a simplified dielectric model, we solve the linearized Poisson-Boltzmann equation to calculate a 54 ×54 work-of-charging matrix, each entry being the modeled voltage at a given titratable site, due to an elementary charge at another site. The matrix quantifies interactions within patches of sites, including γ B charge pairs. We model intrinsic p K values that would occur hypothetically in the absence of other charges, with use of experimental data on the dependence of p K values on aqueous solution conditions, the dielectric model, and literature values. We use Monte Carlo simulations to calculate a model grand-canonical partition function that incorporates both the work-of-charging and the intrinsic p K values for isolated γ B molecules and we calculate the probabilities of leading proton occupancy configurations, for 4
Radiation risk predictions for Space Station Freedom orbits
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Atwell, William; Weyland, Mark; Hardy, Alva C.; Wilson, John W.; Townsend, Lawrence W.; Shinn, Judy L.; Katz, Robert
1991-01-01
Risk assessment calculations are presented for the preliminary proposed solar minimum and solar maximum orbits for Space Station Freedom (SSF). Integral linear energy transfer (LET) fluence spectra are calculated for the trapped proton and GCR environments. Organ dose calculations are discussed using the computerized anatomical man model. The cellular track model of Katz is applied to calculate cell survival, transformation, and mutation rates for various aluminum shields. Comparisons between relative biological effectiveness (RBE) and quality factor (QF) values for SSF orbits are made.
Validation of a dynamic linked segment model to calculate joint moments in lifting.
de Looze, M P; Kingma, I; Bussmann, J B; Toussaint, H M
1992-08-01
A two-dimensional dynamic linked segment model was constructed and applied to a lifting activity. Reactive forces and moments were calculated by an instantaneous approach involving the application of Newtonian mechanics to individual adjacent rigid segments in succession. The analysis started once at the feet and once at a hands/load segment. The model was validated by comparing predicted external forces and moments at the feet or at a hands/load segment to actual values, which were simultaneously measured (ground reaction force at the feet) or assumed to be zero (external moments at feet and hands/load and external forces, beside gravitation, at hands/load). In addition, results of both procedures, in terms of joint moments, including the moment at the intervertebral disc between the fifth lumbar and first sacral vertebra (L5-S1), were compared. A correlation of r = 0.88 between calculated and measured vertical ground reaction forces was found. The calculated external forces and moments at the hands showed only minor deviations from the expected zero level. The moments at L5-S1, calculated starting from feet compared to starting from hands/load, yielded a coefficient of correlation of r = 0.99. However, moments calculated from hands/load were 3.6% (averaged values) and 10.9% (peak values) higher. This difference is assumed to be due mainly to erroneous estimations of the positions of centres of gravity and joint rotation centres. The estimation of the location of L5-S1 rotation axis can affect the results significantly. Despite the numerous studies estimating the load on the low back during lifting on the basis of linked segment models, only a few attempts to validate these models have been made. This study is concerned with the validity of the presented linked segment model. The results support the model's validity. Effects of several sources of error threatening the validity are discussed. Copyright © 1992. Published by Elsevier Ltd.
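The paper's model is not reproduced here, but the bottom-up Newton-Euler step applied to one rigid 2D segment, on which such linked segment models rest, can be sketched as follows. All variable names and the numbers in the check are illustrative, not taken from the study.

```python
# Minimal 2D bottom-up inverse dynamics for one rigid segment, in the spirit
# of the linked-segment approach described above. Illustrative only.

def joint_load(m, I, r_com, r_dist, a_com, alpha, F_dist, M_dist, g=9.81):
    """Reaction force and moment at the proximal joint of a 2D segment.

    m      : segment mass (kg)
    I      : moment of inertia about the segment COM (kg m^2)
    r_com  : (x, y) of COM, r_dist : (x, y) of distal joint, both relative
             to the proximal joint (m)
    a_com  : (ax, ay) linear acceleration of the COM (m/s^2)
    alpha  : angular acceleration (rad/s^2)
    F_dist : (Fx, Fy) force applied by the adjacent distal segment (N)
    M_dist : moment applied by the adjacent distal segment (N m)
    """
    # Newton: F_prox + F_dist + weight = m * a_com
    Fx = m * a_com[0] - F_dist[0]
    Fy = m * a_com[1] - F_dist[1] + m * g

    def cross(r, F):  # 2D cross product r x F (z component)
        return r[0] * F[1] - r[1] * F[0]

    # Euler about the COM: M_prox + M_dist + moment of joint forces = I * alpha
    r_prox = (-r_com[0], -r_com[1])                     # proximal joint rel. COM
    r_d = (r_dist[0] - r_com[0], r_dist[1] - r_com[1])  # distal joint rel. COM
    M_prox = I * alpha - M_dist - cross(r_prox, (Fx, Fy)) - cross(r_d, F_dist)
    return (Fx, Fy), M_prox

# Static check: a horizontal 1 m, 2 kg segment held still, nothing distal.
(Fx, Fy), M = joint_load(m=2.0, I=0.1, r_com=(0.5, 0.0), r_dist=(1.0, 0.0),
                         a_com=(0.0, 0.0), alpha=0.0,
                         F_dist=(0.0, 0.0), M_dist=0.0)
# The joint carries the weight (19.62 N up) and a 9.81 N m support moment.
```

Starting this recursion once at the feet and once at the hands/load, as the authors did, yields two independent estimates of the L5-S1 moment whose agreement tests the model.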
NASA Astrophysics Data System (ADS)
Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc
2017-02-01
Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (modeled deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified the wet inorganic N deposition performance of several widely used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and the 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as percentages of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjust deposition estimates based on deposition measurements to reduce bias, or that spatially interpolate measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty affecting critical load and exceedance calculations, the results demonstrate that expressing bias as a percentage of critical loads, at a spatial scale consistent with the calculations, may be a useful exercise for those performing them.
It may help decide if model performance is adequate for a particular calculation, help assess confidence in calculation results, and highlight cases where a non-deterministic approach may be needed.
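The proposed metric is simple arithmetic; a minimal sketch with entirely hypothetical deposition and critical load numbers:

```python
# Illustrative sketch of the bias metric described above: wet N deposition
# bias expressed as a percentage of a critical load. All values are made up.

def bias_percent_of_cl(modeled, observed, critical_load):
    """Model bias (modeled - observed) as a percentage of a critical load."""
    return 100.0 * (modeled - observed) / critical_load

def exceedance(deposition, critical_load):
    """Critical load exceedance: positive when deposition > critical load."""
    return deposition - critical_load

# Hypothetical site: model predicts 3.0, gauges measure 1.8 kg N/ha/yr,
# diatom critical load 1.5 kg N/ha/yr.
b = bias_percent_of_cl(3.0, 1.8, 1.5)   # bias equals 80% of the critical load
e = exceedance(3.0, 1.5)                # modeled exceedance, 1.5 kg N/ha/yr
```

A bias comparable in size to the critical load itself, as at this hypothetical site, means the sign of the exceedance cannot be trusted.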
Second derivative in the model of classical binary system
NASA Astrophysics Data System (ADS)
Abubekerov, M. K.; Gostev, N. Yu.
2016-06-01
We have obtained analytical expressions for the second derivatives of the light curve with respect to the geometric parameters in the model of eclipsing classical binary systems. These expressions constitute an efficient algorithm for calculating the numerical values of the second derivatives for all physical values of the geometric parameters. Knowledge of the second derivatives of the light curve at a given point provides additional information about the asymptotic behaviour of the function near that point and can significantly improve the search for the best-fitting light curve through the use of second-order optimization methods. We write the expressions for the second derivatives in a form that is compact and uniform for all values of the geometric parameters, which makes it easy to write a computer program to calculate their values.
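The benefit of analytic second derivatives for second-order optimization can be illustrated with a toy one-parameter least-squares problem; this is a generic Newton iteration, not the eclipsing-binary light curve model itself.

```python
# Toy illustration of why analytic second derivatives help: Newton's method
# on a one-parameter least-squares fit. The model here is a simple parabola,
# not the binary-system light curve of the paper.

def newton_minimize(grad, hess, x0, tol=1e-12, max_iter=50):
    """Minimize a smooth 1-D function given its first and second derivatives."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# chi^2(a) = sum_i (y_i - a*t_i^2)^2 for synthetic data y_i = 3*t_i^2
ts = [0.1 * i for i in range(1, 20)]
ys = [3.0 * t * t for t in ts]
grad = lambda a: sum(-2.0 * t * t * (y - a * t * t) for t, y in zip(ts, ys))
hess = lambda a: sum(2.0 * t ** 4 for t in ts)
a_best = newton_minimize(grad, hess, x0=0.0)   # converges to a = 3
```

Because chi-square is quadratic in this toy parameter, Newton's method converges in essentially one step; with light-curve models the same machinery converges quadratically near the minimum.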
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acar, Hilal; Chiu-Tsao, Sou-Tung; Oezbay, Ismail
Purpose: (1) To measure absolute dose distributions in an eye phantom for COMS eye plaques with {sup 125}I seeds (model I25.S16) using radiochromic EBT film dosimetry. (2) To determine the dose correction function for calculations involving the TG-43 formalism to account for the presence of the COMS eye plaque, using the Monte Carlo (MC) method, specific to this seed model. (3) To test the heterogeneous dose calculation accuracy of the new version of Plaque Simulator (v5.3.9) against the EBT film data for this seed model. Methods: Using EBT film, absolute doses were measured for {sup 125}I seeds (model I25.S16) in COMS eye plaques (1) along the plaque's central axis for (a) uniformly loaded plaques (14-20 mm in diameter) and (b) a 20 mm plaque with a single seed, and (2) in the off-axis direction at depths of 5 and 12 mm for all four plaque sizes. The EBT film calibration was performed at {sup 125}I photon energy. MC calculations using the MCNP5 code for a single seed at the center of a 20 mm plaque in homogeneous water and polystyrene medium were performed. The heterogeneity dose correction function was determined from the MC calculations. These function values at various depths were entered into the PS software (v5.3.9) to calculate the heterogeneous dose distributions for the uniformly loaded plaques (of all four sizes). The dose distributions with homogeneous water assumptions were also calculated using PS for comparison. The EBT film measured absolute dose rate values (film) were compared with those calculated using PS with the homogeneous assumption (PS Homo) and with heterogeneity correction (PS Hetero). The values of the dose ratios (film/PS Homo) and (film/PS Hetero) were obtained. Results: The central axis depth dose rate values for a single seed in a 20 mm plaque measured using EBT film and calculated with the MCNP5 code (both in a polystyrene phantom) were compared, and agreement within 9% was found.
The dose ratio (film/PS Homo) values were substantially lower than unity (mostly between 0.8 and 0.9) for all four plaque sizes, indicating dose reduction by COMS plaque compared with homogeneous assumption. The dose ratio (film/PS Hetero) values were close to unity, indicating the PS Hetero calculations agree with those from the film study. Conclusions: Substantial heterogeneity effect on the {sup 125}I dose distributions in an eye phantom for COMS plaques was verified using radiochromic EBT film dosimetry. The calculated doses for uniformly loaded plaques using PS with heterogeneity correction option enabled were corroborated by the EBT film measurement data. Radiochromic EBT film dosimetry is feasible in measuring absolute dose distributions in eye phantom for COMS eye plaques loaded with single or multiple {sup 125}I seeds. Plaque Simulator is a viable tool for the calculation of dose distributions if one understands its limitations and uses the proper heterogeneity correction feature.
Ramirez-Sandoval, Juan C; Castilla-Peón, Maria F; Gotés-Palazuelos, José; Vázquez-García, Juan C; Wagner, Michael P; Merelo-Arias, Carlos A; Vega-Vega, Olynka; Rincón-Pedrero, Rodolfo; Correa-Rotter, Ricardo
2016-06-01
Ramirez-Sandoval, Juan C., Maria F. Castilla-Peón, José Gotés-Palazuelos, Juan C. Vázquez-García, Michael P. Wagner, Carlos A. Merelo-Arias, Olynka Vega-Vega, Rodolfo Rincón-Pedrero, and Ricardo Correa-Rotter. Bicarbonate values for healthy residents living in cities above 1500 m of altitude: a theoretical model and systematic review. High Alt Med Biol. 17:85-92, 2016.-Plasma bicarbonate (HCO3(-)) concentration is the main value used to assess the metabolic component of the acid-base status. There is limited information regarding plasma HCO3(-) values adjusted for altitude for people living in cities at high altitude, defined as 1500 m (4921 ft) or more above sea level. Our aim was to estimate the plasma HCO3(-) concentration in residents of cities at these altitudes using a theoretical model, and to compare these values with the HCO3(-) values found in a systematic review and with the venous CO2 values obtained in a sample of 633 healthy individuals living at an altitude of 2240 m (7350 ft). We calculated the PCO2 using linear regression models and calculated plasma HCO3(-) according to the Henderson-Hasselbalch equation. Results show that the HCO3(-) concentration falls as the altitude of a city increases. For each 1000 m of altitude above sea level, HCO3(-) decreases by 0.55 and 1.5 mEq/L in subjects living at sea level with acute exposure to altitude and in subjects acclimatized to altitude, respectively. Estimated HCO3(-) values from the theoretical model did not differ from the HCO3(-) values found in the publications of the systematic review or from the venous total CO2 measurements in our sample. Altitude has to be taken into consideration in the calculation of HCO3(-) concentrations in cities above 1500 m to avoid an overdiagnosis of acid-base disorders in a given individual.
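The Henderson-Hasselbalch step of such a model can be sketched with the standard clinical constants (pK' = 6.1, CO2 solubility 0.03 mmol/L per mmHg); the paper's PCO2-vs-altitude regressions are not reproduced here, so the PCO2 inputs are illustrative.

```python
# Sketch of the Henderson-Hasselbalch calculation used above:
#   pH = pK' + log10(HCO3- / (alpha * PCO2))
# rearranged for HCO3-. Constants are the standard clinical values; the
# PCO2 values below are illustrative, not the paper's regression output.

def bicarbonate(ph, pco2_mmhg, pk=6.1, alpha=0.03):
    """Plasma HCO3- (mEq/L) from pH and PCO2."""
    return alpha * pco2_mmhg * 10.0 ** (ph - pk)

hco3_sea = bicarbonate(7.40, 40.0)   # ~24 mEq/L at sea level
hco3_alt = bicarbonate(7.40, 33.0)   # lower PCO2 at altitude -> lower HCO3-
```

At constant pH, any altitude-driven fall in PCO2 translates directly into a proportional fall in calculated HCO3-, which is the paper's central point.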
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2013 CFR
2013-07-01
... intended for sale at high altitude, the Administrator may use fuel economy data from tests conducted on... from the tests performed using gasoline or diesel test fuel. (ii) If 5-cycle testing was performed on the alcohol or natural gas test fuel, calculate the city and highway fuel economy values from the...
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
Code of Federal Regulations, 2012 CFR
2012-07-01
... intended for sale at high altitude, the Administrator may use fuel economy data from tests conducted on... from the tests performed using gasoline or diesel test fuel. (ii) If 5-cycle testing was performed on the alcohol or natural gas test fuel, calculate the city and highway fuel economy values from the...
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor in GPU-based calculations. However, for the calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
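The rescaling idea itself, independent of the GPU implementation, can be sketched as follows. This reflects our reading of the general single-run rescaling technique (simulate once, reweight detected photons for new absorption via Beer-Lambert), not the authors' code; the path lengths are made up.

```python
import math

# Core idea behind rescaling a single Monte Carlo run: simulate once with
# negligible absorption, record each detected photon's total path length,
# then reweight for any absorption coefficient mu_a via Beer-Lambert.
# Illustrative path lengths below are invented.

def diffuse_reflectance(path_lengths_cm, mu_a_per_cm, n_launched):
    """Reflectance for a new mu_a from path lengths of detected photons."""
    total = sum(math.exp(-mu_a_per_cm * L) for L in path_lengths_cm)
    return total / n_launched

paths = [0.5, 1.2, 2.0, 3.3, 0.8]       # cm, from one baseline simulation
r_low = diffuse_reflectance(paths, 0.1, n_launched=10)
r_high = diffuse_reflectance(paths, 1.0, n_launched=10)  # more absorption
```

Because the reweighting is an independent multiply-exp-accumulate per photon, it maps naturally onto GPU threads, which is what makes the reported sub-millisecond evaluation times plausible.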
Superposition model analysis of the magnetocrystalline anisotropy of Ba-ferrite
NASA Astrophysics Data System (ADS)
Novák, Pavel
1994-06-01
Theoretical analysis of the first magnetocrystalline anisotropy constant K1 of BaFe12O19 is performed. Two contributions to K1 are considered: single-ion anisotropy and dipolar anisotropy. The parameter D, which determines the magnitude of the single-ion contribution, is calculated on the basis of the superposition model. It is argued that the disagreement between calculated and observed values of K1 is most likely connected with the contribution of Fe3+ ions on bipyramidal sites, for which the value of D is uncertain.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection is one of the most important problems in photogrammetry; it aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, which effectively avoids the difficulty of determining initial values encountered when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
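The RANSAC structure described here can be sketched with a 2D line standing in for the DLT camera model, so the sketch stays short; the loop (sample a minimal set, fit, count inliers, keep the best hypothesis) is the same, while the photogrammetric details are not reproduced.

```python
import random

# Generic RANSAC loop of the kind described above, with a 2D line as the
# stand-in model. In the paper the model is the DLT and the minimal sample
# is a set of control points; here two points define a line.

def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)                    # fit y = a*x + b
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < thresh]
        if len(inliers) > len(best_inliers):         # keep best consensus set
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 10 points on y = 2x + 1 plus two gross errors that RANSAC should reject.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40.0), (7, -25.0)]
(a, b), inliers = ransac_line(pts)
```

The gross errors never enter the final fit because no hypothesis built on them gathers a larger consensus set than one built from two clean observations.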
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
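A posterior predictive p-value is typically estimated by Monte Carlo as the fraction of replicated datasets whose discrepancy statistic is at least as extreme as the observed one; a generic textbook sketch (not Lucy's Hubble-expansion test problem) follows.

```python
import random

# Monte Carlo estimate of a posterior predictive p-value: draw parameters
# from the posterior, simulate a replicated dataset for each draw, and count
# how often the replicated discrepancy T exceeds the observed one. Here the
# replicated T values are faked with a Gaussian for illustration.

def posterior_predictive_p(observed_T, replicate_Ts):
    return sum(t >= observed_T for t in replicate_Ts) / len(replicate_Ts)

rng = random.Random(1)
reps = [rng.gauss(10.0, 2.0) for _ in range(1000)]   # stand-in replicate T's
p = posterior_predictive_p(10.0, reps)               # ~0.5: no misfit signal
```

The expense lies in generating the replicated datasets, which is why a cheap global goodness-of-fit proxy, as the abstract reports, is attractive.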
Identification of the numerical model of FEM in reference to measurements in situ
NASA Astrophysics Data System (ADS)
Jukowski, Michał; Bec, Jarosław; Błazik-Borowa, Ewa
2018-01-01
The paper deals with the verification of various numerical models against pilot-phase measurements of a rail bridge subjected to dynamic loading. Three types of FEM models were elaborated for this purpose, and static, modal and dynamic analyses were performed. The study consisted of measuring the acceleration values of the structural components of the bridge as trains passed. Based on these data, FFT analysis was performed, the main natural frequencies of the bridge were determined, and the structural damping ratio and the dynamic amplification factor (DAF) were calculated and compared with the standard values. Calculations were made using Autodesk Simulation Multiphysics (Algor).
Bushland Reference ET Calculator with QA/QC capabilities and iPhone/iPad application
USDA-ARS?s Scientific Manuscript database
Accurate daily reference evapotranspiration (ET) values are needed to estimate crop water demand for irrigation management and hydrologic modeling purposes. The USDA-ARS Conservation and Production Research Laboratory at Bushland, Texas developed the Bushland Reference ET (BET) Calculator for calcul...
Improving deep convolutional neural networks with mixed maxout units.
Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
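Our reading of the mixout computation can be sketched for a single spatial position as follows; the exact placement of the Bernoulli switch and the absence of a temperature parameter are assumptions, not the authors' formulation.

```python
import math
import random

# Sketch of a mixout unit as described above: a softmax-weighted expectation
# of the k feature-mapping responses at one position, mixed with the plain
# maximum via a Bernoulli draw. Interpretation of the abstract, not the
# authors' exact definition.

def mixout(values, p_max=0.5, rng=random):
    """values: the k responses at one spatial position (one per pathway)."""
    m = max(values)
    exps = [math.exp(v - m) for v in values]          # numerically stable softmax
    z = sum(exps)
    expected = sum(v * e / z for v, e in zip(values, exps))
    # Bernoulli switch between the max (plain maxout) and the expectation.
    return m if rng.random() < p_max else expected

rng = random.Random(0)
outs = [mixout([1.0, 2.0, 4.0], p_max=0.5, rng=rng) for _ in range(1000)]
# Each output is either the max (4.0) or the softmax expectation (~3.65).
```

The softmax expectation lies between the mean and the max, so non-maximal features contribute without drowning out the dominant one, which matches the stated motivation.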
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. In order to test the rationality and validity of the reported models, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to equipment and the damage states, and on the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values. A relationship between the mean square error and the two boundary values was thereby obtained, and the minimum of the mean square error was found; compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
NASA Astrophysics Data System (ADS)
Schauberger, G.; Piringer, M.; Petz, E.
The indoor climate of livestock buildings is of importance for the well-being and health of animals and for their production performance (daily weight gain, milk yield, etc.). Using a steady-state model for the sensible and latent heat fluxes and the CO2 and odour mass flows, the indoor climate of mechanically ventilated livestock buildings can be calculated. These equations depend on the livestock (number of animals and how they are kept), the insulation of the building, and the characteristics of the ventilation system (ventilation rate). Since the model can only be applied to animal houses where the ventilation system is mechanically controlled (the case for a majority of finishing pig units), the calculations were done for an example finishing pig unit with 1000 animal places. The model used 30 min values of the outdoor temperature and humidity, collected over a 2-year period, as input. The projected environment inside the livestock building was compared with recommended values. The duration of condensation on the inside surfaces was also calculated.
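The steady-state sensible heat balance at the core of such a model can be sketched generically; the parameter values below (animal heat output, envelope loss coefficient, ventilation rates) are illustrative assumptions, not the paper's.

```python
# Generic steady-state sensible heat balance of the kind underlying the
# model above: animal sensible heat = transmission loss (U*A) plus
# ventilation loss. All parameter values are illustrative.

RHO_CP = 1200.0   # volumetric heat capacity of air, J/(m^3 K)

def indoor_temperature(q_sensible_w, ua_w_per_k, vent_m3_per_s, t_out_c):
    """Indoor temperature from Q = (U*A + rho*cp*V) * (T_in - T_out)."""
    return t_out_c + q_sensible_w / (ua_w_per_k + RHO_CP * vent_m3_per_s)

# 1000 finishing pigs at ~100 W sensible heat each, modest envelope losses:
t_in = indoor_temperature(q_sensible_w=100_000.0, ua_w_per_k=2000.0,
                          vent_m3_per_s=10.0, t_out_c=10.0)
# Higher ventilation pulls the indoor temperature toward the outdoor value:
t_in_high = indoor_temperature(100_000.0, 2000.0, 40.0, 10.0)
```

Driving such a balance with a long series of 30 min outdoor temperature and humidity values, as the paper does, yields the distribution of indoor conditions to compare against recommended values.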
Skrzyński, Witold
2014-11-01
The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half value layer (HVL) and the dose profile, and was thus similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan-beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of the filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from the doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root mean square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM78 spectra performed equally well. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
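The HVL-matching step can be sketched as follows: for a candidate spectrum and filtration, compute the aluminum thickness that halves the transmitted intensity and compare it with the measured HVL. The two-bin "spectrum" and attenuation coefficients below are invented for illustration; a real implementation would use tabulated energy-dependent coefficients.

```python
import math

# Sketch of half-value-layer computation for a polyenergetic beam, the
# quantity matched during spectrum/filtration modeling above. The two-bin
# spectrum and mu values are made up for illustration only.

def transmission(spectrum, mu_al_per_mm, t_mm):
    """Fraction of (fluence-weighted) intensity left after t_mm of Al."""
    total = sum(spectrum)
    left = sum(w * math.exp(-mu * t_mm)
               for w, mu in zip(spectrum, mu_al_per_mm))
    return left / total

def hvl_mm(spectrum, mu_al_per_mm, t_max=50.0, tol=1e-8):
    """Al thickness at which transmission drops to 0.5, by bisection."""
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmission(spectrum, mu_al_per_mm, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two-energy toy beam: a soft component that attenuates fast, a hard one
# that attenuates slowly; the mixture hardens as filtration increases.
hvl = hvl_mm(spectrum=[0.4, 0.6], mu_al_per_mm=[0.5, 0.1])
```

In the matching procedure, the candidate spectrum and flat-filter thickness are adjusted until this computed HVL agrees with the measured one.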
Calculation of Expectation Values of Operators in the Complex Scaling Method
Papadimitriou, G.
2016-06-14
The complex scaling method (CSM) provides a way to obtain resonance parameters of particle-unstable states by rotating the coordinates and momenta of the original Hamiltonian. It is convenient to use an L2-integrable basis to resolve the complex rotated (complex scaled) Hamiltonian Hθ, with θ being the angle of rotation in the complex energy plane. Within the CSM, resonance and scattering solutions have fall-off asymptotics. One consequence is that expectation values of operators in a resonance or scattering complex-scaled solution are calculated by complex rotating the operators. In this work we explore applications of the CSM to calculations of expectation values of quantum mechanical operators by using the regularized back-rotation technique, which allows the expectation value to be calculated with the unrotated operator. The test cases involve a schematic two-body Gaussian model as well as applications using realistic interactions.
An ecological compensation standard based on emergy theory for the Xiao Honghe River Basin.
Guan, Xinjian; Chen, Moyu; Hu, Caihong
2015-01-01
The calculation of an ecological compensation standard is an important, but also difficult aspect of current ecological compensation research. In this paper, the factors affecting the ecological-economic system in the Xiao Honghe River Basin, China, including the flow of energy, materials, and money, were calculated using the emergy analysis method. A consideration of the relationships between the ecological-economic value of water resources and ecological compensation allowed the ecological-economic value to be calculated. On this basis, the amount of water needed for dilution was used to develop a calculation model for the ecological compensation standard of the basin. Using the Xiao Honghe River Basin as an example, the value of water resources and the ecological compensation standard were calculated using this model according to the emission levels of the main pollutant in the basin, chemical oxygen demand. The compensation standards calculated for the research areas in Xipin, Shangcai, Pingyu, and Xincai were 34.91 yuan/m3, 32.97 yuan/m3, 35.99 yuan/m3, and 34.70 yuan/m3, respectively, and such research output would help to generate and support new approaches to the long-term ecological protection of the basin and improvement of the ecological compensation system.
Classification of customer lifetime value models using Markov chain
NASA Astrophysics Data System (ADS)
Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi
2017-10-01
A firm's potential future reward from a customer can be quantified by the customer lifetime value (CLV). There are several mathematical methods to calculate it; one uses a Markov chain stochastic model. Here, a customer is assumed to move through a set of states, with transitions between the states following the Markov property. Given the states for a customer and the relationships between the states, we can build Markov models that describe the behaviour of the customer. In these models, the CLV is defined as a vector that contains the CLV for a customer in the first state. In this paper we present a classification of Markov models for calculating CLV. Starting from a two-state customer model, we develop models with many states, with each development based on the weaknesses of the previous model. The final models can be expected to describe the real behaviour of a firm's customers.
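For a concrete two-state example, the CLV vector can be computed by iterating v ← r + d·P·v until it converges to the discounted sum Σ d^t P^t r; the transition probabilities, rewards, and discount factor below are illustrative, not taken from the paper.

```python
# Sketch of the Markov-chain CLV computation described above, for a simple
# two-state model (state 0 = active customer, state 1 = lapsed). All
# numerical values are illustrative.

def clv(P, r, d=0.9, n_periods=200):
    """CLV vector v = sum_{t>=0} d^t P^t r, truncated after n_periods."""
    n = len(r)
    v = [0.0] * n
    for _ in range(n_periods):   # backward sweep: v <- r + d * P @ v
        v = [r[i] + d * sum(P[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v

P = [[0.8, 0.2],    # active customer stays active with probability 0.8
     [0.0, 1.0]]    # lapsed is absorbing
r = [100.0, 0.0]    # expected profit per period in each state
v = clv(P, r)       # v[0] is the CLV of a currently active customer
```

For this absorbing two-state chain the sum has the closed form v0 = 100/(1 - 0.9 x 0.8), about 357, which the iteration reproduces.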
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taborda, A; Benabdallah, N; Desbree, A
2015-06-15
Purpose: To perform a dosimetry study at the sub-cellular scale of the Auger-electron emitter 99m-Tc using a mouse single thyroid cell model, to investigate the contribution of the 99m-Tc Auger electrons to the absorbed dose and a possible link to the thyroid stunning recently reported in in vivo experiments in mice. Methods: The simulation of S-values for Auger-electron emitting radionuclides was performed using both the recent MCNP6 software and the Geant4-DNA extension of the Geant4 toolkit. The dosimetric calculations were validated through comparison with results from the literature, using a simple model of a single cell consisting of two concentric spheres of unit density water, for six Auger-electron emitting radionuclides. Furthermore, S-values were calculated using a single thyroid follicle model for uniformly distributed 123-I and 125-I radionuclides and compared with published S-values. After validation, the simulation of the S-values was performed for the 99m-Tc radionuclide within the several mouse thyroid follicle cellular compartments, considering the radiative and non-radiative transitions of the 99m-Tc radiation spectrum. Results: The calculated S-values using MCNP6 are in good agreement with the results from the literature, validating its use for the 99m-Tc S-value calculations. The most significant absorbed dose corresponds to the case where the radionuclide is uniformly distributed in the follicular cell's nucleus, with an S-value of 7.8 mGy/disintegration, due mainly to the absorbed Auger electrons. The results show that, at a sub-cellular scale, the emitted X-rays and gamma particles do not contribute significantly to the absorbed dose. Conclusion: In this work, MCNP6 was validated for dosimetric studies at the sub-cellular scale. It was shown that the contribution of the Auger electrons to the absorbed dose is important at this scale compared to the emitted photons' contribution and cannot be neglected.
The obtained S-values of the Auger-electron emitting 99m-Tc radionuclide will be presented and discussed.
SAR in a child voxel phantom from exposure to wireless computer networks (Wi-Fi).
Findlay, R P; Dimbylow, P J
2010-08-07
Specific energy absorption rate (SAR) values have been calculated in a 10 year old sitting voxel model from exposure to electromagnetic fields at 2.4 and 5 GHz, frequencies commonly used by Wi-Fi devices. Both plane-wave exposure of the model and irradiation from antennas in the near field were investigated for a variety of exposure conditions. In all situations studied, the SAR values calculated were considerably below basic restrictions. For a typical Wi-Fi exposure scenario using an inverted F antenna operating at 100 mW, a duty factor of 0.1 and an antenna-body separation of 34 cm, the maximum peak localized SAR was found to be 3.99 mW kg(-1) in the torso region. At 2.4 GHz, using a power of 100 mW and a duty factor of 1, the highest localized SAR value in the head was calculated as 5.7 mW kg(-1). This represents less than 1% of the SAR previously calculated in the head for a typical mobile phone exposure condition.
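The local quantity behind such results is SAR = σ|E|²/ρ; a textbook sketch (not the authors' voxel-model field calculation), with muscle-like but illustrative tissue values, follows.

```python
# Textbook point-SAR sketch: SAR = sigma * |E_rms|^2 / rho, scaled by a
# duty factor as in the exposure scenarios above. Tissue values here are
# muscle-like but illustrative, not the phantom's.

def local_sar(sigma_s_per_m, e_rms_v_per_m, rho_kg_per_m3, duty_factor=1.0):
    """Specific energy absorption rate in W/kg."""
    return duty_factor * sigma_s_per_m * e_rms_v_per_m ** 2 / rho_kg_per_m3

# Muscle-like tissue near 2.4 GHz (conductivity ~1.7 S/m, density
# ~1050 kg/m^3) in a weak 3 V/m field, with the 0.1 duty factor used for
# intermittent Wi-Fi traffic:
sar = local_sar(1.7, 3.0, 1050.0, duty_factor=0.1)   # ~1.5 mW/kg
```

In a full assessment this point quantity is computed in every voxel from the simulated internal field and then mass-averaged over 1 g or 10 g for comparison with basic restrictions.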
Average value of the shape and direction factor in the equation of refractive index
NASA Astrophysics Data System (ADS)
Zhang, Tao
2017-10-01
The theoretical calculation of refractive indices is of great significance for the development of new optical materials. The calculation method for the refractive index, which was deduced from the electron-cloud-conductor model, contains the shape and direction factor 〈g〉. 〈g〉 affects the electromagnetic-induction energy absorbed by the electron clouds, thereby influencing the refractive indices. It was not previously known how to calculate the 〈g〉 value of non-spherical electron clouds. In this paper, the 〈g〉 value is derived by imaginatively dividing the electron cloud into numerous small volume elements and then regrouping them. This paper proves that 〈g〉 = 2/3 when the spatial orientations of the molecules are randomly distributed. Calculations of the refractive indices of several substances validate this equation. This result will help to promote the application of this method of calculating the refractive index.
Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow
NASA Astrophysics Data System (ADS)
Kemerink, G. J.; Pleiter, F.
1986-08-01
The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
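For orientation, the recoilless fraction in the simpler Debye model (not the Kagan-Maslov nearest-neighbour model used in the paper) reduces at low temperature to f = exp(−3E_R / 2k_Bθ_D), with E_R the free-atom recoil energy. A sketch for the 14.4 keV transition of ⁵⁷Fe, with θ_D ≈ 470 K an assumed Debye temperature:

```python
# Debye-model recoilless fraction in the T -> 0 limit; a baseline sketch,
# not the nearest-neighbour calculation of Kagan and Maslow.
import math

K_B_EV = 8.617333e-5   # Boltzmann constant, eV/K
AMU_EV = 931.494e6     # atomic mass unit, eV/c^2

def recoil_energy_ev(e_gamma_ev, mass_amu):
    """Free-atom recoil energy E_R = E_gamma^2 / (2 M c^2)."""
    return e_gamma_ev**2 / (2.0 * mass_amu * AMU_EV)

def recoilless_fraction_t0(e_gamma_ev, mass_amu, theta_d_k):
    """f = exp(-3 E_R / (2 k_B theta_D)) at T -> 0."""
    e_r = recoil_energy_ev(e_gamma_ev, mass_amu)
    return math.exp(-3.0 * e_r / (2.0 * K_B_EV * theta_d_k))

# 57Fe, 14.4 keV Moessbauer transition; theta_D ~ 470 K is an assumed value.
f = recoilless_fraction_t0(14.4e3, 57.0, 470.0)
```

The lattice-model refinements in the paper change ⟨u²⟩, and hence f, relative to this single-parameter Debye estimate.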
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications
Bäck, Anna
2013-01-01
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865
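Of the two NTCP models named above, the LKB model maps a dose-volume histogram to a complication probability via the generalized EUD and a probit function. A minimal sketch with hypothetical DVH bins and placeholder parameters (TD50, m, n below are not the paper's fitted values):

```python
# LKB NTCP sketch: gEUD from a differential DVH, then a probit transform.
# DVH and parameter values are hypothetical placeholders.
import math

def geud(doses_gy, volumes, n):
    """Generalized EUD: (sum v_i * D_i^(1/n))^n, v_i normalized to sum to 1."""
    a = 1.0 / n
    total_v = sum(volumes)
    return sum(v / total_v * d**a for d, v in zip(doses_gy, volumes)) ** n

def lkb_ntcp(doses_gy, volumes, td50, m, n):
    """NTCP = Phi(t), t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses_gy, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical lung DVH (dose bins in Gy, fractional volumes):
p = lkb_ntcp([5, 15, 25, 35], [0.4, 0.3, 0.2, 0.1], td50=30.0, m=0.35, n=1.0)
```

Because the probit is steep near TD50, the modest gEUD shifts between dose algorithms described above translate into the reported NTCP differences, which is why algorithm-specific parameters matter.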
NASA Astrophysics Data System (ADS)
Plegnière, Sabrina; Casper, Markus; Hecker, Benjamin; Müller-Fürstenberger, Georg
2014-05-01
Annual means of temperature and precipitation form the basis of many models that calculate and assess climate change and its consequences. This approach leads to considerable uncertainty, especially at the regional or local level, where the results are unrealistic or too coarse. Particularly in agriculture, single events and the distribution of precipitation and temperature during the growing season have an enormous influence on plant growth. The temporal distribution of climate variables should therefore not be ignored. To this end, a high-resolution ecological-economic model was developed that combines a complex plant growth model (STICS) with an economic model. In this framework, the input data of the plant growth model are daily climate values for a specific climate station calculated by the statistical climate model WETTREG. The economic model is deduced from the results of the plant growth model STICS. Corn was chosen as the crop because it is widely cultivated and used in many different ways. First, a sensitivity analysis showed that the plant growth model STICS is suitable for calculating, in a realistic way, the influences of different cultivation methods and climate on plant growth and yield as well as on soil fertility (e.g. through nitrate leaching). Additional simulations helped to assess a production function, which is the key element of the economic model. In doing so, the problems of using mean values of temperature and precipitation to compute a production function by linear regression are pointed out. Several examples show why a linear regression to assess a production function based on mean climate values or a smoothed natural distribution leads to imperfect results, and why it is not possible to deduce a unique climate factor in the production function. One solution to this problem is the additional consideration of stress indices that indicate the impairment of plants by water or nitrate shortage. 
Thus, the resulting model takes into account not only ecological factors (e.g. plant growth) and economic factors as a simple monetary calculation, but also their mutual influences. Finally, the ecological-economic model enables us to carry out risk assessments and to evaluate adaptation strategies.
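The failure mode of regressing yield on seasonal means can be illustrated with a toy example (not the STICS model): a threshold-type water-stress response makes yield depend on the distribution of rainfall, so two seasons with identical mean precipitation produce very different yields.

```python
# Toy illustration: daily yield response with a water-stress threshold.
# Two seasons share the same total/mean precipitation but differ in its
# temporal distribution, so a production function fitted on seasonal
# means cannot distinguish them. All numbers are hypothetical.

STRESS_MM = 2.0   # hypothetical daily water requirement, mm

def season_yield(daily_precip_mm):
    """Relative yield: each day below the threshold costs growth."""
    good_days = sum(1 for p in daily_precip_mm if p >= STRESS_MM)
    return good_days / len(daily_precip_mm)

even = [2.0] * 30                    # 60 mm spread evenly over 30 days
bursty = [30.0, 30.0] + [0.0] * 28   # the same 60 mm in two storms

print(season_yield(even), season_yield(bursty))  # 1.0 vs ~0.067
```

A regression on the 60 mm seasonal total assigns both seasons the same predicted yield, which is precisely the imperfection the abstract describes; stress indices recover the lost distributional information.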
Guo, Xiaoya; Zhu, Jian; Maehara, Akiko; Monoly, David; Samady, Habib; Wang, Liang; Billiar, Kristen L.; Zheng, Jie; Yang, Chun; Mintz, Gary S.; Giddens, Don P.; Tang, Dalin
2016-01-01
Computational models have been used to calculate plaque stress and strain for plaque progression and rupture investigations. An intravascular ultrasound (IVUS)-based modeling approach is proposed to quantify in vivo vessel material properties for more accurate stress/strain calculations. In vivo Cine IVUS and VH-IVUS coronary plaque data were acquired from one patient with informed consent obtained. Cine IVUS data and 3D thin-slice models with axial stretch were used to determine patient-specific vessel material properties. Twenty full 3D fluid–structure interaction models with ex vivo and in vivo material properties and various axial and circumferential shrink combinations were constructed to investigate the material stiffness impact on stress/strain calculations. The approximate circumferential Young’s modulus over stretch ratio interval [1.0, 1.1] for an ex vivo human plaque sample and two slices (S6 and S18) from our IVUS data were 1631, 641, and 346 kPa, respectively. Average lumen stress/strain values from models using ex vivo, S6 and S18 materials with 5 % axial shrink and proper circumferential shrink were 72.76, 81.37, 101.84 kPa and 0.0668, 0.1046, and 0.1489, respectively. The average cap strain values from S18 material models were 150–180 % higher than those from the ex vivo material models. The corresponding percentages for the average cap stress values were 50–75 %. Dropping axial and circumferential shrink consideration led to stress and strain over-estimations. In vivo vessel material properties may be considerably softer than those from ex vivo data. Material stiffness variations may cause 50–75 % stress and 150–180 % strain variations. PMID:27561649
Bartolino, James R.
2007-01-01
A numerical flow model of the Spokane Valley-Rathdrum Prairie aquifer currently (2007) being developed requires the input of values for areally-distributed recharge, a parameter that is often the most uncertain component of water budgets and ground-water flow models because it is virtually impossible to measure over large areas. Data from six active weather stations in and near the study area were used in four recharge-calculation techniques or approaches: the Langbein method, in which recharge is estimated on the basis of empirical data from other basins; a method developed by the U.S. Department of Agriculture (USDA), in which crop consumptive use and effective precipitation are first calculated and then subtracted from actual precipitation to yield an estimate of recharge; an approach developed as part of the Eastern Snake Plain Aquifer Model (ESPAM) Enhancement Project, in which recharge is calculated on the basis of precipitation-recharge relations from other basins; and an approach in which reference evapotranspiration is calculated by the Food and Agriculture Organization (FAO) Penman-Monteith equation, crop consumptive use is determined (using a single or dual coefficient approach), and recharge is calculated. Annual recharge calculated by the Langbein method for the six weather stations was 4 percent of annual mean precipitation, yielding the lowest values of the methods discussed in this report; however, the Langbein method can only be applied to annual time periods. Mean monthly recharge calculated by the USDA method ranged from 53 to 73 percent of mean monthly precipitation. Mean annual recharge ranged from 64 to 69 percent of mean annual precipitation. Separate mean monthly recharge calculations were made with the ESPAM method using initial input parameters to represent thin-soil, thick-soil, and lava-rock conditions. The lava-rock parameters yielded the highest recharge values and the thick-soil parameters the lowest. 
For thin-soil parameters, calculated monthly recharge ranged from 10 to 29 percent of mean monthly precipitation and annual recharge ranged from 16 to 23 percent of mean annual precipitation. For thick-soil parameters, calculated monthly recharge ranged from 1 to 5 percent of mean monthly precipitation and mean annual recharge ranged from 2 to 4 percent of mean annual precipitation. For lava-rock parameters, calculated mean monthly recharge ranged from 37 to 57 percent of mean monthly precipitation and mean annual recharge ranged from 45 to 52 percent of mean annual precipitation. Single-coefficient (crop coefficient) FAO Penman-Monteith mean monthly recharge values were calculated for Spokane Weather Service Office (WSO) Airport, the only station for which the necessary meteorological data were available. Grass-referenced values of mean monthly recharge ranged from 0 to 81 percent of mean monthly precipitation and mean annual recharge was 21 percent of mean annual precipitation; alfalfa-referenced values of mean monthly recharge ranged from 0 to 85 percent of mean monthly precipitation and mean annual recharge was 24 percent of mean annual precipitation. Single-coefficient FAO Penman-Monteith calculations yielded a mean monthly recharge of zero during the eight warmest and driest months of the year (March-October). In order to refine the mean monthly recharge estimates, dual-coefficient (basal crop and soil evaporation coefficients) FAO Penman-Monteith dual-crop evapotranspiration and deep-percolation calculations were applied to daily values from the Spokane WSO Airport for January 1990 through December 2005. The resultant monthly totals display a temporal variability that is absent from the mean monthly values and demonstrate that the daily amount and timing of precipitation dramatically affect calculated recharge. 
The dual-coefficient FAO Penman-Monteith calculations were made for the remaining five stations using wind-speed values for Spokane WSO Airport and other assumptions regarding
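The dual-coefficient approach described above can be caricatured as a daily single-bucket water balance: crop evapotranspiration is (Kcb + Ke)·ET0, and water in excess of the soil's holding capacity becomes deep percolation (recharge). A generic sketch, with all parameter values hypothetical rather than taken from the report:

```python
# Generic daily soil-water-balance sketch in the spirit of the FAO
# dual-coefficient method. Kcb, Ke, capacity and the forcing series
# below are illustrative placeholders, not the report's values.

def daily_recharge(precip_mm, et0_mm, kcb, ke, capacity_mm, storage0_mm=0.0):
    """ETc = (Kcb + Ke) * ET0; storage above capacity percolates."""
    storage = storage0_mm
    recharge = []
    for p, et0 in zip(precip_mm, et0_mm):
        etc = (kcb + ke) * et0
        storage = max(0.0, storage + p - etc)
        dp = max(0.0, storage - capacity_mm)   # deep percolation
        storage -= dp
        recharge.append(dp)
    return recharge

r = daily_recharge(precip_mm=[10, 0, 25, 0], et0_mm=[2, 3, 2, 3],
                   kcb=0.4, ke=0.6, capacity_mm=20.0)
print(r)  # [0.0, 0.0, 8.0, 0.0]
```

Note how a single 25 mm day generates recharge while the same water spread thinly would not, which is the temporal-variability effect the report emphasizes.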
NASA Astrophysics Data System (ADS)
Wei, L.; Marshall, J. D.
2007-12-01
3PG (Physiological Principles in Predicting Growth), a process-based physiological model of forest productivity, has been widely used and well validated. Based on 3PG, a 3PG-δ13C model to simulate δ13C content in plant tissue was built in this research. 3PG calculates carbon assimilation from utilizable absorbed photosynthetically active radiation (PAR), and calculates stomatal conductance from maximum canopy conductance multiplied by a physiological modifier that includes the effects of water vapor deficit and soil water. The equation of Farquhar and Sharkey (1982) was then used to calculate δ13C content in plant tissue. Five even-aged coniferous forest stands located near Clarkia, Idaho (47°15'N, 115°25'W) in the Mica Creek Experimental Watershed (MCEW) were chosen to test the model (two stands had been partially cut, with 50% canopy removal in 1990, and three were uncut). MCEW has been extensively investigated since 1990 and many of the parameters needed for 3PG are readily available. Each of these sites is located near a UI meteorological station, which has recorded half-hourly climatic data since 2003. These site-specific climatic data were extended back to 1991 by correlating them with data from a nearby SNOTEL station (SNOwpack TELemetry, NRCS, 47°9' N, 116°16' W). Forest mensuration data were obtained from each stand using variable radius plots (VRP). Three tree species, which constitute more than 95% of all trees, were parameterized for the 3PG model: grand fir (Abies grandis Donn ex D. Don), western red cedar (Thuja plicata Donn ex D. Don) and Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.) Franco). Because four of the five stands have mixed species, we also used parameters for mixed stands to run the model. To stabilize the model, it was initially run under average climatic data for 20 years, and then run under the actual climatic data from 1991 to 2006. As 3PG runs on a monthly time step, monthly δ13C values were calculated first, and yearly values were then calculated as weighted averages. 
For testing the model, tree cores were collected from each stand and species. Ring widths of the tree cores were measured and cross-dated with a ring-width chronology obtained from MCEW. δ13C contents of tree-ring samples from known years were analyzed. Preliminary results indicate that the 3PG-δ13C simulated values are consistent with the observed values in tree rings. The δ13C values of the modeled species differ: western red cedar has the highest δ13C values among the three species and western larch has the lowest.
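The isotope step above commonly uses the simplified Farquhar relation for C3 discrimination, δ_plant = δ_air − a − (b − a)·ci/ca, with diffusional fractionation a ≈ 4.4‰ and carboxylation fractionation b ≈ 27‰. A sketch (whether the paper uses exactly this simplified form, and its δ_air value, are assumptions here):

```python
# Simplified Farquhar model of C3 carbon isotope discrimination.
# a = 4.4 and b = 27 per mil are standard values; delta_air ~ -8 per mil
# is an assumed atmospheric composition.

def delta13c_leaf(ci_over_ca, delta_air=-8.0, a=4.4, b=27.0):
    """delta_plant = delta_air - a - (b - a) * ci/ca  (per mil)."""
    return delta_air - a - (b - a) * ci_over_ca

# Lower ci/ca (e.g. stomatal closure under drought) -> less negative delta13C
print(delta13c_leaf(0.7), delta13c_leaf(0.5))  # about -28.2 and -23.7 per mil
```

Since 3PG supplies assimilation and stomatal conductance, ci/ca follows from their ratio, which is how the model chain couples water stress to the tree-ring isotope signal.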
2014-07-01
different value and pressing Enter. The PRC-Calc session can be saved for future use with these new values using the Save Session button in the upper...describe (a) how the PRC Correction Calculator (PRC-Calc) uses the model of Fernandez et al. (2009), (b) how well its performance compares against...experimental data, (c) how the user may prepare their computer with software to use the PRC calculator, and then (d) how to use PRC-Calc to process PRC
NASA Astrophysics Data System (ADS)
Fu, Wei-Jie; Liu, Yu-Xin; Wu, Yue-Liang
2010-01-01
We study fluctuations of conserved charges including baryon number, electric charge, and strangeness, as well as the correlations among these conserved charges, in the 2+1 flavor Polyakov-Nambu-Jona-Lasinio model at finite temperature. The calculated results are compared with those obtained from recent lattice calculations performed with an improved staggered fermion action at two values of the lattice cutoff, with almost physical up and down quark masses and a physical value for the strange quark mass. We find that our calculated results agree well with those obtained in lattice calculations, except for some quantitative differences in fluctuations related to strange quarks. Our calculations indicate that there is a pronounced cusp in the ratio of the quartic to quadratic fluctuations of baryon number, i.e. χ4^B/χ2^B, at the critical temperature during the phase transition, which confirms that χ4^B/χ2^B is a useful probe of the deconfinement and chiral phase transition.
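The ratio χ4^B/χ2^B compares the fourth and second cumulants of the net baryon number distribution. A useful baseline (not the PNJL calculation): if baryons and antibaryons are produced as independent Poisson counts, net baryon number follows a Skellam distribution, for which χ4/χ2 = 1 exactly. A sampling sketch verifying this:

```python
# Baseline check: net baryon number as the difference of two independent
# Poisson variables (Skellam distribution) has cumulant4/cumulant2 = 1.
# Deviations of chi4/chi2 from 1 are what signal critical behavior.
import math, random

def sample_poisson(lam, rng):
    """Knuth's algorithm; adequate for small lambda."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def cumulant_ratio_4_2(samples):
    n = len(samples)
    mean = sum(samples) / n
    c = [s - mean for s in samples]
    m2 = sum(x**2 for x in c) / n
    m4 = sum(x**4 for x in c) / n
    k4 = m4 - 3.0 * m2**2            # fourth cumulant
    return k4 / m2

rng = random.Random(0)
net_b = [sample_poisson(1.0, rng) - sample_poisson(1.0, rng)
         for _ in range(200_000)]
print(cumulant_ratio_4_2(net_b))  # ~1, up to sampling noise
```

The cusp reported in the abstract is a departure of χ4^B/χ2^B from such hadron-resonance-gas-like baselines near the transition temperature.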
NASA Astrophysics Data System (ADS)
Kim, Chan Hyeong; Hyoun Choi, Sang; Jeong, Jong Hwi; Lee, Choonsik; Chung, Min Suk
2008-08-01
A Korean voxel model, named 'High-Definition Reference Korean-Man (HDRK-Man)', was constructed using high-resolution color photographic images that were obtained by serially sectioning the cadaver of a 33-year-old Korean adult male. The body height and weight, the skeletal mass and the dimensions of the individual organs and tissues were adjusted to the reference Korean data. The resulting model was then implemented into a Monte Carlo particle transport code, MCNPX, to calculate the dose conversion coefficients for the internal organs and tissues. The calculated values, overall, were reasonable in comparison with the values from other adult voxel models. HDRK-Man showed higher dose conversion coefficients than other models because it has a smaller torso and its arms are shifted backward. The developed model is believed to adequately represent average Korean radiation workers and thus can be used for more accurate calculation of dose conversion coefficients for Korean radiation workers in the future.
Mutual information and the fidelity of response of gene regulatory models
NASA Astrophysics Data System (ADS)
Tabbaa, Omar P.; Jayaprakash, C.
2014-08-01
We investigate cellular response to extracellular signals by using information theory techniques motivated by recent experiments. We present results for the steady state of the following gene regulatory models found in both prokaryotic and eukaryotic cells: a linear transcription-translation model and a positive or negative auto-regulatory model. We calculate both the information capacity and the mutual information exactly for simple models and approximately for the full model. We find that (1) small changes in mutual information can lead to potentially important changes in cellular response and (2) there are diminishing returns in the fidelity of response as the mutual information increases. We calculate the information capacity using Gillespie simulations of a model for the TNF-α-NF-κ B network and find good agreement with the measured value for an experimental realization of this network. Our results provide a quantitative understanding of the differences in cellular response when comparing experimentally measured mutual information values of different gene regulatory models. Our calculations demonstrate that Gillespie simulations can be used to compute the mutual information of more complex gene regulatory models, providing a potentially useful tool in synthetic biology.
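The central quantity above, mutual information between an extracellular signal and the cellular response, reduces for discrete states to I(X;Y) = Σ p(x,y) log₂[p(x,y)/(p(x)p(y))]. A minimal sketch (a generic calculator, not the paper's TNF-α-NF-κB network model):

```python
# Discrete mutual information from a joint pmf given as a 2-D list.
import math

def mutual_information_bits(joint):
    """I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    px = [sum(row) for row in joint]                 # marginal over y
    py = [sum(col) for col in zip(*joint)]           # marginal over x
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# Perfectly correlated binary signal/response: exactly 1 bit.
print(mutual_information_bits([[0.5, 0.0], [0.0, 0.5]]))   # 1.0
# Independent signal and response: 0 bits.
print(mutual_information_bits([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

In the stochastic setting of the paper, the joint distribution would be estimated from Gillespie-simulated (signal, copy-number) samples and fed to the same formula.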
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
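The claimed flow, partition training data by operating mode, train a submodel per mode, then at run time select the submodel for the current mode and judge asset status from the residuals, can be caricatured with trivial per-mode submodels (a sketch of the scheme's structure, not the patented implementation):

```python
# Sketch of mode-partitioned surveillance. The per-mode "submodel" here is
# just the per-channel mean of that mode's training subset; real submodels
# would be full parameter-estimation models. All data are hypothetical.

def train_submodels(training_sets):
    """One submodel per operating mode from the partitioned training data."""
    models = {}
    for mode, rows in training_sets.items():
        n = len(rows)
        models[mode] = [sum(col) / n for col in zip(*rows)]
    return models

def surveil(models, mode, observed, tol):
    """Select the submodel for the determined mode, compute estimated
    signal values, and flag the asset if any residual exceeds tol."""
    estimated = models[mode]
    residuals = [o - e for o, e in zip(observed, estimated)]
    status = "alarm" if any(abs(r) > tol for r in residuals) else "normal"
    return estimated, status

models = train_submodels({
    "startup": [[1.0, 10.0], [1.2, 10.4]],
    "steady":  [[5.0, 50.0], [5.2, 49.6]],
})
est, status = surveil(models, "steady", observed=[5.1, 49.9], tol=1.0)
```

Partitioning matters because a single global model would flag normal startup readings as anomalies when judged against steady-state expectations.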
DFT and AIM study of the protonation of nitrous acid and the pKa of nitrous acidium ion.
Crugeiras, Juan; Ríos, Ana; Maskill, Howard
2011-11-10
The gas phase and aqueous thermochemistry, NMR chemical shifts, and the topology of chemical bonding of nitrous acid (HONO) and the nitrous acidium ion (H2ONO+) have been investigated by ab initio methods using density functional theory. By the same methods, the dissociation of H2ONO+ to give the nitrosonium ion (NO+) and water has also been investigated. We have used Becke's hybrid functional (B3LYP), and geometry optimizations were performed with the 6-311++G(d,p) basis set. In addition, highly accurate ab initio composite methods (G3 and CBS-Q) were used. Solvation energies were calculated using the conductor-like polarizable continuum model, CPCM, at the B3LYP/6-311++G(d,p) level of theory, with the UAKS cavity model. The pKa value of H2ONO+ was calculated using two different schemes: the direct method and the proton exchange method. The calculated pKa values at different levels of theory range from -9.4 to -15.6, showing that H2ONO+ is a strong acid (i.e., HONO is only a weak base). The equilibrium constant, KR, for protonation of nitrous acid followed by dissociation to give NO+ and H2O has also been calculated using the same methodologies. The pKR value calculated by the G3 and CBS-QB3 methods is in best (and satisfactory) agreement with experimental results, which allows us to narrow down the likely value of the pKa of H2ONO+ to about -10, a value appreciably more acidic than literature values.
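The "direct method" mentioned above assembles the aqueous deprotonation free energy from a thermodynamic cycle and converts it to a pKa. A sketch with illustrative numbers (the gas-phase and solute solvation energies below are hypothetical, not the paper's; −265.9 kcal/mol for ΔG_solv(H+) is a commonly used literature value, and standard-state corrections are omitted for brevity):

```python
# Direct thermodynamic-cycle pKa sketch for AH+ -> A + H+.
import math

R_KCAL = 1.987204e-3   # gas constant, kcal/(mol K)
T = 298.15             # K

def pka_direct(dg_gas_kcal, dg_solv_acid, dg_solv_base,
               dg_solv_proton=-265.9):
    """dG_aq = dG_gas + dG_solv(A) + dG_solv(H+) - dG_solv(AH+);
    pKa = dG_aq / (RT ln 10)."""
    dg_aq = dg_gas_kcal + dg_solv_base + dg_solv_proton - dg_solv_acid
    return dg_aq / (R_KCAL * T * math.log(10))

# Illustrative inputs only (not the paper's computed energies):
pka = pka_direct(dg_gas_kcal=190.0, dg_solv_acid=-65.0, dg_solv_base=-8.0)
```

With these placeholder energies the cycle yields a strongly negative pKa, the same qualitative conclusion the paper reaches for H2ONO+; the proton exchange method avoids ΔG_solv(H+) by referencing a second acid of known pKa.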
Code of Federal Regulations, 2013 CFR
2013-07-01
... nearest 0.1 mpg; or (iii) For natural gas-fueled model types, the fuel economy value calculated for that... as determined in § 600.208-12(b)(5)(i). (vi) For natural gas dual fuel model types, for model years... natural gas as determined in § 600.208-12(b)(5)(ii) divided by 0.15 provided the requirements of paragraph...
Code of Federal Regulations, 2014 CFR
2014-07-01
... nearest 0.1 mpg; or (iii) For natural gas-fueled model types, the fuel economy value calculated for that... as determined in § 600.208-12(b)(5)(i). (vi) For natural gas dual fuel model types, for model years... natural gas as determined in § 600.208-12(b)(5)(ii) divided by 0.15 provided the requirements of paragraph...
Estimates of atmospheric O2 in the Paleoproterozoic from paleosols
NASA Astrophysics Data System (ADS)
Kanzaki, Yoshiki; Murakami, Takashi
2016-02-01
A weathering model was developed to constrain the partial pressure of atmospheric O2 (PO2) in the Paleoproterozoic from the Fe records in paleosols. The model describes the Fe behavior in a weathering profile by dissolution/precipitation of Fe-bearing minerals, oxidation of dissolved Fe(II) to Fe(III) by oxygen and transport of dissolved Fe by water flow, in steady state. The model calculates the ratio of the precipitated Fe(III)-(oxyhydr)oxides from the dissolved Fe(II) to the dissolved Fe(II) during weathering (ϕ), as a function of PO2 . An advanced kinetic expression for Fe(II) oxidation by O2 was introduced into the model from the literature to calculate accurate ϕ-PO2 relationships. The model's validity is supported by the consistency of the calculated ϕ-PO2 relationships with those in the literature. The model can calculate PO2 for a given paleosol, once a ϕ value and values of the other parameters relevant to weathering, namely, pH of porewater, partial pressure of carbon dioxide (PCO2), water flow, temperature and O2 diffusion into soil, are obtained for the paleosol. The above weathering-relevant parameters were scrutinized for individual Paleoproterozoic paleosols. The values of ϕ, temperature, pH and PCO2 were obtained from the literature on the Paleoproterozoic paleosols. The parameter value of water flow was constrained for each paleosol from the mass balance of Si between water and rock phases and the relationships between water saturation ratio and hydraulic conductivity. The parameter value of O2 diffusion into soil was calculated for each paleosol based on the equation for soil O2 concentration with the O2 transport parameters in the literature. Then, we conducted comprehensive PO2 calculations for individual Paleoproterozoic paleosols which reflect all uncertainties in the weathering-relevant parameters. 
Consequently, robust estimates of PO2 in the Paleoproterozoic were obtained: 10^-7.1 to 10^-5.4 atm at ∼2.46 Ga, 10^-5.0 to 10^-2.5 atm at ∼2.15 Ga, 10^-5.2 to 10^-1.7 atm at ∼2.08 Ga and more than 10^-4.6 to 10^-2.0 atm at ∼1.85 Ga. Comparison of the present PO2 estimates with those in the literature suggests that a drastic rise of oxygen would not have occurred at ∼2.4 Ga, supporting a slightly rapid rise of oxygen at ∼2.4 Ga and a gradual rise of oxygen in the Paleoproterozoic in the long term.
NASA Astrophysics Data System (ADS)
Romanenko, Yu. E.; Merkin, A. A.; Komarov, A. A.; Lefedova, O. V.
2014-08-01
The kinetics of the hydrogenation of intermediates in the reduction of nitrobenzene in aqueous 2-propanol with acetic acid and sodium hydroxide additions on nickel catalysts was studied. A kinetic description of liquid-phase hydrogenation of azobenzene and phenylhydroxylamine was suggested. A kinetic model was developed. The dependences that characterize the variation of the amounts of the starting compound, reaction product, and absorbed hydrogen during the reaction were calculated. The calculated values were shown to be in satisfactory agreement with the experimental values under different reaction conditions.
A vibration model for centrifugal contactors
NASA Astrophysics Data System (ADS)
Leonard, R. A.; Wasserman, M. O.; Wygmans, D. G.
1992-11-01
Using the transfer matrix method, we created the Excel worksheet 'Beam' for analyzing vibrations in centrifugal contactors. With this worksheet, a user can calculate the first natural frequency of the motor/rotor system for a centrifugal contactor. We determined a typical value for the bearing stiffness (k_B) of a motor after measuring the k_B value for three different motors. The k_B value is an important parameter in this model, but it is not normally available for motors. The assumptions that we made in creating the Beam worksheet were verified by comparing the calculated results with those from a VAX computer program, BEAM IV. The Beam worksheet was applied to several contactor designs for which we have experimental data and found to work well.
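While the Beam worksheet uses the transfer matrix method for the distributed motor/rotor assembly, the role of the bearing stiffness k_B is easiest to see in a lumped single-degree-of-freedom surrogate, f₁ = (1/2π)√(k_B/m). A sketch with hypothetical stiffness and mass values:

```python
# Single-DOF estimate of the first natural frequency. This is a one-mass
# surrogate for intuition; the transfer matrix method generalizes it to
# distributed beam elements. k and m values below are hypothetical.
import math

def first_natural_frequency_hz(k_n_per_m, mass_kg):
    """f1 = (1 / 2*pi) * sqrt(k / m)."""
    return math.sqrt(k_n_per_m / mass_kg) / (2.0 * math.pi)

f1 = first_natural_frequency_hz(k_n_per_m=2.0e6, mass_kg=20.0)
print(f1)  # ~50 Hz
```

Because f₁ scales with √k_B, an uncertain bearing stiffness feeds directly into the predicted critical speed, which is why measuring k_B for real motors mattered.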
Use of Navier-Stokes methods for the calculation of high-speed nozzle flow fields
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
1994-01-01
Flows through three reference nozzles have been calculated to determine the capabilities and limitations of the widely used Navier-Stokes solver, PARC. The nozzles examined have dominant flow characteristics similar to those considered for supersonic transport programs. Flows from an inverted velocity profile (IVP) nozzle, an underexpanded nozzle, and an ejector nozzle were examined. PARC calculations were obtained with its standard algebraic turbulence model (Thomas) and the two-equation Chien k-epsilon turbulence model. The Thomas model was run with the default mixing coefficient of 0.09 and with a larger value of 0.13 to improve the mixing prediction. Calculations using the default value substantially underpredicted the mixing for all three flows. The calculations obtained with the higher mixing coefficient better predicted mixing in the IVP and underexpanded nozzle flows but adversely affected PARC's convergence characteristics for the IVP nozzle case. The ejector nozzle case did not converge with the Thomas model and the higher mixing coefficient. The Chien k-epsilon results were in better agreement with the experimental data overall than those of the Thomas model run with the default mixing coefficient, but the default boundary conditions for k and epsilon underestimated the levels of mixing near the nozzle exits.
THE ON-SITE ON-LINE TOOL FOR SITE ASSESSMENT CALCULATIONS
State and Federal Agency personnel often receive modeling reports with undocumented parameter values. The reports give parameter values, but often no indication if the value was measured, taken from the literature, the result of calibration, or some type of estimate. Recent examp...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600.208-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR...
Photolysis Rate Coefficient Calculations in Support of SOLVE II
NASA Technical Reports Server (NTRS)
Swartz, William H.
2005-01-01
A quantitative understanding of photolysis rate coefficients (or "j-values") is essential to determining the photochemical reaction rates that define ozone loss and other crucial processes in the atmosphere. j-values can be calculated with radiative transfer models, derived from actinic flux observations, or inferred from trace gas measurements. The primary objective of the present effort was the accurate calculation of j-values in the Arctic twilight along NASA DC-8 flight tracks during the second SAGE III Ozone Loss and Validation Experiment (SOLVE II), based in Kiruna, Sweden (68 degrees N, 20 degrees E) during January-February 2003. The JHU/APL radiative transfer model was utilized to produce a large suite of j-values for photolysis processes (over 70 reactions) relevant to the upper troposphere and lower stratosphere. The calculations take into account the actual changes in ozone abundance and apparent albedo of clouds and the Earth's surface along the aircraft flight tracks, as observed by in situ and remote sensing platforms (e.g., EP-TOMS). A secondary objective was to analyze solar irradiance data from NCAR's Direct beam Irradiance Atmospheric Spectrometer (DIAS) on board the NASA DC-8 and to begin the development of a flexible, multi-species spectral fitting technique for the independent retrieval of O3, O2·O2, and aerosol optical properties.
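A j-value is the spectral integral of actinic flux F(λ), absorption cross section σ(λ), and quantum yield φ(λ): j = ∫ F σ φ dλ. A numerical sketch on a deliberately coarse, hypothetical grid (not real O3 data or the JHU/APL model):

```python
# Photolysis rate coefficient by trapezoidal integration of
# F(lambda) * sigma(lambda) * phi(lambda). F in photons cm^-2 s^-1 nm^-1,
# sigma in cm^2, phi dimensionless -> j in s^-1. Values are hypothetical.

def j_value(wavelengths_nm, actinic_flux, cross_section, quantum_yield):
    integrand = [f * s * q for f, s, q in
                 zip(actinic_flux, cross_section, quantum_yield)]
    j = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        j += 0.5 * (integrand[i] + integrand[i + 1]) * dw  # trapezoid rule
    return j

j = j_value([300, 310, 320],            # nm
            [1e13, 2e13, 3e13],         # actinic flux
            [2e-19, 1e-19, 5e-20],      # cross section
            [0.9, 0.5, 0.1])            # quantum yield
```

Radiative transfer enters through F(λ), which is where the ozone column, cloud albedo, and twilight geometry along the flight tracks change the result.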
Artificial Neural Network L* from different magnetospheric field models
NASA Astrophysics Data System (ADS)
Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.
2011-12-01
The third adiabatic invariant L* plays an important role in modeling and understanding radiation belt dynamics. The popular way to obtain the L* value numerically follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique that can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information is required to trace the drift shell. A series of currently popular empirical magnetic field models are applied to create the L* data pool, using 1 million data samples randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* calculation. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can yield different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of radiation belt charged particles is and which mechanism dominates in accelerating them. This calls for care in choosing a magnetospheric field model for the L* calculation.
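The surrogate idea can be illustrated with a toy fit (a sketch, not the actual L* network or its training data): a one-hidden-layer MLP is trained on a cheap stand-in function of two "geomagnetic" inputs, after which evaluation costs only two small matrix products.

```python
# Toy surrogate-model fit, illustrative only. The stand-in target below
# plays the role of an expensive L* computation; the real network is
# trained on drift-shell tracings from magnetospheric field models.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))                     # toy inputs
y = 4.0 + X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1]  # stand-in "L*"

H = 16                                                     # hidden units
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):                      # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    gpred = 2.0 * (pred - y)[:, None] / len(X)   # d(MSE)/d(pred)
    gW2 = h.T @ gpred
    gb2 = gpred.sum(0)
    gh = (gpred @ W2.T) * (1.0 - h**2)           # backprop through tanh
    gW1 = X.T @ gh
    gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
rmse = float(np.sqrt(np.mean(((h @ W2 + b2).ravel() - y) ** 2)))
```

Once trained, evaluating the network is orders of magnitude cheaper than re-running the drift-shell tracing it emulates, which is the speedup the abstract describes; the cost is that the network inherits whatever biases the chosen field model has.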
NASA Astrophysics Data System (ADS)
Findlay, R. P.; Dimbylow, P. J.
2006-05-01
Finite-difference time-domain (FDTD) calculations have been performed to investigate the frequency dependence of the specific energy absorption rate (SAR) in a seated voxel model of the human body. The seated model was derived from NORMAN (NORmalized MAN), an anatomically realistic voxel phantom in the standing posture with arms to the side. Exposure conditions included both vertically and horizontally polarized plane wave electric fields between 10 MHz and 3 GHz. The resolution of the voxel model was 4 mm for frequencies up to 360 MHz and 2 mm for calculations in the higher frequency range. The reduction in voxel size permitted the calculation of SAR at these higher frequencies using the FDTD method. SAR values have been calculated for the seated adult phantom and scaled versions representing 10-, 5- and 1-year-old children under isolated and grounded conditions. These scaled models do not exactly reproduce the dimensions and anatomy of children, but represent good geometric information for a seated child. Results show that, when the field is vertically polarized, the sitting position causes a second, smaller resonance condition not seen in resonance curves for the phantom in the standing posture. This occurs at ~130 MHz for the adult model when grounded. Partial-body SAR calculations indicate that the upper and lower regions of the body have their own resonant frequency at ~120 MHz and ~160 MHz, respectively, when the grounded adult model is orientated in the sitting position. These combine to produce this second resonance peak in the whole-body averaged SAR values calculated. Two resonance peaks also occur for the sitting posture when the incident electric field is horizontally polarized. For the adult model, the peaks in the whole-body averaged SAR occur at ~180 and ~600 MHz. These peaks are due to resonance in the arms and feet, respectively. 
Layer absorption plots and colour images of SAR in individual voxels show the specific regions in which the seated human body absorbs the incident field. External electric field values required to produce the ICNIRP basic restrictions were derived from SAR calculations and compared with ICNIRP reference levels. This comparison shows that the reference levels provide a conservative estimate of the ICNIRP whole-body averaged SAR restriction, with the exception of the region above 1.4 GHz for the scaled 1-year-old model.
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to the computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it estimates the EVSI accurately. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on additional simulation, so it can be expensive in models with a large computational cost.
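The preposterior-mean idea can be illustrated with a toy conjugate model (this is not the authors' algorithm, just the concept it approximates): in a normal-normal setting, the posterior mean of the incremental net benefit after a trial of size n is itself normally distributed before the data arrive, and the EVSI is the expected value of the decision made with that posterior mean, minus the value of the current decision. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior on the incremental net benefit (INB) of a new treatment; the numbers
# below are illustrative assumptions, not from the paper's case study.
mu0, sd0 = 500.0, 2000.0       # prior mean and sd of INB (currency units)
sd_obs = 8000.0                # per-patient sampling sd of observed INB
n = 100                        # proposed trial size

# Normal-normal conjugacy: before the trial runs, the posterior mean of INB
# (the "preposterior mean") is normal with sd = sd0^2 / sqrt(sd0^2 + sd_obs^2/n).
tau = sd0 ** 2 / np.sqrt(sd0 ** 2 + sd_obs ** 2 / n)

# Simulate future posterior means and value the decision they would support:
# adopt the new treatment only if its posterior mean INB is positive.
post_means = rng.normal(mu0, tau, size=200_000)
evsi = float(np.mean(np.maximum(post_means, 0.0)) - max(mu0, 0.0))
```

The moment-matching method in the paper estimates the spread of this preposterior-mean distribution from existing probabilistic sensitivity analysis runs instead of nested simulation, which is where the computational saving comes from.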
2009-11-17
set of chains, the step adds scheduled methods that have an a priori likelihood of a failure outcome (Lines 3-5). It identifies the max eul value of the...activity meeting its objective, as well as its expected contribution to the schedule. By explicitly calculating these values, PADS is able to summarize the...variables. One of the main difficulties of this model is convolving the probability density functions and value functions while solving the model; this
Hakin, A W; Hedwig, G R
2001-02-15
A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Calculations of the values of C°p,2 and V°2 for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.
NASA Technical Reports Server (NTRS)
Jenkins, J. M.
1979-01-01
A laboratory heating test simulating hypersonic heating was conducted on a heat-sink type structure to provide basic thermal stress measurements. Six NASTRAN models utilizing various combinations of bar, shear panel, membrane, and plate elements were used to develop calculated thermal stresses. Thermal stresses were also calculated using a beam model. For a given temperature distribution there was very little variation in NASTRAN-calculated thermal stresses when element types were interchanged for a given grid system. Thermal stresses calculated for the beam model were similar to the values obtained for the NASTRAN models. Calculated thermal stresses generally agreed well with laboratory-measured thermal stresses. A discrepancy of significance occurred between the measured and predicted thermal stresses in the skin areas. A minor anomaly in the laboratory skin heating uniformity resulted in inadequate temperature input data for the structural models.
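The basic mechanism behind such skin stresses is textbook material: a fully constrained element heated by ΔT develops a compressive stress σ = −EαΔT. The sketch below uses illustrative properties (roughly aluminum), not values from the report's test article.

```python
# Thermal stress in a fully constrained element: sigma = -E * alpha * dT.
# Material properties below are illustrative (roughly aluminum), not taken
# from the report's test article.
E = 70e9         # Young's modulus, Pa
alpha = 23e-6    # coefficient of thermal expansion, 1/K
dT = 100.0       # temperature rise of the constrained skin, K

sigma = -E * alpha * dT   # compressive stress induced in the hot skin, Pa
```

A real structure is only partially constrained, which is why finite-element models such as the NASTRAN ones above are needed; this formula just bounds the order of magnitude.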
Pricing of premiums for equity-linked life insurance based on joint mortality models
NASA Astrophysics Data System (ADS)
Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.
2018-03-01
Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables has become less relevant in the calculation of premiums. To overcome this problem, we use a combined mortality model, which in this study is determined from the Indonesian Mortality Table 2011, to obtain the probabilities of death and survival. In this research, we use a combined mortality model based on the Weibull, Inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we calculate the value of the claims to be paid and the premium price numerically. By calculating equity-linked life insurance premiums well, it is expected that no party will be disadvantaged by inaccurate calculation results.
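As a rough illustration of premium pricing under a parametric mortality law, the sketch below prices an n-year term insurance of 1 under a Gompertz force of mortality μ(x) = B·cˣ. The parameters B, c, and the interest rate are invented for illustration and are not the combined model or the Indonesian 2011 table used in the paper.

```python
import math

# Gompertz force of mortality: mu(x) = B * c**x (illustrative parameters).
B, c = 3e-5, 1.10
v = 1.0 / 1.05        # annual discount factor at 5% interest

def survival(x, t):
    """t-year survival probability for a life aged x under Gompertz:
    exp(-B/ln(c) * c**x * (c**t - 1))."""
    return math.exp(-B / math.log(c) * (c ** x) * (c ** t - 1.0))

def term_insurance_nsp(x, n):
    """Net single premium for an n-year term insurance of 1 on a life aged x,
    with the death benefit paid at the end of the year of death."""
    nsp = 0.0
    for k in range(n):
        p_k = survival(x, k)                  # survive k full years
        q_k = p_k - survival(x, k + 1)        # then die in year k+1
        nsp += v ** (k + 1) * q_k
    return nsp

premium = term_insurance_nsp(40, 10)
```

Replacing `survival` with probabilities from a combined Weibull/Inverse-Weibull/Gompertz fit is the step the paper performs; the discounting arithmetic is unchanged.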
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type Retrospective secondary analysis of prospectively acquired clinical research data. Population Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model, the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results Using the standard model, mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R2=0.980, P < 0.0001). 
Data Conclusion Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence 3 Technical Efficacy Stage 2 PMID:28851124
Sepúlveda, P O; Mora, X
2012-12-01
The first-order plasma-effect-site equilibration rate constant (ke0) links the pharmacokinetics (PK) and pharmacodynamics (PD) of a given drug. This constant, calculated for each specific PK drug model, allows us to predict the course of the effect in a target controlled infusion (TCI). The PK-PD model of propofol published by Schnider et al. calculated a ke0 value of 0.456 min⁻¹ and a corresponding time to peak effect (tpeak) of 1.6 min. The aim of this study was to re-evaluate the ke0 value for the Schnider propofol model with data from a complete effect curve obtained by monitoring the bispectral index (BIS). The study included 35 healthy adult patients (18-90 years) scheduled for elective surgery with standard monitoring and using the BIS XP® (Aspect), who received a propofol infusion to reach a plasma target of 12 μg/ml in 4 min. The infusion was then stopped, obtaining a complete effect curve when the patient woke up. The Anestfusor™ (University of Chile) software was used to control the infusion pumps, calculate the plasma concentration predicted by the Schnider PK model, and store the BIS data every second. Loss (LOC) and recovery (ROC) of consciousness were assessed and recorded. Using a traditional parametric method based on the "ke0 objective function" of the PK-PD tools for Excel, the individual and population ke0 were calculated. Predictive Smith tests (Pk) and Student's t test were used for statistical analysis. P<.05 indicated significance. The evaluation included 21 male and 14 female patients (18 to 90 years). We obtained 1,001 (±182) EEG data points and the corresponding calculated plasma concentration for each case. The population ke0 obtained was 0.144 min⁻¹ (SD±0.048), very different from the original model (P<.001). This value corresponds to a tpeak of 2.45 min. The predictive performance (Pk) for the new model was 0.9 (SD±0.03), but only 0.78 (SD±0.06) for the original (P<.001). 
With a baseline BIS of 95.8 (SD±2.34), the BIS at LOC was 77.48 (SD±9.6) and 74.65 (SD±6.3) at ROC (P=.027). The calculated Ce in the original model at LOC and ROC were 5.9 (SD±1.35) and 1.08 μg/ml (SD±0.32) (P<.001), respectively, and 2.3 (SD±0.63) and 2.0 μg/ml (SD±0.65) (NS) for the new model. The values between LOC/ROC were significantly different between the two models (P<.001). No differences in ke0 value were found between males and females, but in the new model the ke0 was affected by age as a covariable (0.26-[age×0.0022]) (P<.05). The dynamic relationship between propofol plasma concentrations predicted by Schnider's pharmacokinetic model and its hypnotic effect measured with BIS was better characterized with a smaller ke0 value (slower t½ke0) than that in the original model, with an age effect also not described before. Copyright © 2011 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.
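The plasma-effect-site link the study re-estimates is the first-order ODE dCe/dt = ke0·(Cp − Ce). A minimal Euler integration with a mono-exponential plasma decay (toy kinetics, not Schnider's three-compartment model, so the peak times below do not match the paper's tpeak values) shows the qualitative point: a smaller ke0 delays the peak effect-site concentration.

```python
import numpy as np

def effect_site(ke0, cp, dt):
    """Integrate dCe/dt = ke0 * (Cp - Ce) by forward Euler, with Ce(0) = 0."""
    ce = np.zeros_like(cp)
    for i in range(1, len(cp)):
        ce[i] = ce[i - 1] + dt * ke0 * (cp[i - 1] - ce[i - 1])
    return ce

dt = 0.001                               # minutes
t = np.arange(0.0, 10.0, dt)
# Toy plasma profile after stopping a short infusion: mono-exponential decay
# from the 12 ug/ml target (illustrative kinetics only).
cp = 12.0 * np.exp(-0.5 * t)

peaks = {}
for ke0 in (0.456, 0.144):               # original vs re-estimated constants
    ce = effect_site(ke0, cp, dt)
    peaks[ke0] = float(t[np.argmax(ce)]) # time of peak effect-site conc.
```

A slower ke0 means the effect site lags the plasma for longer, which is exactly why the re-estimated model places the peak effect later than the original.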
Correct fair market value calculation needed to avoid regulatory challenges.
Dietrich, M O
1997-09-01
In valuing a physician practice for acquisition, it is important for buyers and sellers to distinguish between fair market value and strategic value. Although many buyers would willingly pay for the strategic value of a practice, tax-exempt buyers are required by law to consider only the fair market value in setting a bid price. Valuators must adjust group earnings to exclude items that do not apply to any willing seller and include items that do apply to any willing seller to arrive at the fair market value of the practice. In addition, the weighted average cost of capital (WACC), which becomes the discount rate in the valuation model, is critical to the measure of value of the practice. Small medical practices are assumed to have few hard assets and little long-term debt, and the WACC is calculated on the basis of those assumptions. When a small practice has considerable debt, however, this calculated WACC may be inappropriate for valuing the practice. In every case, evidence that shows that a transaction has been negotiated "at arm's length" should stave off any regulatory challenge.
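The WACC the article centers on follows the standard textbook formula, weighting the cost of equity and the after-tax cost of debt by their shares of total capital. The numbers below are hypothetical, chosen only to show the arithmetic.

```python
# Weighted average cost of capital, the discount rate in the valuation model:
#   WACC = (E/V) * Re + (D/V) * Rd * (1 - tax)
# All figures below are hypothetical illustrations.
E, D = 800_000.0, 200_000.0   # market values of equity and debt
Re, Rd = 0.18, 0.08           # required returns on equity and on debt
tax = 0.30                    # marginal tax rate (interest is deductible)

V = E + D
wacc = (E / V) * Re + (D / V) * Rd * (1 - tax)
```

A practice carrying substantial debt shifts weight toward the cheaper, tax-shielded debt term, lowering the WACC and raising the computed value, which is the situation the article flags as potentially inappropriate for a practice assumed to have little long-term debt.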
Quiet High Speed Fan (QHSF) Flutter Calculations Using the TURBO Code
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Min, James B.; Mehmed, Oral
2006-01-01
A scale model of the NASA/Honeywell Engines Quiet High Speed Fan (QHSF) encountered flutter during wind tunnel testing. This report documents aeroelastic calculations performed for the QHSF scale model using the blade vibration capability of the TURBO code. Calculations at design speed were used to quantify the effect of numerical parameters on the aerodynamic damping predictions. This numerical study allowed the selection of appropriate values of these parameters, and also allowed an assessment of the variability in the calculated aerodynamic damping. Calculations were also done at 90 percent of design speed. The predicted trends in aerodynamic damping corresponded to those observed during testing.
SWB-A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge
Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.
2010-01-01
A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
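The daily bucket accounting at the heart of a Thornthwaite-Mather-style balance can be sketched in a few lines: wet or dry the root zone with precipitation and evapotranspiration, and count as recharge whatever the soil cannot hold once it reaches capacity. This is a minimal sketch; the actual SWB code also handles interception, runoff routing, and snowmelt, and runs the balance on a grid.

```python
def daily_step(soil, precip, et, capacity):
    """One day of a minimal Thornthwaite-Mather-style soil-water balance.
    Returns (new_soil_moisture, recharge); all quantities in mm."""
    soil = max(soil + precip - et, 0.0)   # wet or dry the root zone
    recharge = max(soil - capacity, 0.0)  # surplus above capacity percolates
    return min(soil, capacity), recharge

soil, total_recharge = 50.0, 0.0
days = [(20.0, 3.0), (0.0, 4.0), (35.0, 2.0), (5.0, 5.0)]  # (precip, ET) mm/day
for precip, et in days:
    soil, r = daily_step(soil, precip, et, capacity=80.0)
    total_recharge += r
```

Running this per grid cell, per day, and summing over months or years gives exactly the kind of gridded recharge estimates the code exports to a groundwater-flow model.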
Development of an inpatient operational pharmacy productivity model.
Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M
2015-02-01
An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
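The core arithmetic of a weighted-verification measure is simple: multiply each class's order count by its complexity weight, sum, and divide by worked hours. The classes, weights, and counts below are hypothetical, invented to show the calculation, not the model's actual time standards.

```python
# Hypothetical medication classes with complexity weights (time standards)
# and counts of verified orders in a shift; not the model's actual values.
weights = {"oral_solid": 1.0, "iv_compounded": 3.5, "chemotherapy": 6.0}
verified = {"oral_solid": 240, "iv_compounded": 40, "chemotherapy": 5}

worked_hours = 16.0   # pharmacist hours for the shift

weighted_verifications = sum(verified[c] * weights[c] for c in verified)
productivity = weighted_verifications / worked_hours   # WV per worked hour
```

Tracking this ratio separately for pharmacists and technicians is what lets the model guide staffing by job function, as described above.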
Ozone Depletion Potential of CH3Br
NASA Technical Reports Server (NTRS)
Sander, Stanley P.; Ko, Malcolm K. W.; Sze, Nien Dak; Scott, Courtney; Rodriquez, Jose M.; Weisenstein, Debra K.
1998-01-01
The ozone depletion potential (ODP) of methyl bromide (CH3Br) can be determined by combining the model-calculated bromine efficiency factor (BEF) for CH3Br and its atmospheric lifetime. This paper examines how changes in several key kinetic data affect BEF. The key reactions highlighted in this study include the reaction of BrO + HO2, the absorption cross section of HOBr, the absorption cross section and the photolysis products of BrONO2, and the heterogeneous conversion of BrONO2 to HOBr and HNO3 on aerosol particles. By combining the calculated BEF with the latest estimate of 0.7 year for the atmospheric lifetime of CH3Br, the likely value of ODP for CH3Br is 0.39. The model-calculated concentration of HBr (approximately 0.3 pptv) in the lower stratosphere is substantially smaller than the reported measured value of about 1 pptv. Recent publications suggested models can reproduce the measured value if one assumes a yield for HBr from the reaction of BrO + OH or from the reaction of BrO + HO2. Although the DeMore et al. evaluation concluded any substantial yield of HBr from BrO + HO2 is unlikely, for completeness, we calculate the effects of these assumed yields on BEF for CH3Br. Our calculations show that the effects are minimal: practically no impact for an assumed 1.3% yield of HBr from BrO + OH and 10% smaller for an assumed 0.6% yield from BrO + HO2.
Ozone Depletion Potential of CH3Br. Appendix H
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Sze, Nien Dak; Scott, Courtney; Rodriguez, Jose M.; Weisenstein, Debra K.; Sander, Stanley P.
1998-01-01
The ozone depletion potential (ODP) of methyl bromide (CH3Br) can be determined by combining the model-calculated bromine efficiency factor (BEF) for CH3Br and its atmospheric lifetime. This paper examines how changes in several key kinetic data affect BEF. The key reactions highlighted in this study include the reaction of BrO + HO2, the absorption cross section of HOBr, the absorption cross section and the photolysis products of BrONO2, and the heterogeneous conversion of BrONO2 to HOBr and HNO3 on aerosol particles. By combining the calculated BEF with the latest estimate of 0.7 year for the atmospheric lifetime of CH3Br, the likely value of ODP for CH3Br is 0.39. The model-calculated concentration of HBr (approx. 0.3 pptv) in the lower stratosphere is substantially smaller than the reported measured value of about 1 pptv. Recent publications suggested models can reproduce the measured value if one assumes a yield for HBr from the reaction of BrO + OH or from the reaction of BrO + HO2. Although the evaluation concluded any substantial yield of HBr from BrO + HO2 is unlikely, for completeness, we calculate the effects of these assumed yields on BEF for CH3Br. Our calculations show that the effects are minimal: practically no impact for an assumed 1.3% yield of HBr from BrO + OH and 10% smaller for an assumed 0.6% yield from BrO + HO2.
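The "BEF times lifetime" combination described in both records follows the usual semi-empirical ODP form relative to CFC-11: scale the efficiency factor by the lifetime ratio, the molar mass ratio, and the halogen count. The abstracts give only the lifetime (0.7 yr) and the result (0.39), so the BEF and the CFC-11 lifetime below are illustrative assumptions chosen to be consistent with those numbers.

```python
# Semi-empirical ODP relative to CFC-11:
#   ODP = BEF * (tau_gas / tau_CFC11) * (M_CFC11 / M_gas) * (n_halogen / 3)
# The BEF and tau_CFC11 values are assumptions for illustration; the
# abstracts report only the lifetime (0.7 yr) and the resulting ODP (0.39).
bef = 58.0                          # bromine efficiency factor (assumed)
tau_ch3br = 0.7                     # CH3Br lifetime, years (from the paper)
tau_cfc11 = 50.0                    # CFC-11 lifetime, years (assumed)
m_cfc11, m_ch3br = 137.37, 94.94    # molar masses, g/mol
n_br = 1                            # Br atoms per CH3Br (CFC-11 has 3 Cl)

odp = bef * (tau_ch3br / tau_cfc11) * (m_cfc11 / m_ch3br) * (n_br / 3)
```

With these assumed inputs the formula lands near the reported ODP of 0.39, which illustrates why a short lifetime can offset even a large per-atom bromine efficiency.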
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
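The "statistics with quantified error" theme can be illustrated with the simplest member of the sampling family: a Monte Carlo estimate of an output statistic together with a CLT-based error bound. The output function below is a toy stand-in for a CFD output functional, not anything from the presentation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "model output": a nonlinear function of one uncertain input parameter,
# standing in for a CFD output functional of a random field.
def output(xi):
    return np.sin(xi) ** 2

# Sampling method: estimate the mean output over xi ~ N(0, 1).
n = 100_000
samples = output(rng.normal(0.0, 1.0, n))
mean = float(samples.mean())
# CLT-based 99% error bound on the statistics integral.
bound = float(2.576 * samples.std(ddof=1) / np.sqrt(n))
```

A production package replaces the plain sampler with sparse or dense tensorization quadratures and must additionally account for the numerical error inside each realization, which plain Monte Carlo cannot see.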
Element distributions after binary fission of ⁴⁴Ti
DOE Office of Scientific and Technical Information (OSTI.GOV)
Płaneta, R.; Belery, P.; Brzychczyk, J.
1986-08-01
Inclusive and coincidence measurements have been performed to study symmetric fragmentation of ⁴⁴Ti binary decay from the ³²S + ¹²C reaction at 280 MeV incident energy. Element distributions after binary decay were measured. Angular distributions and fragment correlations are presented. The total c.m. kinetic energy for the symmetric products is extracted from our data and from Monte Carlo model calculations including Q-value fluctuations. This result was compared to liquid-drop model calculations and standard fission systematics. Comparison between the experimental value of the total kinetic energy and the rotating liquid-drop model predictions locates the angular momentum window for symmetric splitting of ⁴⁴Ti between 33ħ and 38ħ. It also showed that 50% of the corresponding rotational energy contributes to the total kinetic energy values. The dominant reaction mechanism was found to be symmetric splitting followed by evaporation.
Jockusch, Rebecca A.; Williams, Evan R.
2005-01-01
The dissociation kinetics of protonated n-acetyl-L-alanine methyl ester dimer (AcAlaMEd), imidazole dimer, and their cross dimer were measured using blackbody infrared radiative dissociation (BIRD). Master equation modeling of these data was used to extract threshold dissociation energies (Eo) for the dimers. Values of 1.18 ± 0.06, 1.11 ± 0.04, and 1.12 ± 0.08 eV were obtained for AcAlaMEd, imidazole dimer, and the cross dimer, respectively. Assuming that the reverse activation barrier for dissociation of the ion–molecule complex is negligible, the value of Eo can be compared to the dissociation enthalpy (ΔHd°) from HPMS data. The Eo values obtained for the imidazole dimer and the cross dimer are in agreement with HPMS values; the value for AcAlaMEd is somewhat lower. Radiative rate constants used in the master equation modeling were determined using transition dipole moments calculated at the semiempirical (AM1) level for all dimers and compared to ab initio (RHF/3-21G*) calculations where possible. To reproduce the experimentally measured dissociation rates using master equation modeling, it was necessary to multiply semiempirical transition dipole moments by a factor between 2 and 3. Values for transition dipole moments from the ab initio calculations could be used for two of the dimers but appear to be too low for AcAlaMEd. These results demonstrate that BIRD, in combination with master equation modeling, can be used to determine threshold dissociation energies for intermediate size ions that are in neither the truncated Boltzmann nor the rapid energy exchange limit. PMID:16604163
Ballance, Simon; Sahlstrøm, Stefan; Lea, Per; Nagy, Nina E; Andersen, Petter V; Dessev, Tzvetelin; Hull, Sarah; Vardakou, Maria; Faulks, Richard
2013-03-01
To identify the key parameters involved in cereal starch digestion and the associated glycaemic response, a dynamic gastro-duodenal digestion model was utilised. Potential plasma glucose loading curves for each meal were calculated and fitted to an exponential function. The area under the curve (AUC) from 0 to 120 min and the total digestible starch were used to calculate an in vitro glycaemic index (GI) value normalised against white bread. Microscopy was additionally used to examine cereal samples collected in vitro at different stages of gastric and duodenal digestion. Where in vivo GI data were available (4 out of 6 cereal meals), no significant difference was observed between these values and the corresponding calculated in vitro GI value. It is possible to simulate an in vivo glycaemic response for cereals when the gastric emptying rate (duodenal loading) and the kinetics of digestible starch hydrolysis in the duodenum are known.
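The GI normalisation step is a ratio of trapezoidal areas under the glucose-loading curves, scaled to 100 for the white bread reference. The curves below are invented for illustration, and the paper's additional normalisation by total digestible starch is omitted here for brevity.

```python
import numpy as np

def auc_trapezoid(t, y):
    """Trapezoidal area under a plasma glucose loading curve."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

t = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 90.0, 120.0])   # minutes
# Hypothetical glucose-loading curves above baseline (arbitrary units):
bread = np.array([0.0, 1.8, 3.0, 3.4, 3.0, 2.0, 1.2])      # white bread reference
cereal = np.array([0.0, 1.2, 2.2, 2.6, 2.4, 1.7, 1.0])     # test cereal meal

gi = 100.0 * auc_trapezoid(t, cereal) / auc_trapezoid(t, bread)
```

A meal whose loading curve sits everywhere below the bread reference, as in this toy example, necessarily gets a GI below 100.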
School system evaluation by value added analysis under endogeneity.
Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien
2014-01-01
Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
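The instrumental-variable remedy can be shown on a toy simulation (an illustration of the general approach, not the paper's estimator): when the regressor is correlated with the unobserved effect, least squares is biased, while an instrument that moves the regressor but is unrelated to the unobserved effect recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
beta = 1.0                       # true structural effect to recover

# Endogeneity: the regressor x (think prior score attainment) is correlated
# with the unobserved effect u that also drives the outcome y.
u = rng.normal(0.0, 1.0, n)      # unobserved school/student effect
z = rng.normal(0.0, 1.0, n)      # instrument: moves x, unrelated to u
x = 0.8 * z + 0.6 * u + rng.normal(0.0, 1.0, n)
y = beta * x + u + rng.normal(0.0, 1.0, n)

ols = float((x @ y) / (x @ x))   # biased: picks up cov(x, u), approx. 1.3 here
iv = float((z @ y) / (z @ x))    # instrumental variable estimate, approx. 1.0
```

With this design the OLS slope converges to beta + cov(x, u)/var(x) = 1.3, while the IV ratio cancels the u-contamination, which is the mechanism the paper exploits for value added.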
Russo, Giacomo; Grumetto, Lucia; Barbato, Francesco; Vistoli, Giulio; Pedretti, Alessandro
2017-03-01
The present study proposes a method for the in silico calculation of phospholipophilicity. Phospholipophilicity is intended as the measure of analyte affinity for phospholipids; it is currently assessed by HPLC measurement of analyte retention on phosphatidylcholine-like stationary phases (IAM, Immobilized Artificial Membrane), yielding log kw(IAM) values. Due to the amphipathic and electrically charged nature of phospholipids, retention on these stationary phases results from complex mechanisms, being affected not only by lipophilicity (as measured by n-octanol/aqueous phase partition coefficients, log P) but also by the occurrence of polar and/or electrostatic intermolecular interaction forces. Differently from log P, to date no method has been proposed for the in silico calculation of log kw(IAM). The study is aimed both at shedding new light on the retention mechanism on IAM stationary phases and at offering a high-throughput method to obtain such values. A wide set of physico-chemical and topological properties were taken into account, yielding a robust final model including four in silico calculated parameters (lipophilicity, hydrophilic/lipophilic balance, molecular size, and molecular flexibility). The model presented here was based on the analysis of 205 experimentally determined values, taken from the literature and measured by a single research group to minimize interlaboratory variability; the model is able to predict phospholipophilicity values on both IAM stationary phases marketed to date, i.e. IAM.PC.MG and IAM.PC.DD2, with a fairly good degree of accuracy (r² = 0.85). The present work allowed the development of a free on-line service for calculating log kw(IAM) values of any molecule included in the PubChem database, freely available at http://nova.disfarm.unimi.it/logkwiam.htm. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.; Kim, H.
1995-03-01
Sulfolane is widely used as a solvent for the extraction of aromatic hydrocarbons. Ternary phase equilibrium data are essential for the proper understanding of the solvent extraction process. Liquid-liquid equilibrium data for the systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene were determined at 298.15, 308.15, and 318.15 K. Tie line data were satisfactorily correlated by the Othmer and Tobias method. The experimental data were compared with the values calculated by the UNIQUAC and NRTL models. Good quantitative agreement was obtained with these models. However, the calculated values based on the NRTL model were found to be better than those based on the UNIQUAC model.
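For reference, the binary form of the NRTL activity-coefficient model used in such correlations can be sketched as below. The interaction parameters here are illustrative placeholders, not values fitted to the sulfolane systems, and ternary LLE work uses the multicomponent generalization.

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary mixture, NRTL model."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Illustrative parameters (not fitted to sulfolane + octane + aromatic data):
g1, g2 = nrtl_gamma(0.3, tau12=1.2, tau21=0.8)
```

Fitting tau12 and tau21 (and alpha) to tie-line data, then solving the isoactivity conditions between the two liquid phases, is how calculated LLE compositions like those compared above are produced.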
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
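"Effective radiation" here means spectral irradiance weighted by a biological action spectrum and integrated over wavelength. The sketch below uses the piecewise CIE-style erythema weighting with an invented toy spectrum; the spectrum is an assumption chosen only to make the integral concrete.

```python
import numpy as np

wl = np.arange(290.0, 401.0, 5.0)                 # wavelength grid, nm
# Toy ground-level spectral irradiance rising into the UVA (illustrative):
irradiance = 1e-3 * np.exp((wl - 290.0) / 20.0)   # W m-2 nm-1

def erythemal_weight(w):
    """Piecewise CIE-style erythema action spectrum (reference weighting)."""
    if w <= 298.0:
        return 1.0
    if w <= 328.0:
        return 10.0 ** (0.094 * (298.0 - w))
    return 10.0 ** (0.015 * (140.0 - w))

weights = np.array([erythemal_weight(w) for w in wl])
# Biologically effective irradiance: integrate spectrum x action spectrum.
effective = float(np.sum((irradiance * weights)[:-1] * np.diff(wl)))
unweighted = float(np.sum(irradiance[:-1] * np.diff(wl)))
```

Because the weighting concentrates in the UVB while most energy arrives in the UVA, the effective irradiance is far smaller than the unweighted total, which is why ozone changes that mainly alter the UVB tail matter so much biologically.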
NASA Astrophysics Data System (ADS)
Dimbylow, Peter
2005-09-01
Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.
Improving deep convolutional neural networks with mixed maxout units
Liu, Fu-xian; Li, Long-yue
2017-01-01
Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
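The following is a sketch of how a mixout-style unit might combine the expected and maximal responses, assuming a softmax ("exponential probability") weighting over the feature-mapping subspace and a Bernoulli switch, as the abstract describes; the paper's exact formulation may differ.

```python
import numpy as np

def mixout(features, p=0.5, rng=None):
    """Mixout pooling over k parallel feature maps (illustrative sketch).
    features: array of shape (k, ...) -- k convolutional transforms of
    the same input.  Softmax probabilities weight an expected value, and
    a Bernoulli(p) draw balances it against the plain maxout response."""
    rng = np.random.default_rng() if rng is None else rng
    # softmax ("exponential") probabilities along the subspace axis
    e = np.exp(features - features.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    expected = (probs * features).sum(axis=0)   # expectation under softmax
    maximum = features.max(axis=0)              # ordinary maxout output
    mask = rng.random(expected.shape) < p       # Bernoulli(p) switch
    return np.where(mask, maximum, expected)
```

With p = 1 the unit reduces to ordinary maxout; with p = 0 it returns the softmax-weighted expectation, which retains information from non-maximal features.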
NASA Astrophysics Data System (ADS)
Sugawara, M.
2018-05-01
An empirical model with independent variable moments of inertia for triaxial nuclei is devised and applied to 76Ge and 192Os. Three intrinsic moments of inertia, J1, J2, and J3, are varied independently as a particular function of spin I within a revised version of the triaxial rotor model so as to reproduce the energy levels of the ground-state, γ, and (in the case of 192Os) Kπ = 4+ bands. The staggering in the γ band is well reproduced in both phase and amplitude. Effective γ values are extracted as a function of spin I from the ratios of the three moments of inertia. The eigenfunctions and the effective γ values are subsequently used to calculate the ratios of B(E2) values associated with these bands. Good agreement between the model calculation and the experimental data is obtained for both 76Ge and 192Os.
NASA Astrophysics Data System (ADS)
Bazavov, A.; Bhattacharya, Tanmoy; DeTar, C. E.; Ding, H.-T.; Gottlieb, Steven; Gupta, Rajan; Hegde, P.; Heller, Urs M.; Karsch, F.; Laermann, E.; Levkova, L.; Mukherjee, Swagato; Petreczky, P.; Schmidt, Christian; Soltz, R. A.; Soeldner, W.; Sugar, R.; Vranas, Pavlos M.
2012-08-01
We calculate the quadratic fluctuations of net baryon number, electric charge and strangeness, as well as correlations among these conserved charges, in (2+1)-flavor lattice QCD at zero chemical potential. Results are obtained using calculations with tree-level improved gauge and the highly improved staggered quark actions with almost physical light and strange quark masses at three different values of the lattice cutoff. Our choice of parameters corresponds to a value of 160 MeV for the lightest pseudoscalar Goldstone mass and a physical value of the kaon mass. The three diagonal charge susceptibilities and the correlations among conserved charges have been extrapolated to the continuum limit in the temperature interval 150 MeV ≤ T ≤ 250 MeV. We compare our results with hadron resonance gas (HRG) model calculations and find agreement with HRG model results only for temperatures T ≲ 150 MeV. We observe significant deviations in the temperature range 160 MeV ≲ T ≲ 170 MeV and qualitative differences in the behavior of the three conserved charge sectors. At T ≃ 160 MeV, quadratic net baryon number fluctuations in QCD agree with HRG model calculations, while net electric charge fluctuations in QCD are about 10% smaller and net strangeness fluctuations are about 20% larger. These findings are relevant to the discussion of freeze-out conditions in relativistic heavy ion collisions.
Measurement of multiple scattering of 13 and 20 MeV electrons by thin foils
Ross, C. K.; McEwen, M. R.; McDonald, A. F.; Cojocaru, C. D.; Faddegon, B. A.
2008-01-01
To model the transport of electrons through material requires knowledge of how the electrons lose energy and scatter. Theoretical models are used to describe electron energy loss and scatter and these models are supported by a limited amount of measured data. The purpose of this work was to obtain additional data that can be used to test models of electron scattering. Measurements were carried out using 13 and 20 MeV pencil beams of electrons produced by the National Research Council of Canada research accelerator. The electron fluence was measured at several angular positions from 0° to 9° for scattering foils of different thicknesses and with atomic numbers ranging from 4 to 79. The angle, θ1∕e, at which the fluence has decreased to 1∕e of its value on the central axis was used to characterize the distributions. Measured values of θ1∕e ranged from 1.5° to 8° with a typical uncertainty of about 1%. Distributions calculated using the EGSnrc Monte Carlo code were compared to the measured distributions. In general, the calculated distributions are narrower than the measured ones. Typically, the difference between the measured and calculated values of θ1∕e is about 1.5%, with the maximum difference being 4%. The measured and calculated distributions are related through a simple scaling of the angle, indicating that they have the same shape. No significant trends with atomic number were observed. PMID:18841865
Sato, Y; Wadamoto, M; Tsuga, K; Teixeira, E R
1999-04-01
Improving the validity of finite element analysis in implant biomechanics requires element downsizing; however, excessive downsizing demands more computer memory and calculation time. To investigate the effectiveness of element downsizing in the construction of a three-dimensional finite element bone trabeculae model, models with different element sizes (600, 300, 150 and 75 microm) were constructed and the stress induced by vertical 10 N loading was analysed. The difference in von Mises stress values between the models with 600 and 300 microm element sizes was larger than that between 300 and 150 microm. On the other hand, no clear difference in stress values was detected among the models with 300, 150 and 75 microm element sizes. Downsizing of elements from 600 to 300 microm is therefore suggested to be effective in the construction of a three-dimensional finite element bone trabeculae model, with possible savings of computer memory and calculation time in the laboratory.
NASA Astrophysics Data System (ADS)
Mahant, A. K.; Rao, P. S.; Misra, S. C.
1994-07-01
In the calculational model developed by Warren and Shah for the computation of the gamma sensitivity (Sγ), it has been observed that the computed Sγ value is quite sensitive to the space charge distribution function assumed for the insulator region and to the energy of the gamma photons. The Sγ of SPNDs with Pt, Co and V emitters (manufactured by Thermocoax, France) has been measured at the 60Co photon energy, and a good correlation between the measured and computed values has been obtained using a composite space charge density function (CSCD), the details of which are presented in this paper. The arguments are extended to evaluating the Sγ values of several SPNDs for which Warren and Shah reported measured values for a prompt fission gamma spectrum obtained in a swimming pool reactor. These results are also discussed.
Betavoltaic battery performance: Comparison of modeling and experiment.
Svintsov, A A; Krasnov, A A; Polikarpov, M A; Polyakov, A Y; Yakimov, E B
2018-07-01
A verification of the Monte Carlo simulation software for the prediction of the short circuit current value is carried out using a Ni-63 source with an activity of 2.7 mCi/cm² and converters based on Si p-i-n diodes and SiC and GaN Schottky diodes. A comparison of experimentally measured and calculated short circuit current values confirms the validity of the proposed modeling method, with the difference between the measured and calculated short circuit current values not exceeding 25% and the error in the predicted output power values being below 30%. The effects of the protective layer formed on the Ni-63 radioactive film and of the passivating film on the semiconductor converters on the energy deposited inside the converters are estimated. The maximum attainable betavoltaic cell parameters are estimated. Copyright © 2018 Elsevier Ltd. All rights reserved.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
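The partition/train/select/estimate flow described above can be sketched as follows, with a per-mode mean vector standing in for whatever estimator the patented method actually trains, and a simple residual threshold standing in for its surveillance logic; both are assumptions for illustration.

```python
import numpy as np

def train_submodels(X, modes):
    """Partition training data by operating mode and fit one simple
    submodel per mode (here just the per-mode mean signal vector)."""
    return {m: X[modes == m].mean(axis=0) for m in np.unique(modes)}

def surveil(submodels, x_observed, mode, threshold=3.0):
    """Select the submodel for the current operating mode, compute the
    estimated signal values, and flag an alarm if any observed signal
    deviates from its estimate by more than the threshold."""
    est = submodels[mode]
    residual = x_observed - est
    alarm = bool(np.any(np.abs(residual) > threshold))
    return est, residual, alarm
```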
Conservative Tests under Satisficing Models of Publication Bias.
McCrary, Justin; Christensen, Garret; Fanelli, Daniele
2016-01-01
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%: rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
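The adjusted decision rule, and the share of published t-statistics falling between the two cutoffs, can be illustrated minimally (cutoffs rounded to 2 and 3 as in the abstract; the paper derives more precise values):

```python
def rejects(t, adjusted=False):
    """Reject H0 at the conventional cutoff (|t| > 2) or at the
    publication-bias-adjusted cutoff (|t| > 3)."""
    cutoff = 3.0 if adjusted else 2.0
    return abs(t) > cutoff

def fraction_between_cutoffs(ts):
    """Share of t-statistics significant under the standard cutoff
    but not under the adjusted one."""
    hits = [t for t in ts if 2.0 < abs(t) <= 3.0]
    return len(hits) / len(ts)
```

An estimate such as t = 2.5 is significant under the conventional rule but not under the adjusted one; the paper reports that roughly 30% of published t-statistics fall in this band.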
Cai, Zhongli; Pignol, Jean-Philippe; Chan, Conrad; Reilly, Raymond M
2010-03-01
Our objective was to compare Monte Carlo N-particle (MCNP) self- and cross-doses from (111)In to the nucleus of breast cancer cells with doses calculated by reported analytic methods (Goddu et al. and Faraggi et al.). A further objective was to determine whether the MCNP-predicted surviving fraction (SF) of breast cancer cells exposed in vitro to (111)In-labeled diethylenetriaminepentaacetic acid human epidermal growth factor ((111)In-DTPA-hEGF) could accurately predict the experimentally determined values. MCNP was used to simulate the transport of electrons emitted by (111)In from the cell surface, cytoplasm, or nucleus. The doses to the nucleus per decay (S values) were calculated for single cells, closely packed monolayer cells, or cell clusters. The cell and nucleus dimensions of 6 breast cancer cell lines were measured, and cell line-specific S values were calculated. For self-doses, MCNP S values of nucleus to nucleus agreed very well with those of Goddu et al. (ratio of S values using analytic methods vs. MCNP = 0.962-0.995) and Faraggi et al. (ratio = 1.011-1.024). MCNP S values of cytoplasm and cell surface to nucleus compared fairly well with the reported values (ratio = 0.662-1.534 for Goddu et al.; 0.944-1.129 for Faraggi et al.). For cross-doses, the S values to the nucleus were independent of (111)In subcellular distribution but increased with cluster size. S values for monolayer cells were significantly different from those of single cells and cell clusters. The MCNP-predicted SF for monolayer MDA-MB-468, MDA-MB-231, and MCF-7 cells agreed with the experimental data (relative errors of 3.1%, -1.0%, and 1.7%, respectively). The single-cell and cell cluster models were less accurate in predicting the SF. For MDA-MB-468 cells, the relative error was 8.1% using the single-cell model and -54% to -67% using the cell cluster model. Individual cell-line dimensions had large effects on S values and were needed to estimate doses and SF accurately.
MCNP simulation compared well with the reported analytic methods in the calculation of subcellular S values for single cells and cell clusters. Application of a monolayer model was most accurate in predicting the SF of breast cancer cells exposed in vitro to (111)In-DTPA-hEGF.
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Uncertainty analyses of the calibrated parameter values of a water quality model
NASA Astrophysics Data System (ADS)
Rode, M.; Suhr, U.; Lindenschmidt, K.-E.
2003-04-01
For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of the input variables. The investigation was based on four extensive flow-time-related longitudinal surveys of the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related surveys, very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in the model calculation can occur. These uncertainties can be decreased with an enlarged calibration database: more reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of model precision for any specific water quality situation. Moreover, the investigations show that highly variable water quality variables like algal biomass always allow a smaller forecast accuracy than variables with lower coefficients of variation, such as nitrate.
Investigation of the 3-D actinic flux field in mountainous terrain
Wagner, J.E.; Angelini, F.; Blumthaler, M.; Fitzka, M.; Gobbi, G.P.; Kift, R.; Kreuter, A.; Rieder, H.E.; Simic, S.; Webb, A.; Weihs, P.
2011-01-01
During three field campaigns, spectral actinic flux was measured from 290 to 500 nm under clear sky conditions in Alpine terrain; the associated O3 and NO2 photolysis frequencies were calculated, and the measurement products were then compared with 1-D and 3-D model calculations. To do this, a 3-D radiative transfer model was adapted for actinic flux calculations in mountainous terrain, and maps of the actinic flux field at the surface, calculated with the 3-D radiative transfer model, are given. The differences between the 3-D and 1-D model results for selected days during the campaigns are shown, together with the ratios of the modeled actinic flux values to the measurements. In many cases the 1-D model overestimates actinic flux by more than the measurement uncertainty of 10%. The 3-D model generally gives significantly lower values, and can underestimate the actinic flux by up to 30%. This case study attempts to quantify the impact of snow cover in combination with topography on spectral actinic flux. The impact of snow cover on the actinic flux was ~25% in narrow snow-covered valleys, but for snow-free areas there were no significant changes due to snow cover in the surrounding area, and the effect of snow cover at distances over 5 km from the point of interest was below 5%. Overall, the 3-D model can calculate actinic flux to the same accuracy as the 1-D model for single points, but gives a much more realistic view of the surface actinic flux field in mountains, as topography and obstruction of the horizon are taken into account. PMID:26412915
Modeling Future Fire danger over North America in a Changing Climate
NASA Astrophysics Data System (ADS)
Jain, P.; Paimazumder, D.; Done, J.; Flannigan, M.
2016-12-01
Fire danger ratings are used to determine wildfire potential due to weather and climate factors. The Fire Weather Index (FWI), part of the Canadian Forest Fire Danger Rating System (CFFDRS), incorporates temperature, relative humidity, wind speed and precipitation to give a daily fire danger rating that is used by wildfire management agencies in an operational context. Studies using GCM output have shown that future wildfire danger will increase in a warming climate. However, these studies are somewhat limited by the coarse spatial resolution (typically 100-400 km) and temporal resolution (typically 6-hourly to monthly) of the model output. Future wildfire potential over North America based on the FWI is calculated using output from the Weather Research and Forecasting (WRF) model, which is used to downscale future climate scenarios from the bias-corrected Community Climate System Model (CCSM) under the RCP8.5 scenario at a spatial resolution of 36 km. We consider five eleven-year time slices: 1990-2000, 2020-2030, 2030-2040, 2050-2060 and 2080-2090. The dynamically downscaled simulation improves the determination of future extreme weather by improving both spatial and temporal resolution over most GCMs. To characterize extreme fire weather we calculate the annual number of spread days (days for which FWI > 19) and the annual 99th percentile of FWI. Additionally, an extreme value analysis based on the peaks-over-threshold method allows us to calculate return values for extreme FWI.
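The two extreme-fire-weather indicators named above are straightforward to compute from a year of daily FWI values; a minimal sketch (spread-day threshold and percentile as stated in the abstract):

```python
import numpy as np

def fire_danger_stats(daily_fwi, spread_threshold=19.0):
    """Annual extreme-fire-weather indicators: the number of spread
    days (FWI > 19) and the 99th percentile of daily FWI.
    daily_fwi: 1-D array of one year's daily FWI values."""
    daily_fwi = np.asarray(daily_fwi, dtype=float)
    spread_days = int((daily_fwi > spread_threshold).sum())
    p99 = float(np.percentile(daily_fwi, 99))
    return spread_days, p99
```

In the study these statistics would be computed per grid cell and per year of each eleven-year time slice, then compared across slices.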
Recalibration of the Shear Stress Transport Model to Improve Calculation of Shock Separated Flows
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
2013-01-01
The Menter Shear Stress Transport (SST) k-ω turbulence model is one of the most widely used two-equation Reynolds-averaged Navier-Stokes turbulence models for aerodynamic analyses. The model extends Menter's baseline (BSL) model to include a limiter that prevents the calculated turbulent shear stress from exceeding a prescribed fraction of the turbulent kinetic energy via a proportionality constant, a1, set to 0.31. Compared to other turbulence models, the SST model yields superior predictions of mild adverse pressure gradient flows, including those with small separations. In shock-boundary layer interaction regions, the SST model produces separations that are too large, while the BSL model is at the other extreme, predicting separations that are too small. In this paper, changing a1 to a value near 0.355 is shown to significantly improve predictions of shock separated flows. Several cases are examined computationally, and experimental data are also considered to justify raising the value of a1 used for shock separated flows.
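The limiter described above is the standard SST eddy-viscosity relation ν_t = a1·k / max(a1·ω, S·F2); the sketch below shows how raising a1 from 0.31 toward 0.355 weakens the limiter in high-strain (shock-separated) regions while leaving low-strain regions, where ν_t = k/ω, unchanged.

```python
def sst_eddy_viscosity(k, omega, strain_rate, F2, a1=0.31):
    """Kinematic eddy viscosity from the SST shear-stress limiter:
    nu_t = a1*k / max(a1*omega, S*F2).  k: turbulent kinetic energy,
    omega: specific dissipation rate, strain_rate: strain-rate
    magnitude S, F2: Menter's second blending function."""
    return a1 * k / max(a1 * omega, strain_rate * F2)
```

When a1·ω dominates, the expression reduces to the usual ν_t = k/ω and a1 cancels; only where S·F2 exceeds a1·ω does the choice of a1 alter the predicted turbulent shear stress.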
NASA Astrophysics Data System (ADS)
Ismail, M.; Adel, A.
2018-04-01
The α-decay half-lives of the recently synthesized superheavy nuclei (SHN) are investigated by employing the density dependent cluster model. A realistic nucleon-nucleon (NN) interaction with a finite-range exchange part is used to calculate the microscopic α-nucleus potential in the well-established double-folding model. The calculated potential is then implemented to find both the assault frequency and the penetration probability of the α particle by means of the Wentzel-Kramers-Brillouin (WKB) approximation in combination with the Bohr-Sommerfeld quantization condition. The calculated values of α-decay half-lives of the recently synthesized Og isotopes and their decay products are in good agreement with the experimental data. Moreover, the calculated α-decay half-lives have been compared with values evaluated using other theoretical models, and our theoretical values were found to match well with their counterparts. The competition between α decay and spontaneous fission is investigated, and predictions of possible decay modes for the unknown nuclei 290-298Og (Z = 118) are presented. We studied the behavior of the α-decay half-lives of Og isotopes and their decay products as a function of the mass number of the parent nuclei. We found that the behavior of the curves is governed by proton and neutron magic numbers found in previous studies. The proton numbers Z = 114, 116, 108, 106 and the neutron numbers N = 172, 164, 162, 158 show some magic character. We hope that the theoretical prediction of α-decay chains provides a new perspective to experimentalists.
Curve Number Application in Continuous Runoff Models: An Exercise in Futility?
NASA Astrophysics Data System (ADS)
Lamont, S. J.; Eli, R. N.
2006-12-01
The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. 
It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
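For reference, the NRCS curve-number relations underlying the study, plus a bisection back-calculation of CN from a storm depth and its direct runoff, mirror (in much simplified form) the numerical back-calculation described above:

```python
def cn_runoff(P, CN, ia_ratio=0.2):
    """NRCS curve-number direct runoff (inches): S = 1000/CN - 10,
    Ia = ia_ratio * S, and Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

def back_calculate_cn(P, Q, ia_ratio=0.2, lo=1.0, hi=100.0, tol=1e-8):
    """Back-calculate the CN that reproduces runoff Q for storm depth P
    by bisection; runoff increases monotonically with CN."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cn_runoff(P, mid, ia_ratio) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the study, Q came from HSPF output rather than from the CN equation itself, which is precisely why the back-calculated CN varied with soil type, storm depth, storm distribution, and the initial abstraction ratio.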
17 CFR 240.15c3-1a - Options (Appendix A to 17 CFR 240.15c3-1).
Code of Federal Regulations, 2012 CFR
2012-04-01
... terms theoretical gains and losses shall mean the gain and loss in the value of individual option series... paragraph (a)(1)(iii) of this section). Theoretical gains and losses shall be calculated using a theoretical... models. Any such model shall calculate theoretical gains and losses as described in paragraph (a)(1)(i)(A...
NASA Astrophysics Data System (ADS)
Zubov, N. O.; Kaban'kov, O. N.; Yagov, V. V.; Sukomel, L. A.
2017-12-01
The wide use of natural circulation loops operating at low reduced pressures creates a real need for reliable methods of predicting flow regimes and friction pressure drop for two-phase flows in this parameter region. Although water-air flows at close-to-atmospheric pressures are the most widely studied subject in the field of two-phase hydrodynamics, the problem of reliably calculating friction pressure drop can hardly be regarded as fully solved. The specific volumes of liquid differ very much from those of steam (gas) under such conditions, so even a small change in flow quality may cause the flow pattern to alter very significantly. Frequently made attempts to use some universal approach to calculating friction pressure drop over a wide range of steam quality values do not seem justified and yield predicted values that are poorly consistent with experimentally measured data. The article analyzes the existing methods used to calculate friction pressure drop for two-phase flows at low pressures by comparing their results with experimentally obtained data. The advisability of elaborating calculation procedures for determining the friction pressure drop and void fraction for two-phase flows taking their pattern (flow regime) into account is demonstrated. It is shown that, for flows characterized by low reduced pressures, satisfactory results are obtained from a homogeneous model for quasi-homogeneous flows, whereas satisfactory results are obtained from an annular flow model for flows characterized by high values of void fraction. Recommendations for shifting from one model to the other in engineering calculations are formulated and tested.
By using the modified annular flow model, it is possible to obtain reliable predictions not only for the pressure gradient but also for the liquid film thickness; consideration of droplet entrainment and deposition phenomena allows reasonable corrections to be introduced into the calculations. To the best of the authors' knowledge, this is the first time that the entrainment of droplets from the film surface has been taken into consideration in a dispersed-annular flow model.
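As a concrete illustration of the homogeneous model recommended above for quasi-homogeneous flows, the sketch below computes a frictional pressure gradient for a low-pressure two-phase flow. It is not the authors' procedure: the Blasius friction factor and the McAdams mixture-viscosity rule are assumptions chosen for the example.

```python
def homogeneous_friction_gradient(G, x, D, rho_l, rho_g, mu_l, mu_g):
    """Frictional pressure gradient (Pa/m) from the homogeneous model.

    G: mass flux (kg/m^2/s), x: flow quality, D: tube diameter (m),
    rho/mu: liquid and gas densities (kg/m^3) and viscosities (Pa*s).
    Mixture viscosity follows the McAdams form (an assumption here).
    """
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)   # homogeneous mixture density
    mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_l)      # McAdams mixture viscosity
    Re = G * D / mu_h                               # mixture Reynolds number
    f = 0.079 * Re ** -0.25                         # Blasius (Fanning) friction factor
    return 2.0 * f * G ** 2 / (D * rho_h)           # dp/dz = 2 f G^2 / (D rho_h)
```

Because the gas specific volume dominates at near-atmospheric pressure, the predicted gradient grows quickly with quality, which is exactly why the abstract cautions against using one correlation across the whole quality range.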
Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell
2011-01-01
Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison with predicted values. The predicted net heat input was 1.7 percent less than the measured value, and initial calculations of the measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. 
The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be nearly as effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management.
We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
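The state-dependent harvest strategies described above are computed with stochastic dynamic programming. The toy below shows the core value-iteration machinery on a single hypothetical model with deterministic logistic growth and a discounted-harvest objective; the paper's nine annual-cycle goose models, stochasticity, and population-size constraint are all far richer than this sketch.

```python
import numpy as np

def optimal_harvest_policy(K=100, r=0.2, gamma=0.98, n_states=101,
                           harvest_rates=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5),
                           iters=300):
    """Value iteration for a state-dependent harvest policy.

    States are population abundances 0..n_states-1; dynamics are a
    made-up deterministic logistic model, not the paper's goose models.
    Returns (policy, V): best harvest rate and value for each state.
    """
    V = np.zeros(n_states)
    policy = np.zeros(n_states)
    for _ in range(iters):
        V_new = np.zeros(n_states)
        for s in range(n_states):
            best_val, best_h = -np.inf, 0.0
            for h in harvest_rates:
                taken = h * s                            # harvest removed now
                n = s - taken
                n_next = n + r * n * (1.0 - n / K)       # logistic growth after harvest
                idx = int(min(max(round(n_next), 0), n_states - 1))
                val = taken + gamma * V[idx]             # reward = harvest taken
                if val > best_val:
                    best_val, best_h = val, h
            V_new[s], policy[s] = best_val, best_h
        if np.max(np.abs(V_new - V)) < 1e-9:
            V = V_new
            break
        V = V_new
    return policy, V
```

The resulting policy is state dependent in exactly the sense the abstract emphasizes: the optimal harvest rate is a function of current abundance, not a fixed quota.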
The polarization of continuum radiation in sunspots. I - Rayleigh and Thomson scattering
NASA Technical Reports Server (NTRS)
Finn, G. D.; Jefferies, J. T.
1974-01-01
Expressions are derived for the Stokes parameters of light scattered by a layer of free electrons and hydrogen atoms in a sunspot. A physically reasonable sunspot model was found so that the direction of the calculated linear polarization agrees reasonably with observations. The magnitude of the calculated values of the linear polarization agrees generally with values observed in the continuum at 5830 A. Circular polarization in the continuum also accompanies electron scattering in spot regions; however for commonly accepted values of the longitudinal magnetic field, the predicted circular polarization is much smaller than observed.
Surface tension and modeling of cellular intercalation during zebrafish gastrulation.
Calmelet, Colette; Sepich, Diane
2010-04-01
In this paper we discuss a model of zebrafish embryo notochord development based on the effect of surface tension of cells at the boundaries. We study the process of interaction of mesodermal cells at the boundaries due to adhesion and cortical tension, resulting in cellular intercalation. From in vivo experiments, we obtain cell outlines from time-lapse images of cell movements during zebrafish embryo development. Using the Cellular Potts model, we calculate the total surface energy of the system of cells at cell contacts at different time intervals. We analyze the variation of total energy depending on the nature of the cell contacts. We demonstrate that our model is viable by calculating the total surface energy for experimentally observed configurations of cells and showing that in our model these configurations correspond to a decrease in total energy values in both two and three dimensions.
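A minimal sketch of the surface-energy bookkeeping behind such a calculation: in a Cellular Potts-style representation on a square lattice, every pair of 4-connected neighbour sites belonging to different cells contributes one unit of contact energy J. This is a drastic simplification of the paper's model, which distinguishes contact types (cell-cell vs. cell-boundary) and includes cortical tension.

```python
import numpy as np

def total_boundary_energy(grid, J=1.0):
    """Total contact energy of a cell configuration on a square lattice.

    grid: 2D array of integer cell IDs; each mismatched 4-connected
    neighbour pair contributes J (a single uniform contact energy,
    an assumption of this sketch).
    """
    g = np.asarray(grid)
    horiz = np.sum(g[:, 1:] != g[:, :-1])   # interfaces between horizontal neighbours
    vert = np.sum(g[1:, :] != g[:-1, :])    # interfaces between vertical neighbours
    return J * (horiz + vert)
```

Comparing this total across successive time-lapse configurations is the kind of check the abstract describes: intercalation events should move the system toward lower total boundary energy.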
How much crosstalk can be allowed in a stereoscopic system at various grey levels?
NASA Astrophysics Data System (ADS)
Shestak, Sergey; Kim, Daesik; Kim, Yongie
2012-03-01
We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human visual sensitivity. Instead of the linear just-noticeable-difference (JND) model known as Weber's law, we applied Barten's nonlinear model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase in displayable image contrast with a reduction of the maximum displayable luminance.
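For contrast with the Barten model the paper uses, the simpler Weber-style visibility test can be sketched as below. The dark-adaptation term and the Weber fraction are purely illustrative constants, not values from the paper.

```python
def crosstalk_visible(L_bg, L_leak, weber_fraction=0.01, L_dark=0.1):
    """Rough visibility test for crosstalk leakage against a background.

    L_bg: background luminance (cd/m^2), L_leak: luminance added by
    crosstalk. Uses a modified-Weber threshold dL = k * (L_bg + L_dark);
    the constants k and L_dark here are assumptions for illustration.
    """
    threshold = weber_fraction * (L_bg + L_dark)
    return L_leak > threshold
```

Even this crude rule reproduces the qualitative finding that the tolerable crosstalk grows with background luminance, which is why grey-to-grey combinations must be assessed individually.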
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic moduli of individual body segments, and modal damping ratios. The elastic moduli of the ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From the comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Reasonable agreement between the theoretical response curve and the experimental response envelope for the average Indian male likewise affirms the technique used for constructing the vibratory model of a standing person. The present work thus develops an effective technique for constructing a subject-specific damped vibratory model from physical measurements.
NASA Astrophysics Data System (ADS)
Semenycheva, Alexandra V.; Chuvil'deev, Vladimir N.; Nokhrin, Aleksey V.
2018-05-01
The paper offers a model describing the process of grain boundary self-diffusion in metals with phase transitions in the solid state. The model is based on ideas and approaches from the theory of non-equilibrium grain boundaries. The range of application of the basic relations of this theory is shown to expand, as they can be used to calculate the parameters of grain boundary self-diffusion in the high-temperature and low-temperature phases of metals with a phase transition. The constructed model is used to calculate the grain boundary self-diffusion activation energy in titanium and zirconium, and an explanation is provided for its abnormally low values in the low-temperature phase. The calculated values of the grain boundary self-diffusion activation energy are in good agreement with experiment.
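Activation energies like those discussed above are conventionally extracted from an Arrhenius plot, D = D0 exp(-Q/RT), by linear regression of ln D against 1/T. The generic least-squares sketch below is standard practice, not the paper's non-equilibrium grain boundary model.

```python
import math

def arrhenius_activation_energy(temps_K, D_values):
    """Activation energy Q (J/mol) from an Arrhenius fit.

    Fits ln D = ln D0 - Q/(R*T) by ordinary least squares;
    temps_K in kelvin, D_values as diffusivities (any consistent unit).
    """
    R = 8.314  # gas constant, J/(mol*K)
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(D) for D in D_values]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # slope of ln D vs 1/T is -Q/R
```

An "abnormally low" Q in the low-temperature phase would show up directly as a shallow slope on such a plot.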
Strength Calculation of Inclined Sections of Reinforced Concrete Elements under Transverse Bending
NASA Astrophysics Data System (ADS)
Filatov, V. B.
2017-11-01
The authors propose a design model to determine the strength of the inclined sections of bent reinforced concrete elements without shear reinforcement under the action of a transverse force, taking into account the aggregate interlock forces in the inclined crack. Calculated dependences for the components of the forces acting in an inclined section are presented; they are obtained from the equilibrium conditions of the block above the inclined crack. A comparative analysis of the experimental values of the failure loads of the inclined section against the theoretical values obtained from the proposed dependences and from normative calculation methods is performed. It is shown that the proposed design model makes it possible to take into account the effect of the longitudinal reinforcement percentage and the element cross-section height on the inclined section strength without introducing empirical coefficients, which contributes to an increase in the structural safety of design solutions, including those for high-strength concrete elements.
HUMAN BODY SHAPE INDEX BASED ON AN EXPERIMENTALLY DERIVED MODEL OF HUMAN GROWTH
Lebiedowska, Maria K.; Alter, Katharine E.; Stanhope, Steven J.
2009-01-01
Objectives To test the assumption of geometrically similar growth by developing experimentally derived models of human body growth during the age interval of 5-18 years; to use the derived growth models to establish a new Human Body Shape Index (HBSI) based on natural age-related changes in HBS; and to compare various metrics of relative body weight (body mass index, ponderal index, HBSI) in a sample of 5-18 year old children. Study design Non-disabled Polish children (N=847) participated in this descriptive study. To model growth, the best fit between body height (H) and body mass (M) was calculated for each sex with the allometric equation M = m_i H^χ. HBSI was calculated separately for girls and boys, using sex-specific values for χ, and a general HBSI was calculated from the combined data. The customary body mass and ponderal indices were calculated and compared to HBSI values. Results The models of growth were M = 13.11H^2.84 (R^2 = 0.90) and M = 13.64H^2.68 (R^2 = 0.91) for girls and boys, respectively. HBSI values contained less inherent variability and were less influenced by growth (age and height) than the customary indices. Conclusion Age-related growth during childhood is sex-specific and not geometrically similar. Therefore, indices of human body shape formulated from experimentally derived models of human growth are superior to customary geometric similarity-based indices for the characterization of human body shape in children during the formative growth years. PMID:18154897
Human body shape index based on an experimentally derived model of human growth.
Lebiedowska, Maria K; Alter, Katharine E; Stanhope, Steven J
2008-01-01
To test the assumption of geometrically similar growth by developing experimentally derived models of human body growth during the age interval of 5 to 18 years; to use these derived growth models to establish a new human body shape index (HBSI) based on natural age-related changes in human body shape (HBS); and to compare various metrics of relative body weight (body mass index [BMI], ponderal index [PI], and HBSI) in a sample of 5- to 18-year-old children. Nondisabled Polish children (n = 847) participated in this descriptive study. To model growth, the best fit between body height (H) and body mass (M) was calculated for each sex using the allometric equation M = m(i) H(chi). HBSI was calculated separately for girls and boys, using sex-specific values for chi and a general HBSI from combined data. The customary BMI and PI were calculated and compared with HBSI values. The models of growth were M = 13.11H(2.84) (R2 = 0.90) for girls and M = 13.64H(2.68) (R2 = 0.91) for boys. HBSI values contained less inherent variability and were less influenced by growth (age and height) compared with BMI and PI. Age-related growth during childhood is sex-specific and not geometrically similar. Therefore, indices of HBS formulated from experimentally derived models of human growth are superior to customary geometric similarity-based indices for characterizing HBS in children during the formative growth years.
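The allometric fit M = m(i) H^chi used in both versions of this study reduces to linear regression in log-log coordinates. A minimal sketch (function name and interface are ours, not the authors'):

```python
import math

def fit_allometric(heights_m, masses_kg):
    """Fit M = m_i * H**chi by least squares in log-log space.

    Returns (m_i, chi): the coefficient and the allometric exponent.
    Geometrically similar growth would give chi = 3; the study found
    chi = 2.84 (girls) and 2.68 (boys).
    """
    xs = [math.log(h) for h in heights_m]
    ys = [math.log(m) for m in masses_kg]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    chi = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
           / sum((x - xbar) ** 2 for x in xs))
    m_i = math.exp(ybar - chi * xbar)   # intercept back-transformed
    return m_i, chi
```

A shape index of the form M / H^chi then removes the fitted growth trend, which is why the HBSI varies less with age and height than BMI (chi = 2) or PI (chi = 3).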
Spray Modelling for Multifuel Engines.
1982-07-01
representation of equation 44. Fig. 36: Comparison of calculated and experimental values of Sauter mean diameter. ... fuel, and the effect of various parameters has been determined experimentally. Generalized expressions have been determined for the calculation of ... average properties of velocity, pressure, temperature, and chemical species concentration. Elkotb [118] used this theory in the calculation of the flow field
A molecular dynamics simulation study of chloroform
NASA Astrophysics Data System (ADS)
Tironi, Ilario G.; van Gunsteren, Wilfred F.
Three different chloroform models have been investigated using molecular dynamics computer simulation. The thermodynamic, structural and dynamic properties of the various models were investigated in detail. In particular, the potential energies, diffusion coefficients and rotational correlation times obtained for each model are compared with experiment. It is found that the theory of rotational Brownian motion fails in describing the rotational diffusion of chloroform. The force field of Dietz and Heinzinger was found to give good overall agreement with experiment. An extended investigation of this chloroform model has been performed. Values are reported for the isothermal compressibility, the thermal expansion coefficient and the constant volume heat capacity. The values agree well with experiment. The static and frequency dependent dielectric permittivity were computed from a 1.2 ns simulation conducted under reaction field boundary conditions. Considering the fact that the model is rigid with fixed partial charges, the static dielectric constant and Debye relaxation time compare well with experiment. From the same simulation the shear viscosity was computed using the off-diagonal elements of the pressure tensor, both via an Einstein type relation and via a Green-Kubo equation. The calculated viscosities show good agreement with experimental values. The excess Helmholtz energy is calculated using the thermodynamic integration technique and simulations of 50 and 80 ps. The value obtained for the excess Helmholtz energy matches the theoretical value within a few per cent.
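The Green-Kubo route to shear viscosity mentioned above integrates the autocorrelation of an off-diagonal pressure-tensor component, eta = V/(kB*T) * ∫ <P_xy(0) P_xy(t)> dt. A bare-bones sketch follows; a production analysis would add windowing, block averaging over the three off-diagonal components, and convergence checks.

```python
import numpy as np

def green_kubo_viscosity(p_xy, dt, volume, temperature, kB=1.380649e-23):
    """Shear viscosity from the Green-Kubo relation (illustrative sketch).

    p_xy: time series of one off-diagonal pressure-tensor element (Pa),
    dt: sampling interval (s), volume (m^3), temperature (K).
    """
    p = np.asarray(p_xy, float)
    p = p - p.mean()                      # remove any residual mean
    n = len(p)
    # autocorrelation, normalized by the number of overlapping samples per lag
    acf = np.correlate(p, p, mode="full")[n - 1:] / np.arange(n, 0, -1)
    integral = np.sum(0.5 * (acf[1:] + acf[:-1])) * dt   # trapezoidal rule
    return volume / (kB * temperature) * integral
```

The Einstein-relation route mentioned in the abstract is numerically different (a mean-squared-displacement slope) but should converge to the same value for a long enough trajectory.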
Small field models with gravitational wave signature supported by CMB data
Brustein, Ramy
2018-01-01
We study the scale dependence of the cosmic microwave background (CMB) power spectrum in a class of small, single-field models of inflation which lead to a high value of the tensor-to-scalar ratio. The inflaton potentials that we consider are degree-5 polynomials, for which we precisely calculate the power spectrum and extract the cosmological parameters: the scalar index ns, the running of the scalar index nrun, and the tensor-to-scalar ratio r. We find that for non-vanishing nrun and for r as small as r = 0.001, the precisely calculated values of ns and nrun deviate significantly from what the standard analytic treatment predicts. We study in detail, and discuss, the probable reasons for such deviations. As such, all previously considered models of this kind are based upon inaccurate assumptions. We scan the possible values of the potential parameters for which the cosmological parameters are within the range allowed by observations. The five-parameter class is able to reproduce all of the allowed values of ns and nrun for values of r as high as 0.001. This study thus at once refutes previous such models built using the analytical Stewart-Lyth term and revives the small-field brand by building models that do yield an appreciable r while conforming to known CMB observables. PMID:29795608
Nonmarket economic user values of the Florida Keys/Key West
Vernon R. Leeworthy; J. Michael Bowker
1997-01-01
This report provides estimates of the nonmarket economic user values for recreating visitors to the Florida Keys/Key West who participated in natural resource-based activities. Results from estimated travel cost models are presented, including visitors' responses to prices and estimated per person-trip user values. Annual user values are also calculated and presented...
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. 
Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
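The OPR statistic described above follows from the linear expression for prediction standard deviation, s_p = sqrt(z^T (X^T w X)^{-1} z), where X holds observation sensitivities, w the observation weights, and z the prediction sensitivities. A compact sketch (diagonal weights and hypothetical array shapes; the real OPR-PPR program reads these from model files):

```python
import numpy as np

def opr_statistic(X, w, z, omit):
    """Percent increase in prediction standard deviation when observation
    `omit` is removed from the calibration set (first-order linear theory).

    X: (n_obs, n_par) sensitivities of observations to parameters
    w: (n_obs,) observation weights (diagonal weight matrix assumed)
    z: (n_par,) sensitivities of the prediction to parameters
    """
    X = np.asarray(X, float)
    w = np.asarray(w, float)
    z = np.asarray(z, float)

    def pred_sd(rows):
        Xs = X[rows]
        cov = np.linalg.inv(Xs.T @ (Xs * w[rows, None]))  # parameter covariance (up to s^2)
        return float(np.sqrt(z @ cov @ z))

    all_rows = np.arange(X.shape[0])
    sd_all = pred_sd(all_rows)
    sd_omit = pred_sd(np.delete(all_rows, omit))
    return 100.0 * (sd_omit - sd_all) / sd_all
```

As the report notes, the statistic measures leverage only: the observed value itself never enters, and sensitivities are evaluated once, at the calibrated parameter values.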
Uchida, Takashi; Yakumaru, Masafumi; Nishioka, Keisuke; Higashi, Yoshihiro; Sano, Tomohiko; Todo, Hiroaki; Sugibayashi, Kenji
2016-01-01
We evaluated the effectiveness of a silicone membrane as an alternative to human skin using the skin permeation parameters of chemical compounds. An in vitro permeation study using 15 model compounds was conducted, and permeation parameters comprising the permeability coefficient (P), diffusion parameter (DL(-2)), and partition parameter (KL) were calculated from each permeation profile. Significant correlations were obtained in log P, log DL(-2), and log KL values between the silicone membrane and human skin. DL(-2) values of the model compounds, except flurbiprofen, in the silicone membrane were independent of the lipophilicity of the model compounds and were 100-fold higher than those in human skin. For antipyrine and caffeine, which are hydrophilic, KL values in the silicone membrane were 100-fold lower than those in human skin, and P values, calculated as the product of DL(-2) and KL, were similar. For lipophilic compounds, such as n-butyl paraben and flurbiprofen, KL values for silicone were similar to or 10-fold higher than those in human skin, and P values for silicone were 100-fold higher than those in human skin. Furthermore, for amphiphilic compounds with log Ko/w values from 0.5 to 3.5, KL values in the silicone membrane were 10-fold lower than those in human skin, and P values for silicone were 10-fold higher than those in human skin. The silicone membrane was useful as a human skin alternative in an in vitro skin permeation study. However, depending on the lipophilicity of the model compounds, some parameters may be over- or underestimated.
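The three parameters named above are linked by P = DL(-2) × KL. Under the standard lag-time analysis of a steady-state permeation profile (an assumption here; the authors may have fitted the full diffusion equation instead), they can be obtained as:

```python
def permeation_parameters(J_ss, C_v, t_lag):
    """Skin permeation parameters from a lag-time analysis.

    J_ss: steady-state flux (e.g. g/cm^2/s), C_v: donor concentration
    (g/cm^3), t_lag: lag time (s); units must be consistent.
    Returns (P, DL2, KL) with P = DL2 * KL by construction.
    """
    P = J_ss / C_v               # permeability coefficient (cm/s)
    DL2 = 1.0 / (6.0 * t_lag)    # diffusion parameter D/L^2 (1/s)
    KL = P / DL2                 # partition parameter K*L (cm)
    return P, DL2, KL
```

The decomposition explains the abstract's compensation effect: for hydrophilic compounds the silicone membrane's 100-fold higher DL(-2) and 100-fold lower KL cancel, leaving P similar to human skin.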
Value-Added Results for Public Virtual Schools in California
ERIC Educational Resources Information Center
Ford, Richard; Rice, Kerry
2015-01-01
The objective of this paper is to present value-added calculation methods that were applied to determine whether online schools performed at the same or different levels relative to standardized testing. This study includes information on how we approached our value-added model development and the results for 32 online public high schools in…
Dose-Response Calculator for ArcGIS
Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.
2011-01-01
The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.
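The curve construction described above, extracting a univariate response from two overlapping rasters, can be sketched by binning the response raster's values by the dose raster's values and averaging within bins. This is a simplified stand-in for the ArcGIS tool, which additionally handles raster alignment, NoData cells, and tabular export.

```python
import numpy as np

def dose_response_curve(dose, response, n_bins=10):
    """Mean response within equal-width bins of the dose variable.

    dose, response: arrays of identical shape (e.g. two raster grids).
    Returns (bin_centers, bin_means); empty bins yield NaN.
    """
    d = np.asarray(dose, float).ravel()
    r = np.asarray(response, float).ravel()
    edges = np.linspace(d.min(), d.max(), n_bins + 1)
    idx = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)  # bin index per cell
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([r[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    return centers, means
```

Because every raster cell contributes with its actual covariate values, the curve reflects the variation of all explanatory variables rather than holding them constant, which is the advantage the abstract highlights.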
NASA Astrophysics Data System (ADS)
Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.
2017-01-01
Within the framework of research into the possibility and prospects of power density equalization in boiling water reactors (as exemplified by the WB-50), work was undertaken to improve the prior computational model of the WB-50 reactor implemented in the MCU-RR software. Analysis of prior works showed that critical state calculations have deviations of calculated reactivity exceeding ±0.3% (ΔKef/Kef) at minimum concentrations of boric acid in the reactor water and reaching 2% at maximum concentration values. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor; thus, the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7% (ΔKef/Kef) and a dispersion of design values of ±1% (ΔK/K) are deemed acceptable. Further lowering of these parameters apparently requires root cause analysis of such large values and more attention to experimental measurement techniques.
Wang, Peng; Fang, Weining; Guo, Beiyuan
2017-04-01
This paper proposes a colored Petri net based workload evaluation model. A formal interpretation of workload is first introduced, based on the mapping of Petri net components to task elements. A Petri net based description of multiple resource theory is given by approaching it from a new angle. A new application of the VACP rating scales, named the V/A-C-P unit, and a definition of colored transitions are proposed to build a model of the task process. The calculation of workload has four main steps: determine the tokens' initial positions and values; calculate the weights of the directed arcs on the basis of the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out based on the Multi-Attribute Task Battery-II software. Our results show a strong correlation between the model values and NASA Task Load Index scores (r=0.9513). In addition, this method can also distinguish behavior characteristics between different people. Copyright © 2016 Elsevier Ltd. All rights reserved.
Validation of a program for supercritical power plant calculations
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian
2011-12-01
This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated using the in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities of the high- and low-pressure regenerative heat exchangers, and pressure losses in the heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties such as the thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, and the heat provided to and taken from the steam cycle were compared. The last step of the analysis was the calculation of the relative errors of the compared values; the method used for these calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that the GateCycle software is suitable for calculations of supercritical power plants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryukhin, V. V., E-mail: bryuhin@yandex.ru; Kurakin, K. Yu.; Uvakin, M. A.
The article covers the uncertainty analysis of the physical calculations of the VVER reactor core for different meshes of the reference values of the feedback parameters (FBP). Various numbers of nodes of the parametric axes of FBPs and different ranges between them are investigated. The uncertainties of the dynamic calculations are analyzed using RTS RCCA ejection as an example within the framework of the model with the boundary conditions at the core inlet and outlet.
NASA Technical Reports Server (NTRS)
Burris, John; McGee, Thomas J.; Hoegy, Walt; Lait, Leslie; Sumnicht, Grant; Twigg, Larry; Heaps, William
2000-01-01
Temperature profiles acquired by Goddard Space Flight Center's AROTEL lidar during the SOLVE mission onboard NASA's DC-8 are compared with predicted values from several atmospheric models (DAO, NCEP and UKMO). The variability in the differences between measured and calculated temperature fields was approximately 5 K. Retrieved temperatures within the polar vortex showed large regions that were significantly colder than predicted by the atmospheric models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Civitarese, Osvaldo; Suhonen, Jouni
In this work we report on general properties of the nuclear matrix elements (NMEs) involved in the neutrinoless double β⁻ decays (0νβ⁻β⁻ decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, such as the interacting shell model (ISM) and the interacting boson model (IBA-2).
Sharma, Ity; Kaminski, George A
2017-01-15
Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework with an average error of 0.136 kcal/mol. It was then used to compute pK a shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pK a values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and Poisson-Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pK a values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pK a shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pK a values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pK a value of 7.37 units. Thus, the presented model is capable of reproducing pK a results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.
Cooper, Justin; Marx, Bernd; Buhl, Johannes; Hombach, Volker
2002-09-01
This paper investigates the minimum distance for a human body in the near field of a cellular telephone base station antenna for which there is compliance with the IEEE or ICNIRP threshold values for radio frequency electromagnetic energy absorption in the human body. First, local maximum specific absorption rates (SARs), measured and averaged over volumes equivalent to 1 and to 10 g tissue within the trunk region of a physical, liquid filled shell phantom facing and irradiated by a typical GSM 900 base station antenna, were compared to corresponding calculated SAR values. The calculation used a homogeneous Visible Human body model in front of a simulated base station antenna of the same type. Both real and simulated base station antennas operated at 935 MHz. Antenna-body distances were between 1 and 65 cm. The agreement between measurements and calculations was excellent. This gave confidence in the subsequent calculated SAR values for the heterogeneous Visible Human model, for which each tissue was assigned the currently accepted values for permittivity and conductivity at 935 MHz. Calculated SAR values within the trunk of the body were found to be about double those for the homogeneous case. When the IEEE standard and the ICNIRP guidelines are both to be complied with, the local SAR averaged over 1 g tissue was found to be the determining parameter. Emitted power values from the antenna that produced the maximum SAR value over 1 g specified in the IEEE standard at the base station are less than those needed to reach the ICNIRP threshold specified for the local SAR averaged over 10 g. For the GSM base station antenna investigated here operating at 935 MHz with 40 W emitted power, the model indicates that the human body should not be closer to the antenna than 18 cm for controlled environment exposure, or about 95 cm for uncontrolled environment exposure. These safe distance limits are for SARs averaged over 1 g tissue. 
The corresponding safety distance limits under the ICNIRP guidelines for SAR taken over 10 g tissue are 5 cm for occupational exposure and about 75 cm for general-public exposure. Copyright 2002 Wiley-Liss, Inc.
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
Development of a digital indoor archery simulator based on embedded systems addresses the limited availability of adequate fields and open spaces, especially in big cities. The device requires a simulation that calculates the score achieved on the target, based on a parabolic-motion model parameterized by the arrow's initial velocity and direction toward the target. The simulator device is therefore complemented with an ultrasonic sensor to measure the initial velocity and a digital camera to measure the direction toward the target. The methodology follows a research-and-development approach to the application software, using modeling and simulation. The research objective was to create a simulation application that calculates the score of arrow hits, as a preliminary stage in developing the archery simulator device. Implementing the score calculation in an application program yields an archery simulation game that can serve as a reference for developing a digital indoor archery simulator with embedded systems using ultrasonic sensors and web cameras. The application compares the simulated hit position against the outer radii of the target circles as imaged by a camera from a distance of three meters.
Determining spherical lens correction for astronaut training underwater.
Porter, Jason; Gibson, C Robert; Strauss, Samuel
2011-09-01
To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submersed in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (r = 0.971) with 70% of eyes having a difference in magnitude of <0.25 D between values. We devised a model to calculate the spherical spectacle lens correction needed to be worn underwater by National Aeronautics and Space Administration astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
Determining spherical lens correction for astronaut training underwater
Porter, Jason; Gibson, C. Robert; Strauss, Samuel
2013-01-01
Purpose To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration (NASA) astronauts while training underwater. The replica space suit’s helmet contains curved visors that induce refractive power when submersed in water. Methods Anterior surface powers and thicknesses were measured for the helmet’s protective and inside visors. The impact of each visor on the helmet’s refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet’s total induced spherical power underwater and the astronaut’s manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. Results The helmet visors induced a total power of −2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (R = 0.971) with 70% of eyes having a difference in magnitude of < 0.25 D between values. Conclusions We devised a model to calculate the spherical spectacle lens correction needed to be worn underwater by National Aeronautics and Space Administration astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater. PMID:21623249
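The linear relation reported in both records above, FW = FAir + 2.356 D, can be applied directly. A minimal sketch (the helper name is ours; the offset is the fitted value from the abstract):

```python
def underwater_correction(f_air_diopters: float) -> float:
    """Distance spherical correction (diopters) to wear underwater, given
    the manifest spectacle-plane correction in air, using the linear
    relation reported in the study: FW = FAir + 2.356 D."""
    return f_air_diopters + 2.356

# e.g. an astronaut wearing -1.00 D in air needs roughly +1.36 D underwater
print(round(underwater_correction(-1.00), 3))
```

Note that the +2.356 D offset is not simply the negative of the -2.737 D visor power; per the abstract, the relation was derived from geometrical-optics approximations that account for the spectacle-plane correction.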
Parameterizing the Variability and Uncertainty of Wind and Solar in CEMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany
We present current and improved methods for estimating the capacity value and curtailment impacts of variable generation (VG) in capacity expansion models (CEMs). The ideal calculation of these variability metrics is through an explicit co-optimized investment-dispatch model using multiple years of VG and load data. Because of data and computational limitations, existing CEMs typically approximate these metrics using a subset of all hours from a single year and/or using statistical methods, which often do not capture the tail-event impacts or the broader set of interactions between VG, storage, and conventional generators. In our proposed new methods, we use hourly generation and load values across all hours of the year to characterize (1) the contribution of VG to system capacity during high load hours, (2) the curtailment level of VG, and (3) the reduction in VG curtailment due to storage and shutdown of select thermal generators. Using CEM outputs from a preceding model solve period, we apply these methods to exogenously calculate capacity value and curtailment metrics for the subsequent model solve period. Preliminary results suggest that these hourly methods offer improved capacity value and curtailment representations of VG in the CEM over existing approximation methods, without additional computational burden.
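The kind of hourly bookkeeping described above can be sketched as follows. This is a hedged illustration only: the top-hour heuristic, the nameplate proxy, and all names are our assumptions, not the exact method used in the study.

```python
def vg_capacity_and_curtailment(load, vg, top_n):
    """Illustrative hourly VG metrics (all units MW, one value per hour):
    - capacity value: mean VG output during the top_n highest-load hours,
      as a fraction of nameplate (proxied here by the max hourly output);
    - curtailment: fraction of annual VG energy in excess of hourly load."""
    top_hours = sorted(range(len(load)), key=lambda h: load[h], reverse=True)[:top_n]
    nameplate = max(vg)
    cap_value = sum(vg[h] for h in top_hours) / (top_n * nameplate)
    curtailed = sum(max(0.0, vg[h] - load[h]) for h in range(len(load)))
    return cap_value, curtailed / sum(vg)

load = [10.0, 8.0, 6.0, 4.0]   # hypothetical hourly load
vg   = [5.0, 2.0, 7.0, 1.0]    # hypothetical hourly VG output
print(vg_capacity_and_curtailment(load, vg, top_n=2))  # capacity value 0.5, curtailment 1/15
```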
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heifetz, Alexander; Vilim, Richard
Supercritical carbon dioxide (S-CO2) is a promising thermodynamic cycle for advanced nuclear reactors and solar energy conversion applications. Dynamic control of the proposed recompression S-CO2 cycle is accomplished with input from resistance temperature detector (RTD) measurements of the process fluid. One of the challenges in practical implementation of the S-CO2 cycle is the high corrosion rate of component and sensor materials. In this paper, we develop a mathematical model of RTD sensing using an eigendecomposition model of radial heat transfer in a layered long cylinder. We show that the value of the RTD time constant primarily depends on the rate of heat transfer from the fluid to the outer wall of the RTD. We also show that, for typical material properties, the RTD time constant can be calculated as the sum of the reciprocal eigenvalues of the heat transfer matrix. Using the computational model and a set of RTD and CO2 fluid thermophysical parameter values, we calculate the value of the time constant of a thermowell-mounted RTD sensor at the hot side of the precooler in the S-CO2 cycle. The eigendecomposition model of the RTD will be used in future studies to model sensor degradation and its impact on control of the S-CO2 cycle. (C) 2016 Elsevier B.V. All rights reserved.
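The headline result above (time constant as the sum of reciprocal eigenvalues of the heat-transfer matrix) is easy to sketch for a 2x2 case. The matrix below is a hypothetical stand-in, not the paper's layered-cylinder matrix:

```python
import math

def rtd_time_constant(H):
    """Sum of reciprocal eigenvalues of a 2x2 heat-transfer matrix H
    (eigenvalues found via the characteristic quadratic). For a 2x2 matrix
    this equals trace(H) / det(H), a handy cross-check."""
    (a, b), (c, d) = H
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return 1.0 / lam1 + 1.0 / lam2

# hypothetical rates (1/s): fluid-to-sheath and sheath-to-element coupling
H = [[0.5, -0.1],
     [-0.1, 0.2]]
print(rtd_time_constant(H))  # seconds; equals 0.7 / 0.09
```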
NASA Astrophysics Data System (ADS)
Shevenell, Lisa
1999-03-01
Values of evapotranspiration are required for a variety of water planning activities in arid and semi-arid climates, yet data requirements are often large, and it is costly to obtain this information. This work presents a method where a few, readily available data (temperature, elevation) are required to estimate potential evapotranspiration (PET). A method using measured temperature and the calculated ratio of total to vertical radiation (after the work of Behnke and Maxey, 1969) to estimate monthly PET was applied for the months of April-October and compared with pan evaporation measurements. The test area used in this work was in Nevada, which has 124 weather stations that record sufficient amounts of temperature data. The calculated PET values were found to be well correlated (R2 = 0.940-0.983, slopes near 1.0) with mean monthly pan evaporation measurements at eight weather stations. In order to extrapolate these calculated PET values to areas without temperature measurements and to sites at differing elevations, the state was divided into five regions based on latitude, and linear regressions of PET versus elevation were calculated for each of these regions. These extrapolated PET values generally compare well with the pan evaporation measurements (R2 = 0.926-0.988, slopes near 1.0). The estimated values are generally somewhat lower than the pan measurements, in part because the effects of wind are not explicitly considered in the calculations, and near-freezing temperatures result in a calculated PET of zero at higher elevations in the spring months. The calculated PET values for April-October are 84-100% of the measured pan evaporation values. Using digital elevation models in a geographical information system, calculated values were adjusted for slope and aspect, and the data were used to construct a series of maps of monthly PET. The resultant maps show a realistic distribution of regional variations in PET throughout Nevada which inversely mimics topography.
The general methods described here could be used to estimate regional PET in other arid western states (e.g. New Mexico, Arizona, Utah) and arid regions world-wide (e.g. parts of Africa).
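The extrapolation step above (regress PET on elevation within a latitude region, then predict at unmeasured sites) can be sketched with an ordinary least-squares line; the station data below are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, used here to regress
    monthly PET against station elevation within one region."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

# hypothetical July PET (mm) at four station elevations (m) in one region
elev = [1200.0, 1500.0, 1800.0, 2100.0]
pet = [210.0, 195.0, 178.0, 165.0]
a, b = fit_line(elev, pet)
print(a + b * 1650.0)  # PET estimated at an unmeasured 1650 m site
```

As in the study, PET decreases with elevation, so the fitted slope is negative.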
Dyekjaer, Jane Dannow; Jónsdóttir, Svava Osk
2004-01-22
Quantitative Structure-Property Relationships (QSPR) have been developed for a series of monosaccharides, including the physical properties of partial molar heat capacity, heat of solution, melting point, heat of fusion, glass-transition temperature, and solid state density. The models were based on molecular descriptors obtained from molecular mechanics and quantum chemical calculations, combined with other types of descriptors. Saccharides exhibit a large degree of conformational flexibility, therefore a methodology for selecting the energetically most favorable conformers has been developed, and was used for the development of the QSPR models. In most cases good correlations were obtained for monosaccharides. For five of the properties predictions were made for disaccharides, and the predicted values for the partial molar heat capacities were in excellent agreement with experimental values.
The importance of the external potential on group electronegativity.
Leyssens, Tom; Geerlings, Paul; Peeters, Daniel
2005-11-03
The electronegativity of groups placed in a molecular environment is obtained using CCSD calculations of the electron affinity and ionization energy. A point charge model is used as an approximation of the molecular environment. The electronegativity values obtained in the presence of a point charge model are compared to the isolated group property to estimate the importance of the external potential on the group's electronegativity. The validity of the "group in molecule" electronegativities is verified by comparing EEM (electronegativity equalization method) charge transfer values to the explicitly calculated natural population analysis (NPA) ones, as well as by comparing the variation in electronegativity between the isolated functional group and the functional group in the presence of a modeled environment with the variation based on a perturbation expansion of the chemical potential.
Climate patterns as predictors of amphibian species richness and indicators of potential stress
Battaglin, W.; Hay, L.; McCabe, G.; Nanjappa, P.; Gallant, Alisa L.
2005-01-01
Amphibians occupy a range of habitats throughout the world, but species richness is greatest in regions with moist, warm climates. We modeled the statistical relations of anuran and urodele species richness with mean annual climate for the conterminous United States, and compared the strength of these relations at national and regional levels. Model variables were calculated for county and subcounty mapping units, and included 40-year (1960-1999) annual mean and mean annual climate statistics, mapping unit average elevation, mapping unit land area, and estimates of anuran and urodele species richness. Climate data were derived from more than 7,500 first-order and cooperative meteorological stations and were interpolated to the mapping units using multiple linear regression models. Anuran and urodele species richness were calculated from the United States Geological Survey's Amphibian Research and Monitoring Initiative (ARMI) National Atlas for Amphibian Distributions. The national multivariate linear regression (MLR) model of anuran species richness had an adjusted coefficient of determination (R2) value of 0.64 and the national MLR model for urodele species richness had an R2 value of 0.45. Stratifying the United States by coarse-resolution ecological regions provided models for anurans that ranged in R2 values from 0.15 to 0.78. Regional models for urodeles had R2 values ranging from 0.27 to 0.74. In general, regional models for anurans were more strongly influenced by temperature variables, whereas precipitation variables had a larger influence on urodele models.
2008-03-01
[Figures 3.6 and 3.7: SDVF for MVM] The SDVF of Section 3.6 is used to calculate the value earned by each mission, MVM. This calculation is as follows: MVM = 1 - (MOBest,M - ZMO) / (MOBest,M - MOWorst,M), for all missions, M, (1 thru i), where MVM = earned value for a given M, MOBest,M = Best case score for a given M
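Read as a normalization of the mission-outcome score between its best- and worst-case values, the earned-value calculation above can be sketched as follows. Note the exact form of the garbled source formula is our reading of it, so treat the expression as an assumption:

```python
def mission_value(score, best, worst):
    """Earned value MV for one mission: normalizes the mission-outcome
    score between best- and worst-case values,
    MV = 1 - (best - score) / (best - worst).
    Yields 1.0 at the best-case score and 0.0 at the worst case."""
    return 1.0 - (best - score) / (best - worst)

print(mission_value(75.0, best=100.0, worst=50.0))  # midway score -> 0.5
```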
NASA Astrophysics Data System (ADS)
Khalaf, A. M.; Khalifa, M. M.; Solieman, A. H. M.; Comsan, M. N. H.
2018-01-01
Owing to its doubly magic nature, with equal numbers of protons and neutrons, nuclear scattering from 40Ca can be successfully described by the optical model, which assumes a spherical nuclear potential. Therefore, optical model analysis was employed to calculate the elastic scattering cross section for the p + 40Ca interaction at energies from 9 to 22 MeV, as well as the polarization at energies from 10 to 18.2 MeV. New optical model parameters (OMPs) were proposed based on the best fit to experimental data. It is found that the best-fit OMPs depend on the energy through smooth relationships. The results were compared with other OMP sets with regard to their chi-square (χ2) values. The obtained OMP set was used to calculate the volume integral of the potentials and the root mean square (rms) value of the nuclear matter radius of 40Ca. In addition, 40Ca bulk nuclear matter properties were discussed utilizing both the obtained rms radius and the Thomas-Fermi rms radius calculated using the spherical Hartree-Fock formalism with a Skyrme-type nucleon-nucleon force. The SCAT2000 FORTRAN nuclear scattering code was used for the optical model analysis.
Activities of NASA's Global Modeling Initiative (GMI) in the Assessment of Subsonic Aircraft Impact
NASA Technical Reports Server (NTRS)
Rodriquez, J. M.; Logan, J. A.; Rotman, D. A.; Bergmann, D. J.; Baughcum, S. L.; Friedl, R. R.; Anderson, D. E.
2004-01-01
The Intergovernmental Panel on Climate Change estimated a peak increase in ozone ranging from 7-12 ppbv (zonal and annual average, relative to a baseline with no aircraft) due to subsonic aircraft in the year 2015, corresponding to aircraft emissions of 1.3 TgN/year. This range of values presumably reflects differences in model input (e.g., chemical mechanism, ground emission fluxes, and meteorological fields) and algorithms. The model implemented by the Global Modeling Initiative allows testing the impact of individual model components on the assessment calculations. We present results of the impact of doubling the 1995 aircraft emissions of NOx, corresponding to an extra 0.56 TgN/year, utilizing meteorological data from NASA's Data Assimilation Office (DAO), the Goddard Institute for Space Studies (GISS), and the Middle Atmosphere Community Climate Model, version 3 (MACCM3). Comparison of results to observations can be used to assess the model performance. Peak ozone perturbations ranging from 1.7 to 2.2 ppbv are calculated using the different fields. These correspond to increases in total tropospheric ozone ranging from 3.3 to 4.1 Tg of O3. These perturbations are consistent with the IPCC results, given the difference in aircraft emissions; however, the range of values calculated is much smaller than in IPCC.
NASA Astrophysics Data System (ADS)
Perera, Dimuthu
Diffusion weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide the optimal parameter combination based on the percentage accuracy and precision for prostate peripheral-region cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DW images to adjust the image SNR. Using the two DW images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. For each parameter setting, 40,000 ADC samples were collected to determine the mean and standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated using the difference between the known and calculated ADC. The precision was calculated using the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision decreased with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation on total scan time.
The optimal NEX combination for tumor and normal tissue for the prostate peripheral region was 1:9. The minimum percentage accuracy and percentage precision were obtained with a low b-value of 0 and a high b-value of 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. Results also showed that for tissues with 1 x 10-3 < ADC < 2.1 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 800) s/mm2, NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Likewise, for tissues with 0.6 x 10-3 < ADC < 1.25 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 1400) s/mm2, NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
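The two-point mono-exponential ADC estimate used in the study follows from S(b) = S0·exp(−b·ADC); solving for ADC from two acquisitions gives the expression below. A noiseless sketch (the signal values are invented):

```python
import math

def adc_from_two_bvalues(s_low, s_high, b_low, b_high):
    """Mono-exponential ADC estimate (mm^2/s) from two DW signals:
    S(b) = S0 * exp(-b * ADC)  =>
    ADC = ln(S(b_low) / S(b_high)) / (b_high - b_low)."""
    return math.log(s_low / s_high) / (b_high - b_low)

# noiseless check: signals generated with a known ADC are recovered exactly
true_adc = 1.02e-3                       # tumor-like value from the abstract
s0, b_lo, b_hi = 100.0, 0.0, 1400.0      # b-values in s/mm^2
s_hi = s0 * math.exp(-b_hi * true_adc)
print(adc_from_two_bvalues(s0, s_hi, b_lo, b_hi))  # recovers 1.02e-3
```

With noise, the estimate becomes biased and spread out, which is exactly the accuracy/precision trade-off the simulation above characterizes.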
Tension fatigue of glass/epoxy and graphite/epoxy tapered laminates
NASA Technical Reports Server (NTRS)
Murri, Gretchen B.; Obrien, T. Kevin; Salpekar, Satish A.
1990-01-01
Symmetric tapered laminates with internally dropped plies were tested with two different layups and two materials, S2/SP250 glass/epoxy and IM6/1827I graphite/epoxy. The specimens were loaded in cyclic tension until they delaminated unstably. Each combination of material and layup had a unique failure mode. Calculated values of the strain energy release rate, G, from a finite element model of delamination along the taper, and of delamination from a matrix ply crack, were used with mode I fatigue characterization data for these materials to calculate expected delamination onset loads. The calculated values were compared to the experimental results. The comparison showed that when the calculated G was chosen according to the observed delamination failures, the agreement between the calculated and measured delamination onset loads was reasonable for each combination of layup and material.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs, and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
Highly ionized atoms in cooling gas. [in model for cooling of hot Galactic corona
NASA Technical Reports Server (NTRS)
Edgar, Richard J.; Chevalier, Roger A.
1986-01-01
The ionization of low density gas cooling from a high temperature was calculated. The evolution during the cooling is assumed to be isochoric, isobaric, or a combination of these cases. The calculations are used to predict the column densities and ultraviolet line luminosities of highly ionized atoms in cooling gas. In a model for cooling of a hot galactic corona, it is shown that the observed value of N(N V) can be produced in the cooling gas, while the predicted value of N(Si IV) falls short of the observed value by a factor of about 5. The same model predicts fluxes of ultraviolet emission lines that are a factor of 10 lower than the claimed detections of Feldman, Bruna, and Henry. Predictions are made for ultraviolet lines in cooling flows in early-type galaxies and clusters of galaxies. It is shown that the column densities of interest vary over a fairly narrow range, while the emission line luminosities are simply proportional to the mass inflow rate.
NASA Astrophysics Data System (ADS)
Kolyari I., G.
2018-05-01
The proposed theoretical model allows one, for a perfectly elastic collision of three bodies (three mass points), to calculate: 1) the definite values of the three bodies' velocities projected, after the collision, onto the straight line along which the bodies moved before the collision; 2) the definite values of the scattered bodies' velocities in the plane, and the definite values of the angles between the bodies' momenta (or velocities) after the collision when moving in the plane. The proposed calculation model for the velocities of the three colliding bodies is consistent with the dynamic model of the same bodies' interaction during the collision, given that the energy flow of the entire system is conserved before and after the collision. It is shown that under perfectly elastic interaction during a three-body collision, the energy flow is conserved in addition to momentum and energy.
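The conservation laws underlying such models are easiest to see in the standard one-dimensional two-body case. The sketch below is that textbook building block, not the paper's three-body construction, which additionally resolves planar scattering angles:

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D perfectly elastic two-body
    collision, derived from momentum and kinetic-energy conservation."""
    u1 = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return u1, u2

u1, u2 = elastic_1d(2.0, 3.0, 1.0, -1.0)
# momentum and kinetic energy are both conserved across the collision
assert abs((2.0 * 3.0 + 1.0 * -1.0) - (2.0 * u1 + 1.0 * u2)) < 1e-9
assert abs((0.5 * 2 * 3**2 + 0.5 * 1 * 1**2) - (0.5 * 2 * u1**2 + 0.5 * 1 * u2**2)) < 1e-9
```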
Energy gain calculations in Penning fusion systems using a bounce-averaged Fokker-Planck model
NASA Astrophysics Data System (ADS)
Chacón, L.; Miley, G. H.; Barnes, D. C.; Knoll, D. A.
2000-11-01
In spherical Penning fusion devices, a spherical cloud of electrons, confined in a Penning-like trap, creates the ion-confining electrostatic well. Fusion energy gains for these systems have been calculated in optimistic conditions (i.e., spherically uniform electrostatic well, no collisional ion-electron interactions, single ion species) using a bounce-averaged Fokker-Planck (BAFP) model. Results show that steady-state distributions in which the Maxwellian ion population is dominant correspond to lowest ion recirculation powers (and hence highest fusion energy gains). It is also shown that realistic parabolic-like wells result in better energy gains than square wells, particularly at large well depths (>100 kV). Operating regimes with fusion power to ion input power ratios (Q-value) >100 have been identified. The effect of electron losses on the Q-value has been addressed heuristically using a semianalytic model, indicating that large Q-values are still possible provided that electron particle losses are kept small and well depths are large.
Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A
2018-02-01
The expected value of sample information (EVSI) can help prioritize research, but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAMs) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model-fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
Validation of a Solid Rocket Motor Internal Environment Model
NASA Technical Reports Server (NTRS)
Martin, Heath T.
2017-01-01
In a prior effort, a thermal/fluid model of the interior of Penn State University's laboratory-scale Insulation Test Motor (ITM) was constructed to predict both the convective and radiative heat transfer to the interior walls of the ITM with a minimum of empiricism. These predictions were then compared to values of total and radiative heat flux measured in a previous series of ITM test firings to assess the capabilities and shortcomings of the chosen modeling approach. Though the calculated fluxes reasonably agreed with those measured during testing, this exercise revealed means of improving the fidelity of the model to, in the case of the thermal radiation, enable direct comparison of the measured and calculated fluxes and, for the total heat flux, compute a value indicative of the average measured condition. By replacing the P1 approximation with the discrete ordinates (DO) model for the solution of the gray radiative transfer equation, the radiation intensity field in the optically thin region near the radiometer is accurately estimated, allowing the thermal radiation flux to be calculated on the heat-flux sensor itself and compared directly to the measured values. Though fully coupling the wall thermal response with the flow model was not attempted, owing to the excessive computational time required, a separate wall thermal response model was used to better estimate the average temperature of the graphite surfaces upstream of the heat flux gauges and improve the accuracy of both the total and radiative heat flux computations. The success of this modeling approach increases confidence in the ability of state-of-the-art thermal and fluid modeling to accurately predict SRM internal environments, offers corrections to older methods, and supplies a tool for further studies of the dynamics of SRM interiors.
A critical re-evaluation of the regression model specification in the US D1 EQ-5D value function
2012-01-01
Background The EQ-5D is a generic health-related quality of life instrument (five dimensions with three levels, 243 health states), used extensively in cost-utility/cost-effectiveness analyses. EQ-5D health states are assigned values on a scale anchored in perfect health (1) and death (0). The dominant procedure for defining values for EQ-5D health states involves regression modeling. These regression models have typically included a constant term, interpreted as the utility loss associated with any movement away from perfect health. The authors of the United States EQ-5D valuation study replaced this constant with a variable, D1, which corresponds to the number of impaired dimensions beyond the first. The aim of this study was to illustrate how the use of the D1 variable in place of a constant is problematic. Methods We compared the original D1 regression model with a mathematically equivalent model with a constant term. Comparisons included implications for the magnitude and statistical significance of the coefficients, multicollinearity (variance inflation factors, or VIFs), number of calculation steps needed to determine tariff values, and consequences for tariff interpretation. Results Using the D1 variable in place of a constant shifted all dummy variable coefficients away from zero by the value of the constant, greatly increased the multicollinearity of the model (maximum VIF of 113.2 vs. 21.2), and increased the mean number of calculation steps required to determine health state values. Discussion Using the D1 variable in place of a constant constitutes an unnecessary complication of the model, obscures the fact that at least two of the main effect dummy variables are statistically nonsignificant, and complicates and biases interpretation of the tariff algorithm. PMID:22244261
A critical re-evaluation of the regression model specification in the US D1 EQ-5D value function.
Rand-Hendriksen, Kim; Augestad, Liv A; Dahl, Fredrik A
2012-01-13
The EQ-5D is a generic health-related quality of life instrument (five dimensions with three levels, 243 health states), used extensively in cost-utility/cost-effectiveness analyses. EQ-5D health states are assigned values on a scale anchored in perfect health (1) and death (0). The dominant procedure for defining values for EQ-5D health states involves regression modeling. These regression models have typically included a constant term, interpreted as the utility loss associated with any movement away from perfect health. The authors of the United States EQ-5D valuation study replaced this constant with a variable, D1, which corresponds to the number of impaired dimensions beyond the first. The aim of this study was to illustrate how the use of the D1 variable in place of a constant is problematic. We compared the original D1 regression model with a mathematically equivalent model with a constant term. Comparisons included implications for the magnitude and statistical significance of the coefficients, multicollinearity (variance inflation factors, or VIFs), number of calculation steps needed to determine tariff values, and consequences for tariff interpretation. Using the D1 variable in place of a constant shifted all dummy variable coefficients away from zero by the value of the constant, greatly increased the multicollinearity of the model (maximum VIF of 113.2 vs. 21.2), and increased the mean number of calculation steps required to determine health state values. Using the D1 variable in place of a constant constitutes an unnecessary complication of the model, obscures the fact that at least two of the main effect dummy variables are statistically nonsignificant, and complicates and biases interpretation of the tariff algorithm.
Ghazikhanlou-Sani, K; Firoozabadi, S M P; Agha-Ghazvini, L; Mahmoodzadeh, H
2016-06-01
There are many ways to assess the electrical conductivity anisotropy of a tumor. Applying the values of tissue electrical conductivity anisotropy is crucial in numerical modeling of the electric and thermal field distribution in electroporation treatments. This study aims to calculate the tissue electrical conductivity anisotropy in patients with sarcoma tumors using the diffusion tensor imaging (DTI) technique. A total of 3 subjects were involved in this study. All patients had clinically apparent sarcoma tumors at the extremities. The T1, T2 and DTI images were acquired using a 3-Tesla multi-coil, multi-channel MRI system. The fractional anisotropy (FA) maps were generated from the DTI images using the FSL (FMRIB Software Library) software. The 3D matrix of the FA maps of each area (tumor, normal soft tissue and bone/s) was reconstructed, and the anisotropy matrix was calculated from the FA values. The mean FA values along the main axis of the sarcoma tumors ranged between 0.475 and 0.690. Under the assumption of isotropic electrical conductivity, the normalized conductivity component along each of the X, Y and Z coordinate axes would be equal to 0.577. The results showed a mean error band of 20% in electrical conductivity if the anisotropy is not included in the calculations. The comparison of FA values showed a statistically significant difference between the mean FA value of tumor and normal soft tissues (P<0.05). DTI is a feasible technique for the assessment of electrical conductivity anisotropy of tissues. It is crucial to quantify the electrical conductivity anisotropy data of tissues for numerical modeling of electroporation treatments.
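The FA values discussed above follow directly from the three diffusion-tensor eigenvalues; a minimal sketch (with illustrative eigenvalues, not the patients' data) is below. Note that for a perfectly isotropic medium the normalized principal-axis components are each 1/√3 ≈ 0.577 (the reference value quoted above), while FA itself is zero.

```python
import math

def fa(l1, l2, l3):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues."""
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(0.5 * num / den)

# Illustrative eigenvalues (units of 1e-3 mm^2/s), not measured data:
fa_iso = fa(1.0, 1.0, 1.0)      # perfectly isotropic medium -> FA = 0
fa_aniso = fa(1.7, 0.3, 0.3)    # strongly oriented tissue   -> FA ~ 0.8
```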
The calculation of the phase equilibrium of the multicomponent hydrocarbon systems
NASA Astrophysics Data System (ADS)
Molchanov, D. A.
2018-01-01
The development of simulations of hydrocarbon mixture filtration processes has led to the use of cubic equations of state of the van der Waals type to describe the thermodynamic properties of natural fluids under real thermobaric conditions. Binary hydrocarbon systems make it possible to qualitatively simulate the fluids of different reservoir types, which allows experimental study of their filtration features. Exploitation of gas-condensate reservoirs shows the possibility of various two-phase filtration regimes, including a self-oscillatory regime, which occurs at certain values of mixture composition, temperature and pressure drop. Plotting the phase diagram of the model mixture is required to determine these values. A software package to calculate the vapor-liquid equilibrium of binary systems using cubic equations of state of the van der Waals type has been created. Phase diagrams of gas-condensate model mixtures have been calculated.
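The core of such a vapor-liquid equilibrium calculation is the Rachford-Rice equation, solved here by bisection for a hypothetical binary mixture; in practice the K-values would come from the cubic equation of state at the given temperature and pressure.

```python
# Hypothetical binary mixture: overall mole fractions z and equilibrium
# K-values (y_i / x_i), illustrative rather than EOS-derived.
z = [0.6, 0.4]
K = [3.0, 0.4]

def rachford_rice(V):
    """Residual of the Rachford-Rice equation at vapor fraction V."""
    return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K))

# Bisection for the vapor fraction V in (0, 1); the residual changes sign there.
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if rachford_rice(lo) * rachford_rice(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
V = 0.5 * (lo + hi)

# Equilibrium phase compositions:
x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]   # liquid
y = [Ki * xi for Ki, xi in zip(K, x)]                       # vapor
```

For these inputs the vapor fraction is V = 0.8, and both phase compositions sum to one. Sweeping temperature and pressure (through the K-values) traces out the phase diagram.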
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2018-03-01
We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.
The general ventilation multipliers calculated by using a standard Near-Field/Far-Field model.
Koivisto, Antti J; Jensen, Alexander C Ø; Koponen, Ismo K
2018-05-01
In conceptual exposure models, the transmission of pollutants in an imperfectly mixed room is usually described with general ventilation multipliers. This is the approach used in the Advanced REACH Tool (ART) and Stoffenmanager® exposure assessment tools. The multipliers used in these tools were reported by Cherrie (1999; http://dx.doi.org/10.1080/104732299302530 ) and Cherrie et al. (2011; http://dx.doi.org/10.1093/annhyg/mer092 ), who developed them by positing input values for a standard Near-Field/Far-Field (NF/FF) model and then calculating ratios between NF and FF concentrations. This study revisited the calculations that produce the multipliers used in ART and Stoffenmanager and found that the recalculated general ventilation multipliers were up to 2.8 times (280%) higher than the values reported by Cherrie (1999), while the recalculated NF and FF multipliers were up to 1.2 times (17%) smaller for 1-hr exposure and up to 1.7 times (41%) smaller for 8-hr exposure than the values reported by Cherrie et al. (2011). Considering that Stoffenmanager and the ART are classified as higher-tier regulatory exposure assessment tools, the errors in the general ventilation multipliers should not be ignored. We recommend revising the general ventilation multipliers. A better solution would be to integrate the NF/FF model into Stoffenmanager and the ART.
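The standard two-box NF/FF model underlying these multipliers can be sketched as a pair of coupled mass balances; all input values below are illustrative, not those of Cherrie (1999).

```python
# Illustrative inputs for the two-box NF/FF model: emission rate S,
# near-field volume V_nf, room volume V_ff, inter-zone airflow beta,
# and general ventilation rate Q.
S = 10.0                  # mg/min emitted in the near field
V_nf, V_ff = 8.0, 100.0   # m^3
beta = 5.0                # m^3/min exchanged between NF and FF
Q = 2.0                   # m^3/min general ventilation

dt, T = 0.01, 480.0       # time step and duration, min (8-h shift)
c_nf = c_ff = 0.0         # concentrations, mg/m^3
sum_nf = sum_ff = 0.0
for _ in range(int(T / dt)):
    # Mass balances of the two boxes, advanced by explicit Euler steps.
    dc_nf = (S + beta * c_ff - beta * c_nf) / V_nf
    dc_ff = (beta * c_nf - beta * c_ff - Q * c_ff) / V_ff
    c_nf += dc_nf * dt
    c_ff += dc_ff * dt
    sum_nf += c_nf * dt
    sum_ff += c_ff * dt

avg_nf, avg_ff = sum_nf / T, sum_ff / T
# Steady state: c_ff -> S/Q and c_nf -> S/Q + S/beta (here 5 and 7 mg/m^3).
```

Ratios of the time-averaged NF and FF concentrations to the perfectly mixed concentration S/Q are the kind of quantities from which the general ventilation multipliers are derived.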
Burst Strength of Tubing and Casing Based on Twin Shear Unified Strength Theory
Lin, Yuanhua; Deng, Kuanhai; Sun, Yongxing; Zeng, Dezhi; Liu, Wanying; Kong, Xiangwei; Singh, Ambrish
2014-01-01
The internal pressure strength of tubing and casing often cannot satisfy the design requirements in high-pressure, high-temperature, high-H2S gas wells. In addition, the practical safety coefficient of some wells is lower than the design standard according to the current API 5C3 standard, which complicates the design. ISO 10400:2007 provides a model that calculates the burst strength of tubing and casing better than the API 5C3 standard, but its accuracy is still not satisfactory, because about 50 percent of the predicted values are remarkably higher than the real burst values. Therefore, to improve the strength design of tubing and casing, this paper derives the plastic limit pressure of tubing and casing under internal pressure by applying the twin shear unified strength theory. Based on a study of how the yield-to-tensile strength ratio and other mechanical properties influence the burst strength of tubing and casing, a more precise calculation model of the burst strength of tubing and casing has been established that accounts for material hardening and the intermediate principal stress. Numerical and experimental comparisons show that the new burst strength model is much closer to the real burst values than other models. The research results provide an important reference for optimizing the tubing and casing design of deep and ultra-deep wells. PMID:25397886
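For orientation, two classical burst-pressure reference points can be computed directly. The geometry and grade below are illustrative, and the formulas are the API Barlow-type internal yield pressure and the von Mises thick-wall plastic limit, not the paper's twin-shear model (which falls in a family of limit solutions between the Tresca and twin-shear bounds).

```python
import math

# Illustrative casing geometry and material (not the paper's test specimens):
D = 0.1397          # outer diameter, m (5-1/2 in casing)
t = 0.00922         # wall thickness, m
Y = 758e6           # minimum yield strength, Pa (P110 grade)

# API 5C3 internal yield pressure (0.875 accounts for wall-thickness tolerance):
p_api = 0.875 * 2.0 * Y * t / D

# Fully plastic limit pressure of a thick-walled cylinder (von Mises criterion):
d = D - 2.0 * t
p_limit = (2.0 / math.sqrt(3.0)) * Y * math.log(D / d)
```

For these numbers p_api ≈ 87.5 MPa and p_limit ≈ 124 MPa; the limit-state value exceeding the Barlow value is consistent with the abstract's observation that limit-based models predict higher burst pressures than API 5C3.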
Enthalpies of Formation of Hydrazine and Its Derivatives.
Dorofeeva, Olga V; Ryzhova, Oxana N; Suchkova, Taisiya A
2017-07-20
Enthalpies of formation, ΔfH°298, in both the gas and condensed phase, and enthalpies of sublimation or vaporization have been estimated for hydrazine, NH2NH2, and 36 of its derivatives using quantum chemical calculations. The composite G4 method has been used along with isodesmic reaction schemes to derive a set of self-consistent, high-accuracy gas-phase enthalpies of formation. To estimate the enthalpies of sublimation and vaporization with reasonable accuracy (5-20 kJ/mol), the method of molecular electrostatic potential (MEP) has been used. The value of ΔfH°298(NH2NH2, g) = 97.0 ± 3.0 kJ/mol was determined from 75 isogyric reactions involving about 50 reference species; for most of these species, accurate ΔfH°298(g) values are available in the Active Thermochemical Tables (ATcT). The calculated value is in excellent agreement with the reported results of the most accurate models based on coupled cluster theory (97.3 kJ/mol, the average of six calculations). Thus, the difference between the values predicted by high-level theoretical calculations and the experimental value of ΔfH°298(NH2NH2, g) = 95.55 ± 0.19 kJ/mol recommended in the ATcT and other comprehensive reference sources is sufficiently large to require further investigation. Various hydrazine derivatives have also been considered in this work. For some of them, both the enthalpy of formation in the condensed phase and the enthalpy of sublimation or vaporization are available; for other compounds, experimental data exist for only one of these properties. Evidence of the accuracy of the experimental data for the first group of compounds was provided by agreement with the theoretical ΔfH°298(g) values. The unknown property for the second group of compounds was predicted using the MEP model. This paper presents a systematic comparison of experimentally determined enthalpies of formation and enthalpies of sublimation or vaporization with the results of calculations.
Because of the relatively large uncertainty in the estimated enthalpies of sublimation, it was not always possible to evaluate the accuracy of the experimental values; however, this model allowed us to detect large errors in the experimental data, as in the case of 5,5'-hydrazinebistetrazole. The enthalpies of formation and enthalpies of sublimation or vaporization have been predicted for the first time for ten hydrazine derivatives with no experimental data. A recommended set of self-consistent experimental and calculated gas-phase enthalpies of formation of hydrazine derivatives can be used as reference ΔfH°298(g) values to predict the enthalpies of formation of various hydrazines by means of isodesmic reactions.
NASA Astrophysics Data System (ADS)
Shah, Amish P.
The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and on linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago and tissue masses derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry, the Paired Image Radiation Transport (PIRT) model, that properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revising reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton.
We conclude that paired-image radiation-transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of the skeletal site and the physical size and shape of the bone can be handled together for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.
Kinematic analysis of crank-cam mechanism of process equipment
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Skeeba, V. Yu; Martynova, T. G.; Pechorkina, N. S.; Skeeba, P. Yu
2018-03-01
This article discusses how to define the kinematic parameters of a crank-cam mechanism. Using the mechanism design, the authors have developed a calculation model and a calculation algorithm that allowed the definition of kinematic parameters of the mechanism, including crank displacements, angular velocities and acceleration, as well as driven link (rocker arm) angular speeds and acceleration. All calculations were performed using the Mathcad mathematical package. The results of the calculations are reported as numerical values.
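Kinematic parameters of this kind can be checked numerically by differentiating a displacement law, as one would tabulate them over a cycle in a Mathcad-style calculation. The sketch below uses a slider-crank displacement law as an illustrative stand-in for the crank-cam mechanism (all dimensions hypothetical).

```python
import math

# Illustrative geometry and drive speed (not the paper's mechanism):
r, l = 0.05, 0.20        # crank radius and rod length, m
omega = 10.0             # constant crank angular speed, rad/s

def x(t):
    """Slider displacement at time t (stand-in displacement law)."""
    th = omega * t
    return r * math.cos(th) + math.sqrt(l ** 2 - (r * math.sin(th)) ** 2)

# Velocity and acceleration by central differences:
dt = 1e-5
def vel(t):
    return (x(t + dt) - x(t - dt)) / (2.0 * dt)

def acc(t):
    return (x(t + dt) - 2.0 * x(t) + x(t - dt)) / dt ** 2

v0, a0 = vel(0.0), acc(0.0)
```

At top dead center the central differences reproduce the analytic values v = 0 and a = -ω²r(1 + r/l) = -6.25 m/s² for these inputs.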
A nephron-based model of the kidneys for macro-to-micro α-particle dosimetry
NASA Astrophysics Data System (ADS)
Hobbs, Robert F.; Song, Hong; Huso, David L.; Sundel, Margaret H.; Sgouros, George
2012-07-01
Targeted α-particle therapy is a promising treatment modality for cancer. Due to the short path-length of α-particles, the potential efficacy and toxicity of these agents are best evaluated by microscale dosimetry calculations instead of whole-organ, absorbed fraction-based dosimetry. Yet time-integrated activity (TIA), the necessary input for dosimetry, can still only be quantified reliably at the organ or macroscopic level. We describe a nephron- and cellular-based kidney dosimetry model for α-particle radiopharmaceutical therapy, better suited to the short range and high linear energy transfer of α-particle emitters, which takes as input kidney or cortex TIA and, through a macro-to-micro model-based methodology, assigns TIA to micro-level kidney substructures. We apply a geometrical model to provide nephron-level S-values for a range of isotopes, allowing for pre-clinical and clinical applications according to the medical internal radiation dosimetry (MIRD) schema. We assume that the relationship between whole-organ TIA and TIA apportioned to microscale substructures as measured in an appropriate pre-clinical mammalian model also applies to the human. In both the pre-clinical and the human model, microscale substructures are described as a collection of simple geometrical shapes akin to those used in the Cristy-Eckerman phantoms for normal organs. Anatomical parameters are taken from the literature for the human model, while murine parameters are measured ex vivo. The murine histological slides also provide the data for the volume of occupancy of the different compartments of the nephron in the kidney: glomerulus versus proximal tubule versus distal tubule. Monte Carlo simulations are run with activity placed in the different nephron compartments for several α-particle emitters currently under investigation in radiopharmaceutical therapy.
The S-values were calculated for the α-emitters and their descendants between the different nephron compartments for both the human and murine models. The renal cortex and medulla S-values were also calculated and the results compared to traditional absorbed fraction calculations. The nephron model enables a more optimal implementation of treatment and is a critical step in understanding toxicity for human translation of targeted α-particle therapy. The S-values established here will enable a MIRD-type application of α-particle dosimetry for α-emitters, i.e. measuring the TIA in the kidney (or renal cortex) will provide meaningful and accurate nephron-level dosimetry.
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory- and production-scale batches. The reference values were obtained by high performance liquid chromatography (HPLC), and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating the tablet potency of two external test sets; the root mean square errors of prediction were 2.0% and 2.7%, respectively. To use this model with a second spectrometer in the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction for the first test set was 2.4%, compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of the PLS residuals was used to estimate confidence intervals of the tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number, and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy-to-use graphical interface was developed to determine whether the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
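A residual-bootstrap confidence interval of the kind described can be sketched as follows; ordinary least squares on synthetic one-factor data stands in for the PLS potency model, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the PLS potency model: ordinary least squares of HPLC potency
# (% of label claim) on a single latent-variable score (all data synthetic).
n = 40
score = rng.uniform(0.0, 1.0, n)
potency = 90.0 + 20.0 * score + rng.normal(0.0, 2.0, n)

X = np.column_stack([np.ones(n), score])
beta = np.linalg.lstsq(X, potency, rcond=None)[0]
resid = potency - X @ beta

# Bootstrap the model residuals to get a confidence interval for the
# predicted potency of a new tablet with score 0.5.
x_new = np.array([1.0, 0.5])
B = 2000
preds = np.empty(B)
for b in range(B):
    y_b = X @ beta + rng.choice(resid, size=n, replace=True)   # resampled response
    beta_b = np.linalg.lstsq(X, y_b, rcond=None)[0]
    preds[b] = x_new @ beta_b + rng.choice(resid)              # add new-observation noise
lo_ci, hi_ci = np.percentile(preds, [2.5, 97.5])
```

A prediction would then be flagged if the interval [lo_ci, hi_ci] fell outside the registered specification limits, which is the role of the graphical interface described above.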
Biermans, Geert; Horemans, Nele; Vanhoudt, Nathalie; Vandenhove, Hildegarde; Saenen, Eline; Van Hees, May; Wannijn, Jean; Vives i Batlle, Jordi; Cuypers, Ann
2014-07-01
There is a need for a better understanding of the biological effects of radiation exposure in non-human biota. Correct description of these effects requires a more detailed model of dosimetry than that available in current risk assessment tools, particularly for plants. In this paper, we propose a simple model for dose calculations in roots and shoots of Arabidopsis thaliana seedlings exposed to radionuclides in a hydroponic exposure setup. This model is used to compare absorbed doses for three radionuclides, (241)Am (α-radiation), (90)Sr (β-radiation) and (133)Ba (γ-radiation). Using established dosimetric calculation methods, dose conversion coefficient (DCC) values were determined for each organ separately based on uptake data from the different plant organs. These calculations were then compared to the DCC values obtained with the ERICA tool under equivalent geometry assumptions. Compared with our new method, the ERICA tool appears to overestimate internal doses and underestimate external doses in the roots for all three radionuclides, though each to a different extent. These observations might help to refine dose-response relationships. The DCC values for (90)Sr in roots are shown to deviate the most. A dose-effect curve for (90)Sr β-radiation has been established on biomass and photosynthesis endpoints, but no significant dose-dependent effects are observed. This indicates the need for endpoints at the molecular and physiological scale. Copyright © 2013 Elsevier Ltd. All rights reserved.
Calculated values of atomic oxygen fluences and solar exposure on selected surfaces of LDEF
NASA Technical Reports Server (NTRS)
Gillis, J. R.; Pippin, H. G.; Bourassa, R. J.; Gruenbaum, P. E.
1995-01-01
Atomic oxygen (AO) fluences and solar exposure have been modeled for selected hardware from the Long Duration Exposure Facility (LDEF). The atomic oxygen exposure was modeled using the microenvironment modeling code SHADOWV2. The solar exposure was modeled using the microenvironment modeling code SOLSHAD version 1.0.
Miller, Robert T.; Delin, G.N.
1994-01-01
A three-dimensional, anisotropic, nonisothermal ground-water-flow and thermal-energy-transport model was constructed to simulate the four short-term test cycles. The model was used to simulate the entire short-term testing period of approximately 400 days. The only model properties varied during model calibration were longitudinal and transverse thermal dispersivities, which, for final calibration, were simulated as 3.3 and 0.33 meters, respectively. The model was calibrated by comparing model-computed results to (1) measured temperatures at selected altitudes in four observation wells, (2) measured temperatures at the production well, and (3) calculated thermal efficiencies of the aquifer. Model-computed withdrawal-water temperatures were within an average of about 3 percent of measured values and model-computed aquifer-thermal efficiencies were within an average of about 5 percent of calculated values for the short-term test cycles. These data indicate that the model accurately simulated thermal-energy storage within the Franconia-Ironton-Galesville aquifer.
A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique
Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie
2015-01-01
Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on an Elekta Synergy linac. Projection data were acquired with high- and low-energy X-rays to set up a basis material decomposition model. Virtual phantom simulations and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were scanned at the high and the low energy, and the data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated from these CBCT images. Results. The difference between the calculated and theoretical values was within 2%, and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and noticeably reduced the beam-hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It can acquire more accurate values and potentially provides a platform for dose calculation. PMID:26346510
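Per ray path, the decomposition step reduces to a small linear system. The sketch below uses illustrative basis-material attenuation coefficients and an assumed linear mapping from basis coefficients to relative electron density, not the actual XVI calibration.

```python
import numpy as np

# Measured log-attenuation at two energies modeled as a linear mix of two
# basis materials (all coefficients and measurements illustrative):
#   p_E = mu_water(E) * a_water + mu_bone(E) * a_bone
M = np.array([[0.20, 0.50],     # [mu_water, mu_bone] at the low  energy, 1/cm
              [0.15, 0.30]])    # [mu_water, mu_bone] at the high energy, 1/cm
p = np.array([0.27, 0.1875])    # projections through one ray path

a = np.linalg.solve(M, p)       # basis-material path coefficients (a_water, a_bone)

# Relative electron density from the basis coefficients, using assumed
# calibration constants for the two basis materials:
rho_e = 1.0 * a[0] + 1.78 * a[1]
```

For these numbers the decomposition recovers a ≈ (0.85, 0.20) and rho_e ≈ 1.206; in the actual method this solve is performed per projection pixel before reconstruction.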
Stochastic optimal operation of reservoirs based on copula functions
NASA Astrophysics Data System (ADS)
Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen
2018-02-01
Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we propose a stochastic optimization model for hydropower generation reservoirs, in which (1) the transition probability matrix is calculated based on copula functions, and (2) the value function of the last period is calculated by stepwise iteration. First, the marginal distribution of stochastic inflow in each period was built, and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
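A transition probability matrix of this kind follows from the conditional distribution of a copula. The sketch below uses a Clayton copula (one member of the Archimedean family) with an illustrative dependence parameter, not one fitted to inflow data.

```python
# Clayton copula C(u, v) = (u**-t + v**-t - 1)**(-1/t) for theta = t > 0.
def clayton_conditional(v, u, theta):
    """P(V <= v | U = u): the partial derivative of C(u, v) with respect to u."""
    if v <= 0.0:
        return 0.0
    s = u ** (-theta) + v ** (-theta) - 1.0
    return u ** (-theta - 1.0) * s ** (-1.0 / theta - 1.0)

theta = 2.0   # illustrative dependence parameter
# Discretize this period's and the next period's inflow into three quantile
# classes and build the transition probability matrix between the classes.
edges = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
u_mid = [1.0 / 6.0, 0.5, 5.0 / 6.0]   # representative quantile of each current class
P = [[clayton_conditional(edges[j + 1], u, theta) - clayton_conditional(edges[j], u, theta)
      for j in range(3)]
     for u in u_mid]
```

Each row of P sums to one, and with positive dependence a high-inflow period is more likely to be followed by another high-inflow period (P[2][2] > P[0][2]); mapping quantiles back through the fitted marginals gives the physical inflow classes.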
de Salles, Alvaro A; Bulla, Giovani; Rodriguez, Claudio E Fernández
2006-01-01
The Specific Absorption Rate (SAR) produced by mobile phones in the head of adults and children is simulated using an algorithm based on the Finite Difference Time Domain (FDTD) method. Realistic models of the child and adult head are used, and the electromagnetic parameters are fitted to these models. Comparisons are also made with the SAR calculated in the child model when using adult electromagnetic parameter values. Microstrip (or patch) antennas and quarter-wavelength monopole antennas are used in the simulations. The frequencies used to feed the antennas are 1850 MHz and 850 MHz. The SAR results are compared with the available international recommendations. It is shown that, under similar conditions, the 1g-SAR calculated for children is higher than that for adults. When using the 10-year-old child model, SAR values more than 60% higher than those for adults are obtained.
Ionosphere Profile Estimation Using Ionosonde & GPS Data in an Inverse Refraction Calculation
NASA Astrophysics Data System (ADS)
Psiaki, M. L.
2014-12-01
A method has been developed to assimilate ionosonde virtual heights and GPS slant TEC data to estimate the parameters of a local ionosphere model, including estimates of the topside and of latitude and longitude variations. This effort seeks to better assimilate a variety of remote sensing data in order to characterize local (and eventually regional and global) ionosphere electron density profiles. The core calculations involve a forward refractive ray-tracing solution and a nonlinear optimal estimation algorithm that inverts the forward model. The ray-tracing calculations solve a nonlinear two-point boundary value problem for the curved ionosonde or GPS ray path through a parameterized electron density profile. They implement a full 3D solution that can handle the case of a tilted ionosphere. These calculations use Hamiltonian equivalents of the Appleton-Hartree magneto-plasma refraction index model. The current ionosphere parameterization is a modified Booker profile, augmented to include latitude and longitude dependencies. The forward ray-tracing solution yields a given signal's group delay and beat carrier phase observables. An auxiliary set of boundary value problem solutions determines the sensitivities of the ray paths and observables with respect to the parameters of the augmented Booker profile. The nonlinear estimation algorithm compares the measured ionosonde virtual-altitude observables and GPS slant-TEC observables to the corresponding values from the forward refraction model. It uses the parameter sensitivities of the model to iteratively improve its parameter estimates in a way that reduces the residual errors between the measurements and their modeled values. This method has been applied to data from HAARP in Gakona, AK and has produced good TEC and virtual height fits.
It has been extended to characterize electron density perturbations caused by HAARP heating experiments through the use of GPS slant TEC data for an LOS through the heated zone. The next planned extension of the method is to estimate the parameters of a regional ionosphere profile. The input observables will be slant TEC from an array of GPS receivers and group delay and carrier phase observables from an array of high-frequency beacons. The beacon array will function as a sort of multi-static ionosonde.
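The iterative inversion described above can be sketched as a Gauss-Newton update: compare measured and modeled observables, then use the parameter sensitivities to reduce the residuals. The interface below is a hypothetical placeholder; the paper's forward model is the 3D ray-tracing boundary-value solver, not the callables used here.

```python
import numpy as np

def gauss_newton(params, measured, forward_model, sensitivities, n_iter=20):
    """Iteratively refine profile parameters so modeled observables
    match the measurements. 'forward_model' maps parameters to
    observables; 'sensitivities' returns the Jacobian
    d(observable)/d(parameter)."""
    p = np.asarray(params, dtype=float)
    for _ in range(n_iter):
        residual = measured - forward_model(p)   # measurement minus model
        J = sensitivities(p)
        # least-squares update: solve J dp ~= residual
        dp, *_ = np.linalg.lstsq(J, residual, rcond=None)
        p = p + dp
    return p
```

For a linear forward model this converges in one step; the nonlinear ray-tracing case simply repeats the linearized solve.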
Bayesian Regression of Thermodynamic Models of Redox Active Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Katherine
Finding a suitable functional redox material is a critical challenge to achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes fitting the model parameters to the data, comparing the model using different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting, and one of a material for thermal storage. Using Bayesian inference and Markov chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, although there were some deviations at the edges of the data input ranges. The evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary, and comparing evidence values demonstrated that the parameter was needed on one data set and not significantly helpful on another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).
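The MCMC parameter estimation described above can be illustrated with a minimal random-walk Metropolis sampler. This standalone sketch is not the UQTk implementation the project actually used.

```python
import numpy as np

def metropolis(log_post, p0, step, n_samples, rng):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, exp(log_post(prop) - log_post(current)))."""
    p = np.asarray(p0, dtype=float)
    lp = log_post(p)
    samples = []
    for _ in range(n_samples):
        prop = p + step * rng.standard_normal(p.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            p, lp = prop, lp_prop
        samples.append(p.copy())
    return np.array(samples)
```

The chain of samples approximates the posterior; parameter estimates and uncertainties (as for the entropy above) follow from sample statistics.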
NASA Astrophysics Data System (ADS)
Morozov, A. N.
2017-11-01
The article reviews the possibility of describing physical time as a random Poisson process. An equation is proposed that allows the intensity of physical time fluctuations to be calculated as a function of the entropy production density in irreversible natural processes. Based on the standard solar model, the work calculates the entropy production density inside the Sun and the dependence of the intensity of physical time fluctuations on the distance from the centre of the Sun. A free model parameter has been established, and a method for its evaluation has been suggested. The calculations of the entropy production density inside the Sun showed that it differs by 2-3 orders of magnitude in different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, as a function of the entropy production density during the conversion of sunlight to the Earth's thermal radiation, has been theoretically predicted. A method for evaluating the Kullback measure of voltage fluctuations in small amounts of electrolyte has been proposed. Using a simple model of heat transfer from the Earth's surface to the upper atmosphere, the effective temperature of the Earth's thermal radiation has been determined. A comparison between the theoretical values of the Kullback measure derived from the fluctuating physical time model and the experimentally measured values for two independent electrolytic cells showed good qualitative and quantitative agreement between the theoretical predictions and the experimental data.
Dynamic performance of a suspended reinforced concrete footbridge under pedestrian movements
NASA Astrophysics Data System (ADS)
Drygala, I.; Dulinska, J.; Kondrat, K.
2018-02-01
In this paper, a dynamic analysis of a suspended reinforced concrete footbridge over a national road in southern Poland was carried out. First, the natural frequencies and mode shapes of the structure were calculated. The numerical modal investigation showed that the natural frequencies of the structure coincided with the step frequencies of pedestrians walking fast or running. Hence, to assess the comfort standards, the dynamic response of the footbridge to a running pedestrian had to be calculated. Second, the dynamic response of the footbridge was calculated using two models of the dynamic forces produced by a single running pedestrian: a ‘sine’ and a ‘half-sine’ model. The accelerations and displacements obtained with the ‘half-sine’ force model were up to 20% greater than those obtained with the ‘sine’ model. The ‘sine’ model is appropriate only for walking users, because the forces they produce are continuous in nature; for running users it is unsuitable, since the forces produced by a runner are discontinuous. For this loading scenario, the ‘half-sine’ model proved more appropriate. Finally, the comfort conditions for the footbridge were evaluated. The analysis showed that the vertical comfort criteria were not exceeded for a single user running or walking fast.
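The two pedestrian load models can be sketched as below. The pedestrian weight, dynamic load factor, impact factor, step frequency, and contact ratio are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def sine_force(t, G=700.0, alpha=0.4, f=3.0):
    """Continuous 'sine' load model (suited to walking): static weight
    G [N] plus a harmonic component of relative amplitude alpha at
    step frequency f [Hz]."""
    return G * (1.0 + alpha * np.sin(2 * np.pi * f * t))

def half_sine_force(t, G=700.0, k=2.0, f=3.0, contact=0.5):
    """Discontinuous 'half-sine' model (suited to running): half-sine
    impulses of peak k*G while the foot is in contact, zero during the
    flight phase. 'contact' is the contact fraction of the step period."""
    phase = (t * f) % 1.0
    pulse = np.where(phase < contact,
                     np.sin(np.pi * phase / contact), 0.0)
    return k * G * pulse
```

The half-sine pulse train has zero force between footfalls, which is the discontinuity the abstract attributes to running.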
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model combines a grey prediction model with a Markov chain and shows clear advantages for data sequences that are non-stationary and volatile. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the observed values lying in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
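A generic form of a central-point triangular whitenization weight function might look like the sketch below: membership peaks at the centre of a state and falls linearly to zero at the adjacent state centres. The exact construction in the paper may differ.

```python
def triangular_weight(x, c_left, c_center, c_right):
    """Possibility that value x belongs to the state centred at
    c_center, with membership 1 at the centre and 0 at the centres of
    the neighbouring states (c_left < c_center < c_right)."""
    if c_left < x <= c_center:
        return (x - c_left) / (c_center - c_left)
    if c_center < x < c_right:
        return (c_right - x) / (c_right - c_center)
    return 0.0
```

Evaluating this weight for every state gives the objective preference degrees used in place of hard subjective state boundaries.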
NASA Astrophysics Data System (ADS)
Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.
2018-07-01
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
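The high-throughput pattern described above, every calculation method applied across all potential/prototype combinations with results keyed for later cross-method comparison, can be sketched as follows. The callables and naming here are placeholders, not the framework's actual API.

```python
from itertools import product

def run_matrix(potentials, prototypes, methods):
    """Apply every method to every (potential, prototype) pair and
    key the results so that values computed by different methods for
    the same system can be compared afterwards."""
    results = {}
    for pot, proto, method in product(potentials, prototypes, methods):
        results[(pot, proto, method.__name__)] = method(pot, proto)
    return results
```

Comparing entries that share `(pot, proto)` but differ in method is exactly the sensitivity analysis the abstract describes.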
Synthetic Seismogram Calculations for Two-Dimensional Velocity Models.
1983-05-20
vertical and radial component displacements. The seismograms have been convolved with a seismograph response function corresponding to a short period...phase velocity is a measure of the degree of numerical dispersion present in the calculation for a variety of grid spacings. The value of 1/G of 0.1...method is an approximate technique and is somewhat restricted in its application, its efficiency and accuracy make it suitable for routine modeling of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erbrink, H.J.; Scholten, D.A.
1995-10-01
Atmospheric turbulence intensities and timescales have been measured for one year and modeled in a shoreline environment. Measurements were carried out at two sites, on both sides of the shoreline, about 10 km from the beach. The frequency distribution of Pasquill stability classes [determined using the COPS (calculation of Pasquill stability) method by analyzing wind fluctuations] was compared with similar observations at the land site. A shift to the more stable classes is observed. Moreover, a large shift to the stable classes was shown at the overwater site for the COPS method in comparison to the Hanna et al. and Hsu method, which is based upon z/L values originating from sea-air temperature differences. The observed values of the lateral wind fluctuations σ_v and the stability classes for the dataset could not be adequately described in terms of local parameters such as Obukhov length L and friction velocity u. Therefore, a simple model, which explicitly considered advection and dissipation in the turbulent kinetic energy equation, was formulated for the calculation of turbulence intensity at sea for offshore wind flows; dissipation is assumed to scale well with σ_v^3/(u T_ℓ). Using this model, observed σ_v values were explained remarkably well. It is concluded that atmospheric turbulence intensity above sea in the vicinity of the shoreline is strongly influenced by horizontal gradients and cannot be described successfully in terms of local parameterization only. The calculated values of the Eulerian timescale correlate quite well with measured values of σ_v/u at both sites. However, values of the timescale at sea were larger; this may be caused by differences in roughness length between land and sea. 43 refs., 11 figs., 3 tabs.
Bozkurt, Hayriye; D'Souza, Doris H; Davidson, P Michael
2015-07-01
Human noroviruses (HNoV) and hepatitis A virus (HAV) have been implicated in outbreaks linked to the consumption of presliced ready-to-eat deli meats. The objectives of this research were to determine the thermal inactivation kinetics of HNoV surrogates (murine norovirus 1 [MNV-1] and feline calicivirus strain F9 [FCV-F9]) and HAV in turkey deli meat, compare first-order and Weibull models to describe the data, and calculate Arrhenius activation energy values for each model. The D (decimal reduction time) values in the temperature range of 50 to 72°C calculated from the first-order model were 0.1 ± 0.0 to 9.9 ± 3.9 min for FCV-F9, 0.2 ± 0.0 to 21.0 ± 0.8 min for MNV-1, and 1.0 ± 0.1 to 42.0 ± 5.6 min for HAV. Using the Weibull model, the tD = 1 (time to destroy 1 log) values for FCV-F9, MNV-1, and HAV at the same temperatures ranged from 0.1 ± 0.0 to 11.9 ± 5.1 min, from 0.3 ± 0.1 to 17.8 ± 1.8 min, and from 0.6 ± 0.3 to 25.9 ± 3.7 min, respectively. The z (thermal resistance) values for FCV-F9, MNV-1, and HAV were 11.3 ± 2.1°C, 11.0 ± 1.6°C, and 13.4 ± 2.6°C, respectively, using the Weibull model. The z values using the first-order model were 11.9 ± 1.0°C, 10.9 ± 1.3°C, and 12.8 ± 1.7°C for FCV-F9, MNV-1, and HAV, respectively. For the Weibull model, estimated activation energies for FCV-F9, MNV-1, and HAV were 214 ± 28, 242 ± 36, and 154 ± 19 kJ/mole, respectively, while the calculated activation energies for the first-order model were 181 ± 16, 196 ± 5, and 167 ± 9 kJ/mole, respectively. Precise information on the thermal inactivation of HNoV surrogates and HAV in turkey deli meat was generated. This provided calculations of parameters for more-reliable thermal processes to inactivate viruses in contaminated presliced ready-to-eat deli meats and thus to reduce the risk of foodborne illness outbreaks. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
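The reported D and z values follow from standard first-order thermal inactivation kinetics; a minimal sketch of those calculations (not the authors' code):

```python
import numpy as np

def d_value(time_min, log_reduction):
    """Decimal reduction time from first-order kinetics:
    D = t / (log10 N0 - log10 N), i.e. minutes per 1-log kill."""
    return time_min / log_reduction

def z_value(temps_c, d_values):
    """z-value: the temperature rise giving a tenfold drop in D,
    obtained as the negative reciprocal slope of log10(D) vs T."""
    slope, _ = np.polyfit(temps_c, np.log10(d_values), 1)
    return -1.0 / slope
```

Given D values at several temperatures (as in the 50 to 72°C range above), `z_value` recovers the thermal resistance constant reported for each virus.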
Monte Carlo calculations of k{sub Q}, the beam quality conversion factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B. R.; Rogers, D. W. O.
2010-11-15
Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k_Q, for 32 cylindrical ionization chambers over a range of beam qualities, and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k_Q. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or from manufacturers' user's manuals. The dose to air in the chamber is calculated with the EGSnrc user code egs_chamber using 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber, at the midpoint of the chamber on its central axis. Using a simple equation, k_Q is calculated from these quantities under the assumption that W/e is constant with energy, and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k_Q factors as a function of beam quality, expressed as %dd(10)x and TPR20,10, are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k_Q values are analyzed and amount to a maximum one-standard-deviation uncertainty of 0.99% if photon cross-section uncertainties are assumed uncorrelated, and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the photon cross sections for water. Conclusions: It is now possible to calculate k_Q directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values. 
Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more accurate than previously used values. For small ionization chambers with central electrodes composed of high-Z materials, the effect of the central electrode is much larger than that for the aluminum electrodes in Farmer chambers.
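Under the stated assumption that W/e is constant with energy, the "simple equation" for k_Q reduces to a ratio of Monte Carlo dose ratios at the user quality Q and at the Co-60 reference quality; a one-function sketch (the input values in the test are illustrative, not published results):

```python
def k_q(dw_q, dch_q, dw_co, dch_co):
    """Beam quality conversion factor from Monte Carlo doses, with W/e
    taken constant: k_Q = (D_water/D_chamber)_Q / (D_water/D_chamber)_Co60.
    Inputs are the calculated dose to water (no chamber present) and the
    dose to the chamber air cavity at each beam quality."""
    return (dw_q / dch_q) / (dw_co / dch_co)
```

Per-chamber polynomial fits of k_q against %dd(10)x or TPR20,10 then let clinics interpolate to their own beam quality.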
Hart, F X
1990-01-01
The current-density distribution produced inside irregularly shaped, homogeneous human and rat models by low-frequency electric fields is obtained by a two-stage finite-difference procedure. In the first stage the model is assumed to be equipotential. Laplace's equation is solved by iteration in the external region to obtain the capacitive-current densities at the model's surface elements. These values then provide the boundary conditions for the second-stage relaxation solution, which yields the internal current-density distribution. Calculations were performed with the Excel spreadsheet program on a Macintosh II microcomputer. A spreadsheet is a two-dimensional array of cells, each of which can represent a square element of space. Equations relating the values of the cells can represent the relationships between the potentials in the corresponding spatial elements. Extension to three dimensions is readily made. Good agreement was obtained with current densities measured on human models with both, one, or no legs grounded and on rat models in four different grounding configurations. The results also compared well with predictions of more sophisticated numerical analyses. Spreadsheets can provide an inexpensive and relatively simple means to perform good approximate dosimetric calculations on irregularly shaped objects.
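The cell-averaging update a spreadsheet performs for Laplace's equation can be sketched as a Jacobi relaxation: each interior cell becomes the average of its four neighbours while boundary cells stay fixed. This is an illustrative reimplementation, not the authors' Excel sheet.

```python
import numpy as np

def relax_laplace(grid, mask, n_iter=2000):
    """Jacobi relaxation for Laplace's equation on a 2D grid.
    'mask' marks cells whose values are fixed boundary conditions;
    all other cells are repeatedly replaced by the mean of their
    four nearest neighbours."""
    g = grid.astype(float).copy()
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                      np.roll(g, 1, 1) + np.roll(g, -1, 1))
        g = np.where(mask, grid, avg)   # keep fixed cells, relax the rest
    return g
```

In a spreadsheet, the same update is a single cell formula like `=(B1+B3+A2+C2)/4` copied across the interior region, with iterative recalculation enabled.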
Laenen, Antonius; Hansen, R.P.
1988-01-01
A one-dimensional, unsteady-state, open-channel model was used to analytically reproduce three lahar events. Factors contributing to the success of the modeling were: (1) the lahars were confined to a channel, (2) channel roughness was defined by field information, and (3) the volume of the flow remained relatively unchanged for the duration of the peak. Manning's 'n' values used in computing conveyance in the model were subject to the changing rheology of the debris flow and were calculated from field cross-section information (velocities used in these calculations were derived from super-elevation or run-up formulas). For the events modeled in this exercise, Manning's 'n' calculations ranged from 0.020 to 0.099. In all lahar simulations, the rheology of the flow changed in a downstream direction during the course of the event. Chen's 'U', the mudflow consistency index, changed by approximately an order of magnitude for each event. The 'U' values ranged from 5 to 2,260 kg/m for the three events modeled. The empirical approach adopted in this paper is useful as a tool to help predict debris-flow behavior, but does not lead to understanding of the physical processes of debris flows. (Author's abstract)
A Permeability Study of O2 and the Trace Amine p-Tyramine through Model Phosphatidylcholine Bilayers
Holland, Bryan W.; Berry, Mark D.; Gray, C. G.; Tomberli, Bruno
2015-01-01
We study here the permeability of the hydrophobic O2 molecule through a model DPPC bilayer at 323 K and 350 K, and of the trace amine p-tyramine through PC bilayers at 310 K. The tyramine results are compared to previous experimental work at 298 K. Nonequilibrium work methods were used to simultaneously obtain both the potential of mean force (PMF) and the position-dependent transmembrane diffusion coefficient, D(z), from the simulations. These in turn were used to calculate the permeability coefficient, P, through the inhomogeneous solubility-diffusion model. The results for O2 are consistent with previous simulations and agree with experimentally measured P values for PC bilayers. A temperature dependence in the permeability of O2 through DPPC was obtained, with P decreasing at higher temperatures. Two relevant species of p-tyramine were simulated, from which the PMF and D(z) were calculated. The charged species had a large energetic barrier to crossing the bilayer of ~21 kcal/mol, while the uncharged, deprotonated species had a much lower barrier of ~7 kcal/mol. The effective in silico permeability for p-tyramine was calculated by applying three approximations, all of which gave nearly identical results (presented here as a function of the pKa). As the permeability value calculated from simulation was highly dependent on the pKa of the amine group, a further pKa study was performed that also varied the fractions of the uncharged and zwitterionic p-tyramine species. Using the experimental P value together with the simulated results, we were able to assign the phenolic group as responsible for pKa1 and the amine for pKa2, which together represent all of the experimentally measured pKa values for p-tyramine. This agrees with older experimental results, in contrast to more recent work that has suggested there is a strong ambiguity in the pKa values. PMID:26086933
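The permeability coefficient from the inhomogeneous solubility-diffusion model can be sketched numerically: the membrane resistance is the integral of exp(PMF(z)/RT)/D(z) across the bilayer, and P is its reciprocal. The unit choices and the flat test profile are this sketch's assumptions, not the paper's code.

```python
import numpy as np

def permeability(z, pmf_kcal, D, T=310.0):
    """Inhomogeneous solubility-diffusion model:
    1/P = integral of exp(PMF(z)/RT) / D(z) dz across the bilayer.
    Units assumed: z in cm, PMF in kcal/mol, D in cm^2/s -> P in cm/s."""
    RT = 1.9872e-3 * T                        # gas constant, kcal/(mol K)
    f = np.exp(np.asarray(pmf_kcal) / RT) / np.asarray(D)
    resistance = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoid rule
    return 1.0 / resistance
```

A high PMF barrier (as for charged tyramine) dominates the integrand exponentially, which is why the charged species is effectively impermeant.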
[Colorimetric characterization of LCD based on wavelength partition spectral model].
Liu, Hao-Xue; Cui, Gui-Hua; Huang, Min; Wu, Bing; Xu, Yan-Fang; Luo, Ming
2013-10-01
To establish a colorimetric characterization model of LCDs, an experiment with EIZO CG19, IBM 19, DELL 19, and HP 19 LCDs was designed and carried out to test the interaction between the RGB channels and the spectral additivity of the displays. The RGB digital values of single channels and pairs of channels were given and the corresponding tristimulus values measured; the data were then plotted and analyzed to test the independence of the RGB channels. The results showed that the interaction between channels was reasonably weak and that the spectral additivity property held well. We also found that the relations between radiance and digital value varied with wavelength, i.e., they are functions of wavelength. A new calculation method based on a piecewise spectral model is proposed and tested, in which the relation between radiance and digital value is fitted by a cubic polynomial in each wavelength band using measured spectral radiance curves. In this way, the spectral radiance curves of the RGB primaries at any digital values can be obtained from only a few measurements and the fitted cubic polynomials, and any displayed color can then be derived through the spectral additivity of the primaries at the given digital values. The algorithm of this method is discussed in detail. The computations showed that the proposed method is simple and greatly reduces the number of measurements needed while keeping a very high computational precision. This method can be used as a colorimetric characterization model.
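The per-band cubic fitting at the heart of the piecewise spectral model can be sketched as below; one such fit is made per wavelength band per channel, and the sample data in the test are synthetic.

```python
import numpy as np

def fit_band(dv, radiance):
    """Fit a cubic polynomial relating digital value to spectral
    radiance for one wavelength band of one channel."""
    return np.polyfit(dv, radiance, 3)

def predict_band(coeffs, dv):
    """Predicted radiance of the band at an arbitrary digital value."""
    return np.polyval(coeffs, dv)
```

Summing the predicted radiance curves of the R, G, and B primaries at the given digital values (the additivity property verified above) then yields the spectrum of any displayed color.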
Ferrero, L; Mocnik, G; Ferrini, B S; Perrone, M G; Sangiorgi, G; Bolzacchini, E
2011-06-15
Vertical profiles of aerosol number-size distribution and black carbon (BC) concentration were measured between ground-level and 500m AGL over Milan. A tethered balloon was fitted with an instrumentation package consisting of the newly-developed micro-Aethalometer (microAeth® Model AE51, Magee Scientific, USA), an optical particle counter, and a portable meteorological station. At the same time, PM(2.5) samples were collected both at ground-level and at a high altitude sampling site, enabling particle chemical composition to be determined. Vertical profiles and PM(2.5) data were collected both within and above the mixing layer. Absorption coefficient (b(abs)) profiles were calculated from the Aethalometer data: in order to do so, an optical enhancement factor (C), accounting for multiple light-scattering within the filter of the new microAeth® Model AE51, was determined for the first time. The value of this parameter C (2.05±0.03 at λ=880nm) was calculated by comparing the Aethalometer attenuation coefficient and aerosol optical properties determined from OPC data along vertical profiles. Mie calculations were applied to the OPC number-size distribution data, and the aerosol refractive index was calculated using the effective medium approximation applied to aerosol chemical composition. The results compare well with AERONET data. The BC and b(abs) profiles showed a sharp decrease at the mixing height (MH), and fairly constant values of b(abs) and BC were found above the MH, representing 17±2% of those values measured within the mixing layer. The BC fraction of aerosol volume was found to be lower above the MH: 48±8% of the corresponding ground-level values. 
A statistical mean profile was calculated, both for BC and b(abs), to better describe their behaviour; the model enabled us to compute their average behaviour as a function of height, thus laying the foundations for valid parametrizations of vertical profile data which can be useful in both remote sensing and climatic studies. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Aqra, Fathi; Ayyad, Ahmed
2011-09-01
An improved theoretical method for calculating the surface tension of liquid metals is proposed. A recently derived equation, based on statistical thermodynamics, that allows an accurate estimate of surface tension for a large number of elements is used as a means of calculating reliable surface tension values for pure liquid alkali, alkaline earth, and main group metals at the melting point. To test the validity of the model, the surface tension of liquid lithium was calculated in the temperature range 454 K to 1300 K (181 °C to 1027 °C), where the calculated surface tension values follow the straight-line behavior γ = 441 - 0.15(T - Tm) mJ m^-2. The calculated surface excess entropy of liquid Li (-dγ/dT) was found to be 0.15 mJ m^-2 K^-1, which agrees well with the reported experimental value (0.147 mJ m^-2 K^-1). Moreover, the relations of the calculated surface tension of alkali metals to atomic radius, heat of fusion, and specific heat capacity are described. The results are in excellent agreement with the existing experimental data.
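The reported linear fit for lithium can be evaluated directly (valid over the stated 454 K to 1300 K range):

```python
def surface_tension_li(T, Tm=454.0):
    """Surface tension of liquid lithium from the reported fit:
    gamma(T) = 441 - 0.15*(T - Tm) in mJ/m^2, Tm = 454 K."""
    return 441.0 - 0.15 * (T - Tm)
```

The coefficient 0.15 mJ m^-2 K^-1 is the surface excess entropy -dγ/dT quoted in the abstract.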
Temsch, W; Luger, A; Riedl, M
2008-01-01
This article presents a mathematical model to calculate HbA1c values based on self-measured blood glucose and past HbA1c levels, thereby enabling patients to monitor diabetes therapy between scheduled checkups. This method could help physicians to make treatment decisions if implemented in a system where glucose data are transferred to a remote server. The method cannot, however, replace HbA1c measurements; past HbA1c values are needed to calibrate the method. The mathematical model of HbA1c formation was developed from biochemical principles. Unlike an existing HbA1c formula, the new model respects the decreasing contribution of older glucose levels to current HbA1c values. About 12 standard SQL statements embedded in a PHP program were used to perform the Fourier transform. Regression analysis was used to calibrate the results against previous HbA1c values. The method can be readily implemented in any SQL database. The predicted HbA1c values thus obtained were in accordance with measured values. They also matched the results of the HbA1c formula in the elevated range. By contrast, the formula was too "optimistic" in the range of better glycemic control. Individual analysis of two subjects improved the accuracy of the values and reflected the bias introduced by different glucometers and individual measurement habits.
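As a hedged illustration of the model's key idea, that older glucose readings contribute less to the current HbA1c, one can form an exponentially weighted mean glucose and map it to HbA1c. The decay constant and the use of the ADAG linear relation eAG(mg/dL) = 28.7 × A1c − 46.7 are this sketch's assumptions, not the paper's actual model.

```python
import math

def estimate_hba1c(readings, half_life_days=35.0):
    """Exponentially weighted mean glucose (recent readings weigh
    more), inverted through the ADAG relation to an HbA1c estimate.
    'readings' is a list of (age_in_days, glucose_mg_dl) pairs."""
    num = den = 0.0
    for age_days, glucose in readings:
        w = math.exp(-math.log(2) * age_days / half_life_days)  # decay weight
        num += w * glucose
        den += w
    mean_glucose = num / den
    return (mean_glucose + 46.7) / 28.7
```

In the paper's setting, the weighting would be calibrated per patient against past laboratory HbA1c values rather than fixed a priori.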
Predicting Microstructure and Microsegregation in Multicomponent Aluminum Alloys
NASA Astrophysics Data System (ADS)
Yan, Xinyan; Ding, Ling; Chen, ShuangLin; Xie, Fanyou; Chu, M.; Chang, Y. Austin
Accurate predictions of microstructure and microsegregation in metallic alloys are highly important for applications such as alloy design and process optimization. Restrictive assumptions concerning the phase diagram can easily lead to erroneous predictions; the best approach is to couple microsegregation modeling with phase diagram computations. A newly developed numerical model for predicting microstructure and microsegregation in multicomponent alloys during dendritic solidification was introduced. The micromodel is directly coupled with phase diagram calculations using a user-friendly and robust phase diagram calculation engine, PANDAT. Solid-state back diffusion, undercooling, and coarsening effects are included in this model, and experimentally measured cooling curves are used as inputs to carry out the calculations. This model has been used to predict the microstructure and microsegregation in two multicomponent aluminum alloys, 2219 and 7050. The calculated values were confirmed using results obtained from directional solidification.
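As a baseline for the microsegregation such a model computes, the Scheil-Gulliver limiting case can be written down directly; the paper's micromodel extends this with back diffusion, undercooling, coarsening, and full phase-diagram coupling, so the one-liner below is only the textbook limit.

```python
def scheil_cs(c0, k, fs):
    """Scheil-Gulliver solid composition at fraction solid fs
    (no solid-state diffusion, complete liquid mixing):
    Cs = k * C0 * (1 - fs)**(k - 1), with partition coefficient k
    and nominal alloy composition C0."""
    return k * c0 * (1.0 - fs) ** (k - 1.0)
```

For k < 1 the last liquid to freeze is strongly enriched in solute, which is the interdendritic segregation the coupled model quantifies for alloys 2219 and 7050.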
Magnetic properties of single crystal alpha-benzoin oxime: An EPR study
NASA Astrophysics Data System (ADS)
Sayin, Ulku; Dereli, Ömer; Türkkan, Ercan; Ozmen, Ayhan
2012-02-01
The electron paramagnetic resonance (EPR) spectra of gamma-irradiated single crystals of alpha-benzoin oxime (ABO) have been examined between 120 and 440 K. From the dependence of the single-crystal spectra on temperature and on orientation in the magnetic field, we identified two different radicals formed in irradiated ABO single crystals. To determine the types of radicals theoretically, the most stable structure of ABO was obtained by molecular mechanics and B3LYP/6-31G(d,p) calculations. Four possible radicals were modeled, and EPR parameters were calculated for them using the B3LYP method with the TZVP basis set. The calculated values for two of the modeled radicals were in strong agreement with the experimental EPR parameters determined from the spectra. In addition, simulated spectra of these radicals, using the calculated hyperfine coupling constants as starting points, matched the experimental spectra well.
NASA Technical Reports Server (NTRS)
Jenkins, J. M.
1979-01-01
Additional information was added to a growing database from which estimates of finite element model complexity can be made with respect to thermal stress analysis. The manner in which temperatures were smeared to the finite element grid points was examined from the point of view of its impact on thermal stress calculations. The general comparison of calculated and measured thermal stresses is quite good, and there is little doubt that the finite element approach provided by NASTRAN results in correct thermal stress calculations. Discrepancies did exist between measured and calculated values in the skin and the skin/frame junctures. The problems with predicting skin thermal stress were attributed to inadequate temperature inputs to the structural model rather than to modeling insufficiencies. The discrepancies occurring at the skin/frame juncture were most likely due to insufficient modeling elements rather than temperature problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, M., E-mail: manuel.rodriguez@rmp.uhn.ca; Rogers, D. W. O.
Purpose: To more accurately account for the relative intrinsic energy dependence and relative absorbed-dose energy dependence of TLDs when used to measure dose rate constants (DRCs) for ^125I and ^103Pd brachytherapy seeds, to thereby establish revised "measured values" for all seeds and compare the revised values with Monte Carlo and consensus values. Methods: The relative absorbed-dose energy dependence, f^rel, for TLDs and the phantom correction, P_phant, are calculated for ^125I and ^103Pd seeds using the EGSnrc BrachyDose and DOSXYZnrc codes. The original energy dependence and phantom corrections applied to DRC measurements are replaced by calculated (f^rel)^-1 and P_phant values for 24 different seed models. By comparing the modified measured DRCs to the MC values, an appropriate relative intrinsic energy dependence, k_bq^rel, is determined. The new P_phant values and relative absorbed-dose sensitivities, S_AD^rel, calculated as the product of (f^rel)^-1 and (k_bq^rel)^-1, are used to individually revise the measured DRCs for comparison with Monte Carlo calculated values and TG-43U1 or TG-43U1S1 consensus values. Results: In general, f^rel is sensitive to the energy spectra and models of the brachytherapy seeds. Values may vary by up to 8.4% among ^125I and ^103Pd seed models and common TLD shapes. P_phant values depend primarily on the isotope used. Deduced (k_bq^rel)^-1 values are 1.074 ± 0.015 and 1.084 ± 0.026 for ^125I and ^103Pd seeds, respectively. For (1 mm)^3 chips, this implies an overall absorbed-dose sensitivity relative to ^60Co or 6 MV calibrations of 1.51 ± 1% and 1.47 ± 2% for ^125I and ^103Pd seeds, respectively, as opposed to the widely used value of 1.41.
Values of P_phant calculated here have much lower statistical uncertainties than literature values, but systematic uncertainties from density and composition uncertainties are significant. Using these revised values with the literature's DRC measurements, the average discrepancies between revised measured values and Monte Carlo values are 1.2% and 0.2% for ^125I and ^103Pd seeds, respectively, compared to average discrepancies of 4.8% for the original measured values. On average, the revised measured values are 4.3% and 5.9% lower than the original measured values for ^103Pd and ^125I seeds, respectively. The average of revised DRCs and Monte Carlo values is 3.8% and 2.8% lower for ^125I and ^103Pd seeds, respectively, than the consensus values in TG-43U1 or TG-43U1S1. Conclusions: This work shows that f^rel is TLD-shape and seed-model dependent, suggesting a need to update the generalized energy response dependence, i.e., relative absorbed-dose sensitivity, measured 25 years ago and often applied to DRC measurements of ^125I and ^103Pd brachytherapy seeds. The intrinsic energy dependence for LiF TLDs deduced here is consistent with previous dosimetry studies and emphasizes the need to revise the DRC consensus values reported in TG-43U1 and TG-43U1S1.
Organic Model of Interstellar Grains
NASA Astrophysics Data System (ADS)
Yabushita, S.; Inagaki, T.; Kawabe, T.; Wada, K.
1987-04-01
Extinction efficiency of grains is calculated from the Mie formula on the premise that the grains are of organic composition. The optical constants adopted for the calculations are those of E. coli, polystyrene and bovine albumin. The grain radius a is assumed to obey a distribution of the form N(a) ∝ a^(-α), and the value of α is chosen so as to make the calculated extinction curve match the observed interstellar extinction curve. Although the calculated curve gives a reasonably good fit to the observed extinction curve for wavelengths less than 2100 Å, in longer wavelength regions the agreement is poor. It is concluded that another component is required for the organic model to be viable.
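The α-selection step described above can be sketched numerically. In this minimal illustration, both the extinction efficiency and the "observed" curve are toy stand-ins (the paper uses the Mie formula with the measured optical constants of E. coli, polystyrene and bovine albumin); only the fitting logic is shown.

```python
import math

def total_extinction(w, alpha, q_ext, a_min=0.005, a_max=0.5, n=200):
    """Size-integrated extinction for N(a) ∝ a^-alpha (midpoint rule);
    q_ext(a, w) is the extinction efficiency at radius a, wavelength w."""
    da = (a_max - a_min) / n
    total = 0.0
    for i in range(n):
        a = a_min + (i + 0.5) * da
        total += q_ext(a, w) * math.pi * a ** 2 * a ** (-alpha) * da
    return total

def best_alpha(observed, wavelengths, q_ext, alphas):
    """Scan candidate alphas; compare curve shapes after normalizing at the
    first wavelength, and return the alpha with minimum RMS misfit."""
    best_a, best_rms = None, None
    for alpha in alphas:
        calc = [total_extinction(w, alpha, q_ext) for w in wavelengths]
        scale = observed[0] / calc[0]
        rms = math.sqrt(sum((scale * c - o) ** 2
                            for c, o in zip(calc, observed)) / len(observed))
        if best_rms is None or rms < best_rms:
            best_a, best_rms = alpha, rms
    return best_a

# Toy (hypothetical) efficiency: ramps with the size parameter, capped at 2
def toy_q(a, w):
    return min(2.0, 2.0 * math.pi * a / w)
```

Scanning α against a curve generated with α = 3.5 recovers that exponent, confirming the fit logic.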
A figure of merit for AMTEC electrodes
NASA Technical Reports Server (NTRS)
Underwood, M. L.; Williams, R. M.; Jeffries-Nakamura, B.; Ryan, M. A.
1991-01-01
As a method to compare the results of alkali metal thermoelectric converter (AMTEC) electrode performance measured under different conditions, an AMTEC figure of merit called ZA is proposed. This figure of merit is the ratio of the experimental maximum power for an electrode to a calculated maximum power density as determined from a recently published electrode performance model. The calculation of a maximum power density assumes that certain loss terms in the electrode can be reduced to essentially zero by improved cell design and construction, and that the electrochemical exchange current is determined from a standard value. Other losses in the electrode are considered inherent to the electrode performance. Thus, these terms remain in the determination of the calculated maximum power. A value of ZA near one, then, indicates an electrode performance near the maximum possible performance. The primary limitation of this calculation is that the small electrode effect cannot be included. This effect leads to anomalously high values of ZA. Thus, the electrode area should be reported along with the figure of merit.
Vortex Rings Generated by a Shrouded Hartmann-Sprenger Tube
NASA Technical Reports Server (NTRS)
DeLoof, Richard L. (Technical Monitor); Wilson, Jack
2005-01-01
The pulsed flow emitted from a shrouded Hartmann-Sprenger tube was sampled with high-frequency pressure transducers and with laser particle imaging velocimetry, and found to consist of a train of vortices. Thrust and mass flow were also monitored using a thrust plate and orifice, respectively. The tube and shroud lengths were altered to give four different operating frequencies. From the data, the radius, velocity, and circulation of the vortex rings were obtained. Each frequency corresponded to a different length-to-diameter ratio of the pulse of air leaving the driver shroud. Two of the frequencies had length-to-diameter ratios below the formation number, and two above. The formation number is the value of the length-to-diameter ratio below which the pulse converts to a vortex ring only, and above which the pulse becomes a vortex ring plus a trailing jet. A modified version of the slug model of vortex ring formation was used to compare the observations with calculated values. Because the flow exit area is an annulus, vorticity is shed at both the inner and outer edges of the jet. This results in a reduced circulation compared with the value calculated from slug theory accounting only for the outer edge. If the value of circulation obtained from laser particle imaging velocimetry is used in the slug model calculation of vortex ring velocity, the agreement is quite good. The vortex ring radius, which does not depend on the circulation, agrees well with predictions from the slug model.
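The basic slug-model bookkeeping can be sketched as follows: a slug of length L ejected at uniform speed U carries circulation Γ = UL/2, and a thin-core ring's translational speed follows Kelvin's classical formula. This is a minimal sketch only; the annular-exit correction described in the abstract is not included, and the numbers in the example (ring radius R, core radius a, pulse speed and length) are hypothetical.

```python
import math

def slug_circulation(U, L):
    """Slug-model circulation for a pulse of uniform exit velocity U and length L."""
    return 0.5 * U * L

def ring_velocity(gamma, R, a):
    """Kelvin's thin-core formula for the translational speed of a vortex ring
    of radius R, core radius a, and circulation gamma."""
    return gamma / (4.0 * math.pi * R) * (math.log(8.0 * R / a) - 0.25)

# Hypothetical example: 100 m/s pulse, 0.2 m slug, 5 cm ring with 5 mm core
gamma = slug_circulation(100.0, 0.2)
u_ring = ring_velocity(gamma, 0.05, 0.005)
```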
NASA Astrophysics Data System (ADS)
Miyake, Shugo; Matsui, Genzou; Ohta, Hiromichi; Hatori, Kimihito; Taguchi, Kohei; Yamamoto, Suguru
2017-07-01
Thermal microscopy is a useful technique for investigating the spatial distribution of the thermal transport properties of various materials. However, for high-thermal-effusivity materials, the values of thermophysical parameters estimated with the conventional 1D heat flow model are known to be higher than literature values. Here, we present a new procedure that solves this problem by calculating the theoretical temperature response with a 3D heat flow model and by measuring reference materials with known thermal effusivity and heat capacity. In general, a complicated numerical iterative method and many thermophysical parameters are required for the calculation in the 3D heat flow model. We therefore devised a simple procedure using a molybdenum (Mo) thin film with low thermal conductivity on the sample surface, enabling us to measure over a wide thermal effusivity range for various materials.
Precision half-life measurement of 11C: The most precise mirror transition Ft value
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.
2018-03-01
Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. 11C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world average half-life allows the calculation of an Ft^mirror value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow for the determination of V_ud from this decay.
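A world average like the one quoted is typically an inverse-variance weighted mean, with the uncertainty inflated by a scale factor when the measurements scatter more than their quoted errors allow (the PDG prescription). The abstract does not list the earlier half-life values, so the example below uses generic placeholder numbers; only the averaging logic is shown.

```python
import math

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean; the returned uncertainty is scaled by
    sqrt(chi2/dof) when chi2 exceeds its expectation (PDG-style)."""
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    sigma = 1.0 / math.sqrt(sum(w))
    chi2 = sum(wi * (x - mean) ** 2 for wi, x in zip(w, values))
    dof = len(values) - 1
    scale = math.sqrt(chi2 / dof) if dof > 0 and chi2 > dof else 1.0
    return mean, sigma * scale
```

With the new 1220.27(26) s point and the previous measurements as inputs, this routine would reproduce the kind of combination that yields the quoted 1220.41(32) s world average.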
2007-03-01
Column experiments were used to obtain model parameters. Cost data used in the model were based on conventional GAC installations, as modified to...
NASA Technical Reports Server (NTRS)
Maples, A. L.
1981-01-01
The operation of solidification Model 2 is described and documentation of the software associated with the model is provided. Model 2 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of unsteady horizontal axisymmetric bidirectional solidification. The solidification program allows interactive modification of calculation parameters as well as selection of graphical and tabular output. In batch mode, parameter values are input in card image form and output consists of printed tables of solidification functions. The operational aspects of Model 2 that differ substantially from Model 1 are described. The global flow diagrams and data structures of Model 2 are included. The primary program documentation is the code itself.
Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R
2016-04-01
The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg*hr/L), was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Model of the final borehole geometry for helical laser drilling
NASA Astrophysics Data System (ADS)
Kroschel, Alexander; Michalowski, Andreas; Graf, Thomas
2018-05-01
A model for predicting the borehole geometry for laser drilling is presented based on the calculation of a surface of constant absorbed fluence. It is applicable to helical drilling of through-holes with ultrashort laser pulses. The threshold fluence describing the borehole surface is fitted for best agreement with experimental data in the form of cross-sections of through-holes of different shapes and sizes in stainless steel samples. The fitted value is similar to ablation threshold fluence values reported for laser ablation models.
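The threshold-fluence idea, that the borehole wall lies where the absorbed fluence equals a fixed threshold, can be illustrated for the simplest case of a single Gaussian beam, where the ablated radius follows Liu's relation. This is only a single-pulse sketch; the paper's model accumulates absorbed fluence over the full helical drilling trajectory.

```python
import math

def ablation_radius(F0, Fth, w0):
    """Liu's relation: radius at which a Gaussian beam's fluence
    F0*exp(-2 r^2 / w0^2) falls to the ablation threshold Fth.
    Returns 0 when the peak fluence is below threshold."""
    if F0 <= Fth:
        return 0.0
    return w0 * math.sqrt(0.5 * math.log(F0 / Fth))
```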
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borovsky, J.E.
1998-05-01
In this report, several lightning-channel parameters are calculated with the aid of an electrodynamic model of lightning. The electrodynamic model describes dart leaders and return strokes as electromagnetic waves that are guided along conducting lightning channels. According to the model, electrostatic energy is delivered to the channel by a leader, where it is stored around the outside of the channel; subsequently, the return stroke dissipates this locally stored energy. In this report this lightning-energy-flow scenario is developed further. Then the energy dissipated per unit length in lightning channels is calculated, where this quantity is now related to the linear charge density on the channel, not to the cloud-to-ground electrostatic potential difference. Energy conservation is then used to calculate the radii of lightning channels: their initial radii at the onset of return strokes and their final radii after the channels have pressure expanded. Finally, the risetimes for channel heating during return strokes are calculated by defining an energy-storage radius around the channel and by estimating the radial velocity of energy flow toward the channel during a return stroke. In three appendices, values for the linear charge densities on lightning channels are calculated, estimates of the total length of branch channels are obtained, and values for the cloud-to-ground electrostatic potential difference are estimated. © 1998 American Geophysical Union
Distributed activation energy model parameters of some Turkish coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunes, M.; Gunes, S.K.
2008-07-01
A multi-reaction model based on a distributed activation energy has been applied to some Turkish coals. The kinetic parameters of the distributed activation energy model were calculated via a computer program developed for this purpose. It was observed that the mean of the activation energy distribution varies between 218 and 248 kJ/mol, and the standard deviation of the activation energy distribution varies between 32 and 70 kJ/mol. The correlations between the kinetic parameters of the distributed activation energy model and certain properties of the coals have been investigated.
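The distributed activation energy model (DAEM) treats devolatilization as a continuum of parallel first-order reactions whose activation energies follow a Gaussian distribution. A minimal isothermal sketch is below; real coal fits are usually non-isothermal (constant heating rate), and the pre-exponential factor k0 here is a hypothetical illustrative value, while E0 = 230 kJ/mol and σ = 50 kJ/mol fall within the 218-248 and 32-70 kJ/mol ranges reported above.

```python
import math

R_GAS = 8.314  # J/(mol K)

def daem_unreacted(t, T, k0, E0, sigma, n=400):
    """Unreacted fraction for an isothermal hold at temperature T under the
    Gaussian DAEM: integrate exp(-k(E)*t) over the activation-energy
    distribution (trapezoidal rule over E0 +/- 4 sigma)."""
    lo, hi = E0 - 4.0 * sigma, E0 + 4.0 * sigma
    dE = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        E = lo + i * dE
        f = math.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        k = k0 * math.exp(-E / (R_GAS * T))
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * f * math.exp(-k * t) * dE
    return total
```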
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
How much complexity is warranted in a rainfall-runoff model?
A.J. Jakeman; G.M. Hornberger
1993-01-01
Development of mathematical models relating the precipitation incident upon a catchment to the streamflow emanating from the catchment has been a major focus of surface water hydrology for decades. Generally, values for parameters in such models must be selected so that runoff calculated from the model "matches" recorded runoff from some historical period....
77 FR 74421 - Approval and Promulgation of Air Quality Implementation Plans for PM2.5
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... calculation of future year PM 2.5 design values using the SMAT assumptions contained in the modeled guidance... components. Future PM 2.5 design values at specified monitoring sites were estimated by adding the future... nonattainment area, all future site-specific PM 2.5 design values were below the concentration specified in the...
Activation Energies of Fragmentations of Disaccharides by Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Kuki, Ákos; Nagy, Lajos; Szabó, Katalin E.; Antal, Borbála; Zsuga, Miklós; Kéki, Sándor
2014-03-01
A simple multiple-collision model for collision-induced dissociation (CID) in a quadrupole collision cell was applied to estimate the activation energy (E0) of the fragmentation processes for lithiated and trifluoroacetated disaccharides, such as maltose, cellobiose, isomaltose, gentiobiose, and trehalose. The internal-energy-dependent rate constants k(Eint) were calculated using the Rice-Ramsperger-Kassel-Marcus (RRKM) or the Rice-Ramsperger-Kassel (RRK) theory. The E0 values were estimated by fitting the calculated survival yield (SY) curves to the experimental ones. The calculated E0 values of the fragmentation processes for lithiated disaccharides were in the range of 1.4-1.7 eV and were found to increase in the order trehalose < maltose < isomaltose < cellobiose < gentiobiose.
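The survival-yield logic can be sketched with the classical RRK expression, k(E) = ν((E - E0)/E)^(s-1), combined with an exponential decay over the ion's residence time. The frequency factor ν, oscillator count s, and residence time τ below are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def rrk_rate(E, E0, nu=1e13, s=10):
    """Classical RRK rate constant (1/s) for internal energy E (eV) above
    threshold E0 (eV); nu and s are illustrative values."""
    if E <= E0:
        return 0.0
    return nu * ((E - E0) / E) ** (s - 1)

def survival_yield(E, E0, tau=50e-6, nu=1e13, s=10):
    """Fraction of precursor ions surviving a residence time tau:
    SY = exp(-k(E) * tau)."""
    return math.exp(-rrk_rate(E, E0, nu, s) * tau)
```

Fitting E0 then amounts to shifting the modeled SY(E) curve until it overlays the experimental one.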
Nuclear isomerism in 100Sn neighbors
NASA Astrophysics Data System (ADS)
Ishii, M.; Ishii, T.; Makishima, A.; Ogawa, M.; Momoki, G.; Ogawa, K.
1995-01-01
Data on B(E2), B(M1) and B(E1) were obtained from lifetime measurements in 103,105,107In, 105-108Sn and 109Sb. These data helped us to assign nuclear configurations to the states involved. The experimental B(E2) and B(M1) values in the Sn isotopes served as a litmus test of the wave functions calculated on the basis of the shell model. The present calculation gave a qualitative description of M1 transitions in the Sn isotopes but has not yet succeeded in a quantitative estimation of B(M1). Calculated B(E2) values were far from reality, since 100Sn was assumed there to be inert against excitation.
Effective Inflow Conditions for Turbulence Models in Aerodynamic Calculations
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.; Rumsey, Christopher L.
2007-01-01
The selection of inflow values at boundaries far upstream of an aircraft is considered, for one- and two-equation turbulence models. Inflow values are distinguished from the ambient values near the aircraft, which may be much smaller. Ambient values should be selected first, and inflow values that will lead to them after the decay second; this is not always possible, especially for the time scale. The two-equation decay during the approach to the aircraft is shown; often, the time scale has been set too short for this decay to be calculated accurately on typical grids. A simple remedy for both issues is to impose floor values for the turbulence variables, outside the viscous sublayer, and it is argued that overriding the equations in this manner is physically justified. Selecting laminar ambient values is easy if the boundary layers are to be tripped, but a more common practice is to seek ambient values that will cause immediate transition in boundary layers. This opens up a wide range of values, and selection criteria are discussed. The turbulent Reynolds number, or ratio of eddy viscosity to laminar viscosity, has a huge dynamic range that makes it unwieldy; it has been widely misused, particularly by codes that set upper limits on it. The value of turbulent kinetic energy in a wind tunnel or the atmosphere is also of dubious value as an input to the model. Concretely, the ambient eddy viscosity must be small enough to preserve potential cores in small geometry features, such as flap gaps. The ambient frequency scale should also be small enough, compared with shear rates in the boundary layer. Specific values are recommended and demonstrated for airfoil flows.
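The free decay of the two-equation variables between the inflow boundary and the aircraft has a closed form for the standard k-ω model, which makes it easy to check whether chosen inflow values survive the approach. The coefficients β* = 0.09 and β = 0.0708 below are the standard Wilcox values, assumed here for illustration; the models discussed in the paper may use different constants.

```python
def freestream_decay(k0, omega0, t, beta_star=0.09, beta=0.0708):
    """Closed-form free decay of the standard k-omega model:
    dk/dt = -beta_star*k*omega, domega/dt = -beta*omega**2, giving
    omega = omega0/g and k = k0*g**(-beta_star/beta) with
    g = 1 + beta*omega0*t."""
    g = 1.0 + beta * omega0 * t
    return k0 * g ** (-beta_star / beta), omega0 / g

# Example: decay over one convective time unit from hypothetical inflow values
k1, w1 = freestream_decay(1.0, 100.0, 1.0)
```

Note that the eddy-viscosity-like ratio k/ω also decays, which is the mechanism by which ambient values near the aircraft end up far below the inflow values.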
NASA Astrophysics Data System (ADS)
Lepore, Simone; Polkowski, Marcin; Grad, Marek
2018-02-01
The P-wave velocities (Vp) within the East European Craton in Poland are well known from several seismic experiments, which permitted building a high-resolution 3D model down to 60 km depth. However, these seismic data do not provide sufficient information about the S-wave velocities (Vs). For this reason, this paper presents the values of lithospheric Vs and P-to-S-wave velocity ratios (Vp/Vs) calculated from the ambient noise recorded during 2014 at the "13 BB star" seismic array (13 stations, 78 midpoints) located in northern Poland. The 3D Vp model in the area of the array consists of six sedimentary layers having a total thickness within 3-7 km and Vp in the range 1.8-5.3 km/s, a three-layer crystalline crust of total thickness 40 km and Vp within 6.15-7.15 km/s, and the uppermost mantle, where Vp is about 8.25 km/s. The Vs and Vp/Vs values are calculated by inversion of the surface-wave dispersion curves extracted from the noise cross correlation between all station pairs. Due to the strong velocity differences among the layers, several modes are recognized in the 0.02-1 Hz frequency band; therefore, multimodal Monte Carlo inversions are applied. The calculated Vs and Vp/Vs values in the sedimentary cover range within 0.99-2.66 km/s and 1.75-1.97, as expected. In the upper crust, the Vs value (3.48 ± 0.10 km/s) is very low compared to the starting value of 3.75 ± 0.10 km/s. Consequently, the Vp/Vs value is very large (1.81 ± 0.03). To explain this, the calculated values are compared with those for other old cratonic areas.
Automated forward mechanical modeling of wrinkle ridges on Mars
NASA Astrophysics Data System (ADS)
Nahm, Amanda; Peterson, Samuel
2016-04-01
One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on the model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometries [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites.
For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files was created (Suite 2) after the best-fit model from Suite 1 was determined, in which fault parameters were varied with a smaller range and incremental changes, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter shows a reduced RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with the more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
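The RMS-based grid search over pre-defined parameter sets can be sketched as follows. The Gaussian ridge here is a hypothetical stand-in for the Coulomb dislocation forward model, and the parameters (amplitude, width) do not correspond to the paper's D, δ, t, B; only the search-and-misfit logic mirrors the workflow above.

```python
import math

def ridge_profile(x, amp, w):
    """Hypothetical forward model: a Gaussian ridge of amplitude amp, width w."""
    return amp * math.exp(-(x / w) ** 2)

def rms_misfit(xs, obs, amp, w):
    """RMS difference between modeled and observed topography along a profile."""
    return math.sqrt(sum((ridge_profile(x, amp, w) - o) ** 2
                         for x, o in zip(xs, obs)) / len(xs))

def grid_search(xs, obs, amps, widths):
    """Exhaustive scan; return (rms, amp, width) with minimum misfit."""
    return min((rms_misfit(xs, obs, a, w), a, w)
               for a in amps for w in widths)

# Synthetic "observed" profile, then recover its parameters from the grid
xs = [i * 1000.0 for i in range(-10, 11)]
obs = [ridge_profile(x, 120.0, 3000.0) for x in xs]
best = grid_search(xs, obs, [80.0, 100.0, 120.0, 140.0],
                   [2000.0, 3000.0, 4000.0])
```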
Calculation of effective transport properties of partially saturated gas diffusion layers
NASA Astrophysics Data System (ADS)
Bednarek, Tomasz; Tsotridis, Georgios
2017-02-01
A large number of currently available Computational Fluid Dynamics numerical models of Polymer Electrolyte Membrane Fuel Cells (PEMFC) are based on the assumption that porous structures are mainly considered as thin and homogenous layers, hence the mass transport equations in structures such as Gas Diffusion Layers (GDL) are usually modelled according to the Darcy assumptions. Application of homogenous models implies that the effects of porous structures are taken into consideration via the effective transport properties of porosity, tortuosity, permeability (or flow resistance), diffusivity, electric and thermal conductivity. Therefore, reliable values of those effective properties of GDL play a significant role for PEMFC modelling when employing Computational Fluid Dynamics, since these parameters are required as input values for performing the numerical calculations. The objective of the current study is to calculate the effective transport properties of GDL, namely gas permeability, diffusivity and thermal conductivity, as a function of liquid water saturation by using the Lattice-Boltzmann approach. The study proposes a method of uniform water impregnation of the GDL based on the "Fine-Mist" assumption by taking into account the surface tension of water droplets and the actual shape of GDL pores.
Galactic dual population models of gamma-ray bursts
NASA Technical Reports Server (NTRS)
Higdon, J. C.; Lingenfelter, R. E.
1994-01-01
We investigate in more detail the properties of two-population models for gamma-ray bursts in the galactic disk and halo. We calculate the gamma-ray burst statistical properties ⟨V/Vmax⟩, ⟨cos Θ⟩, and ⟨sin² b⟩ as functions of the detection flux threshold for bursts coming from both Galactic disk and massive halo populations. We consider halo models inferred from the observational constraints on the large-scale Galactic structure, and we compare the expected values of ⟨V/Vmax⟩, ⟨cos Θ⟩, and ⟨sin² b⟩ with those measured by the Burst and Transient Source Experiment (BATSE) and other detectors. We find that the measured values are consistent with solely Galactic populations having a range of halo distributions, mixed with local disk distributions, which can account for as much as approximately 25% of the observed BATSE bursts. M31 does not contribute to these modeled bursts. We also demonstrate, contrary to recent arguments, that the size-frequency distributions of dual population models are quite consistent with the BATSE observations.
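For a homogeneous burst population in static Euclidean space, ⟨V/Vmax⟩ tends to 0.5, and disk or halo models are diagnosed by their departures from that baseline. A Monte Carlo sketch of the Euclidean limit is below, assuming standard-candle sources, inverse-square dimming, and a sharp flux threshold (the paper's models weight this by nonuniform disk and halo density distributions).

```python
import random
import statistics

def mean_v_vmax(n, seed=42):
    """Monte Carlo <V/Vmax> for a homogeneous Euclidean population:
    source distances drawn uniformly in volume inside the maximum
    detectable radius; V/Vmax = (Flim/F)^(3/2)."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        r = rng.random() ** (1.0 / 3.0)   # r/rmax, uniform in volume
        flux_ratio = r ** 2               # Flim/F under inverse-square dimming
        vals.append(flux_ratio ** 1.5)    # V/Vmax
    return statistics.mean(vals)
```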
Numerical prediction of Pelton turbine efficiency
NASA Astrophysics Data System (ADS)
Jošt, D.; Mežnar, P.; Lipej, A.
2010-08-01
This paper presents a numerical analysis of flow in a two-jet Pelton turbine with a horizontal axis. The analysis was done for the model at several operating points in different operating regimes, and the results were compared to the results of a model test. The analysis was performed using the ANSYS CFX-12.1 computer code with a k-ω SST turbulence model. Free-surface flow was modelled by a two-phase homogeneous model. First, a steady-state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided data on flow energy losses in the distributor and on the shape and velocity of the jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones; consequently, the calculated turbine efficiency is also smaller than the measured values, by about 4%. The shape of the efficiency diagram conforms well to the measurements.
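The final efficiency step, converting the time-averaged shaft torque into an efficiency, is the ratio of shaft power to the hydraulic power available to the jets. A minimal sketch with hypothetical model-scale numbers (the torque, speed, discharge, and head below are not from the paper):

```python
def pelton_efficiency(torque, omega, rho, g, Q, H):
    """Turbine efficiency: averaged shaft power T*omega divided by the
    hydraulic power rho*g*Q*H delivered by the flow."""
    return torque * omega / (rho * g * Q * H)

# Hypothetical example: 500 N*m at 75 rad/s, 0.05 m^3/s under 100 m head
eta = pelton_efficiency(500.0, 75.0, 1000.0, 9.81, 0.05, 100.0)
```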
NASA Astrophysics Data System (ADS)
Filho, Edilson B. A.; Moraes, Ingrid A.; Weber, Karen C.; Rocha, Gerd B.; Vasconcellos, Mário L. A. A.
2012-08-01
Morita-Baylis-Hillman adducts (MBHA) have recently been synthesized and bio-evaluated by our research group against Leishmania amazonensis, the parasite that causes cutaneous and mucocutaneous leishmaniasis. We present here a theoretical conformational study of thirty-two leishmanicidal MBHA by B3LYP/6-31+g(d) calculations with the Polarized Continuum Model (PCM) to simulate the influence of water. Intramolecular hydrogen bonds (IHBs) appear to control most of the conformational preferences of the MBHA. Quantum Theory of Atoms in Molecules (QTAIM) calculations were able to characterize these interactions at the bond critical point level. Compounds presenting an unusual seven-membered IHB between the NO2 group and the hydroxyl moiety, supported by experimental spectroscopic data, showed a considerable improvement in biological activity (lower IC50 values). These results are in accordance with the redox NO2 mechanism of action. Based on structural observations, some molecular descriptors were calculated and submitted to Quantitative Structure-Activity Relationship (QSAR) studies through the PLS regression method. These studies provided a model with good validation parameter values (R2 = 0.71, Q2 = 0.61 and Qext2 = 0.92).
Time-dependent lethal body residues for the toxicity of pentachlorobenzene to Hyalella azteca
Landrum, Peter F.; Steevens, Jeffery A.; Gossiaux, Duane C.; McElroy, Michael; Robinson, Sander; Begnoche, Linda; Chernyak, Sergei; Hickey, James
2004-01-01
The study examined the temporal response of Hyalella azteca to pentachlorobenzene (PCBZ) in water-only exposures. Toxicity was evaluated by calculating the body residue of PCBZ associated with survival. The concentration of PCBZ in the tissues of H. azteca associated with 50% mortality decreased from 3 to 0.5 μmol/g as exposure duration increased from 1 to 28 d. No significant difference was observed in the body residue calculated for 50% mortality whether the value was determined using live or dead organisms. Metabolism of PCBZ was not responsible for the temporal response, because no detectable PCBZ biotransformation occurred over an exposure period of 10 d. A damage assessment model was used to evaluate the impact and repair of damage by PCBZ on H. azteca, and the toxicokinetics were determined so that the temporal toxicity data could be fitted to the model. The half-life calculated for the elimination of PCBZ averaged approximately 49 h, while the half-life of damage repair from the damage assessment model was 33 h.
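The elimination half-life quoted above maps directly to a first-order rate constant, k = ln 2 / t½. A small sketch of that relation (the concentration values are illustrative, not the study's data):

```python
import math

def elimination_rate(half_life_h):
    """First-order elimination rate constant (1/h) from the half-life."""
    return math.log(2) / half_life_h

def body_residue(c0, half_life_h, t_h):
    """Residual body concentration after t hours of first-order depuration:
    C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-elimination_rate(half_life_h) * t_h)

# With the reported ~49 h half-life, two half-lives (98 h) leave
# one quarter of the starting residue (here a hypothetical 4 umol/g).
residue = body_residue(4.0, 49.0, 98.0)   # ~1.0 umol/g
```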
The effects of rigid motions on elastic network model force constants.
Lezon, Timothy R
2012-04-01
Elastic network models provide an efficient way to quickly calculate protein global dynamics from experimentally determined structures. The model's single parameter, its force constant, determines the physical extent of equilibrium fluctuations. Force constant values can be calculated by fitting to experimental data, but the results depend on the type of experimental data used. Here, we investigate the differences between force constants fitted to NMR data and those fitted to X-ray structures. We find that X-ray B factors carry the signature of rigid-body motions, to the extent that B factors can be almost entirely accounted for by rigid motions alone. When fitting to more refined anisotropic temperature factors, the contributions of rigid motions are significantly reduced, indicating that their large contribution to B factors is a result of over-fitting. No correlation is found between force constants fit to NMR data and those fit to X-ray data, possibly due to the inability of NMR data to accurately capture protein dynamics. Copyright © 2011 Wiley Periodicals, Inc.
Electrostatic effects in unfolded staphylococcal nuclease
Fitzkee, Nicholas C.; García-Moreno E, Bertrand
2008-01-01
Structure-based calculations of pK a values and electrostatic free energies of proteins assume that electrostatic effects in the unfolded state are negligible. In light of experimental evidence showing that this assumption is invalid for many proteins, and with increasing awareness that the unfolded state is more structured and compact than previously thought, a detailed examination of electrostatic effects in unfolded proteins is warranted. Here we address this issue with structure-based calculations of electrostatic interactions in unfolded staphylococcal nuclease. The approach involves the generation of ensembles of structures representing the unfolded state, and calculation of Coulomb energies to Boltzmann weight the unfolded state ensembles. Four different structural models of the unfolded state were tested. Experimental proton binding data measured with a variant of nuclease that is unfolded under native conditions were used to establish the validity of the calculations. These calculations suggest that weak Coulomb interactions are an unavoidable property of unfolded proteins. At neutral pH, the interactions are too weak to organize the unfolded state; however, at extreme pH values, where the protein has a significant net charge, the combined action of a large number of weak repulsive interactions can lead to the expansion of the unfolded state. The calculated pK a values of ionizable groups in the unfolded state are similar but not identical to the values in small peptides in water. These studies suggest that the accuracy of structure-based calculations of electrostatic contributions to stability cannot be improved unless electrostatic effects in the unfolded state are calculated explicitly. PMID:18227429
The calculation of neutron capture gamma-ray yields for space shielding applications
NASA Technical Reports Server (NTRS)
Yost, K. J.
1972-01-01
The application of nuclear models to the calculation of neutron capture and inelastic scattering gamma yields is discussed. The gamma ray cascade model describes the cascade process in terms of parameters which either: (1) embody statistical assumptions regarding electric and magnetic multipole transition strengths, level densities, and spin and parity distributions or (2) are fixed by experiment such as measured energies, spin and parity values, and transition probabilities for low lying states.
Optical model analyses of galactic cosmic ray fragmentation in hydrogen targets
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.
1993-01-01
Quantum-mechanical optical model methods for calculating cross sections for the fragmentation of galactic cosmic ray nuclei by hydrogen targets are presented. The fragmentation cross sections are calculated with an abrasion-ablation collision formalism. Elemental and isotopic cross sections are estimated and compared with measured values for neon, sulfur, and calcium ions at incident energies between 400A MeV and 910A MeV. Good agreement between theory and experiment is obtained.
NASA Astrophysics Data System (ADS)
Li, Jian; Du, Bin; Wang, Feipeng; Yao, Wei; Yao, Shuhan
2016-02-01
Nanoparticles can generate charge-carrier trapping and reduce the velocity of streamer development in insulating oils, ultimately enhancing the breakdown voltage of the oil. Vegetable insulating oil-based nanofluids with three sizes of monodispersed Fe3O4 nanoparticles were prepared, and their trapping depths were measured by the thermally stimulated current (TSC) method. It is found that nanoparticle surfactant polarization can significantly influence the trapping depth of vegetable insulating oil-based nanofluids. A nanoparticle polarization model considering surfactant polarization was proposed to calculate the trapping depth of the nanofluids at different nanoparticle sizes and surfactant thicknesses. The calculated values of the model are in fairly good agreement with the experimental values.
Reproducibility of structural strength and stiffness for graphite-epoxy aircraft spoilers
NASA Technical Reports Server (NTRS)
Howell, W. E.; Reese, C. D.
1978-01-01
Structural strength reproducibility of graphite-epoxy composite spoilers for the Boeing 737 aircraft was evaluated by statically loading fifteen spoilers to failure under conditions simulating aerodynamic loads. Spoiler strength and stiffness data were statistically modeled using a two-parameter Weibull distribution function. Shape parameter values calculated for the composite spoiler strength and stiffness were within the range of corresponding shape parameter values calculated for material property data of composite laminates. This agreement showed that the reproducibility of full-scale component structural properties was within the reproducibility range of data from material property tests.
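The two-parameter Weibull fit mentioned above can be sketched with median-rank regression, one common estimator for strength data (the paper does not state which fitting method it used, so this is only an assumption):

```python
import math

def weibull_fit(data):
    """Estimate the two-parameter Weibull shape (beta) and scale (eta) by
    median-rank regression on the linearized CDF:
    ln(-ln(1 - F)) = beta * ln(x) - beta * ln(eta)."""
    xs = sorted(data)
    n = len(xs)
    # Median-rank plotting positions F_i = (i - 0.3) / (n + 0.4), 1-based i
    pts = [(math.log(x), math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))))
           for i, x in enumerate(xs)]
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    beta = (sum((px - mx) * (py - my) for px, py in pts)
            / sum((px - mx) ** 2 for px, _ in pts))
    eta = math.exp(mx - my / beta)   # intercept = -beta * ln(eta)
    return beta, eta
```

With real strength data the estimates are approximate; a higher shape parameter indicates tighter reproducibility (less scatter) in the failure loads.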
NASA Astrophysics Data System (ADS)
Maréchal, F.; Suomijärvi, T.; Blumenfeld, Y.; Azhari, A.; Bazin, D.; Brown, J. A.; Cottle, P. D.; Fauerbach, M.; Glasmacher, T.; Hirzebruch, S. E.; Jewell, J. K.; Kemper, K. W.; Mantica, P. F.; Morrissey, D. J.; Riley, L. A.; Scarpaci, J. A.; Steiner, M.
1998-12-01
We have recently studied the structure of the neutron rich sulfur isotope 40S by using elastic and inelastic proton scattering in inverse kinematics. Optical potential and folding model calculations are compared with the elastic and inelastic angular distributions. Using coupled-channel calculations, the β2 value for the 21+ excited state is determined to be 0.35±0.05. The extracted value of Mn/Mp ratio indicates a small isovector contribution to the 21+ state of 40S. The microscopic analysis of the data is compatible with the presence of a neutron skin for this nucleus.
Comparison of in situ uranium KD values with a laboratory determined surface complexation model
Curtis, G.P.; Fox, P.; Kohler, M.; Davis, J.A.
2004-01-01
Reactive solute transport simulations in groundwater require a large number of parameters to describe hydrologic and chemical reaction processes. Appropriate methods for determining chemical reaction parameters required for reactive solute transport simulations are still under investigation. This work compares U(VI) distribution coefficients (i.e. KD values) measured under field conditions with KD values calculated from a surface complexation model developed in the laboratory. Field studies were conducted in an alluvial aquifer at a former U mill tailings site near the town of Naturita, CO, USA, by suspending approximately 10 g samples of Naturita aquifer background sediments (NABS) in 17 wells (5.1 cm diameter) for periods of 3 to 15 months. Adsorbed U(VI) on these samples was determined by extraction with a pH 9.45 NaHCO3/Na2CO3 solution. In wells where the chemical conditions in groundwater were nearly constant, adsorbed U concentrations for samples taken after 3 months of exposure to groundwater were indistinguishable from samples taken after 15 months. In situ KD values calculated from the measurements of adsorbed and dissolved U(VI) ranged from 0.50 to 10.6 mL/g, and the KD values decreased with increasing groundwater alkalinity, consistent with increased formation of soluble U(VI)-carbonate complexes at higher alkalinities. The in situ KD values were compared with KD values predicted from a surface complexation model (SCM) developed under laboratory conditions in a separate study. Good agreement between the predicted and measured in situ KD values was observed. The demonstration that the laboratory-derived SCM can predict U(VI) adsorption in the field provides a critical independent test of a submodel used in a reactive transport model. © 2004 Elsevier Ltd. All rights reserved.
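The in situ KD is simply the ratio of adsorbed to dissolved concentration. A sketch with hypothetical numbers (not from the study):

```python
def kd_mL_per_g(adsorbed_per_g, dissolved_per_mL):
    """In situ distribution coefficient KD (mL/g): adsorbed U(VI) per gram
    of sediment divided by dissolved U(VI) per mL of groundwater."""
    return adsorbed_per_g / dissolved_per_mL

# Hypothetical numbers: 2.0e-9 mol/g adsorbed and 4.0e-10 mol/mL dissolved
# give KD = 5.0 mL/g, inside the reported 0.50-10.6 mL/g range.
kd = kd_mL_per_g(2.0e-9, 4.0e-10)
```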
Abe, Eiji; Abe, Mari
2011-08-01
With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file applying a three-compartment model and a response surface model to clinical anesthesia. On the Excel sheet, propofol, remifentanil and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli, and laryngoscopy are calculated using the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and their application to clinical anesthesia with this Excel file is simple and helpful in practice.
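The spreadsheet's calculations rest on two standard pharmacokinetic/pharmacodynamic building blocks: an effect-site compartment, dCe/dt = ke0(Cp − Ce), and a sigmoid (Hill) probability-of-no-response curve. A minimal sketch with hypothetical ke0, C50 and γ values (the file's actual models and parameters are not reproduced here):

```python
def effect_site(cp_series, ke0, dt):
    """Euler integration of dCe/dt = ke0 * (Cp - Ce): the effect-site
    concentration Ce lags the plasma concentration Cp."""
    ce, out = 0.0, []
    for cp in cp_series:
        ce += ke0 * (cp - ce) * dt
        out.append(ce)
    return out

def p_no_response(ce, c50, gamma):
    """Sigmoid Emax (Hill) probability of no response to a stimulus."""
    return ce ** gamma / (c50 ** gamma + ce ** gamma)

# Hypothetical: a constant plasma level of 4 ug/mL with ke0 = 0.456 /min;
# Ce rises toward 4 ug/mL, and p_no_response is 0.5 exactly at Ce = C50.
ce_trace = effect_site([4.0] * 100000, 0.456, 0.01)
```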
Measured values of coal mine stopping resistance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oswald, N.; Prosser, B.; Ruckman, R.
2008-12-15
As coal mines become larger, the number of stoppings in the ventilation system increases. Each stopping represents a potential leakage path which must be adequately represented in the ventilation model. Stopping resistance can be calculated using two methods: the USBM method, used to determine the resistance of a single stopping, and the MVS technique, in which an average resistance is calculated for multiple stoppings. From MVS data collected during ventilation surveys of different underground coal mines, average resistances were determined for stoppings in poor, average, good, and excellent condition. Average stopping resistances were determined for concrete block and Kennedy stoppings. Using the average stopping resistances, measured and calculated with the MVS method, provides a ventilation modeling tool that can be used to construct more accurate and useful ventilation models. 3 refs., 3 figs.
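Square-law resistance underlies both methods mentioned above (Δp = R Q²). A minimal sketch of the single-path calculation and of combining stoppings that leak in parallel (illustrative only; not the USBM or MVS procedure itself):

```python
def resistance_from_survey(dp_Pa, q_m3s):
    """Square-law resistance R = dp / Q**2 (N*s^2/m^8) of a single
    leakage path, from a measured pressure drop and leakage quantity."""
    return dp_Pa / q_m3s ** 2

def equivalent_parallel(resistances):
    """Equivalent square-law resistance of leakage paths in parallel:
    1/sqrt(R_eq) = sum(1/sqrt(R_i))."""
    return sum(r ** -0.5 for r in resistances) ** -2

# Illustrative: 100 Pa across a stopping leaking 2 m^3/s gives R = 25;
# two identical stoppings in parallel pass twice the flow, so the
# equivalent resistance drops by a factor of four.
r_single = resistance_from_survey(100.0, 2.0)
r_pair = equivalent_parallel([100.0, 100.0])
```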
Personalized pseudophakic model
NASA Astrophysics Data System (ADS)
Ribeiro, F.; Castanheira-Dinis, A.; Dias, J. M.
2014-08-01
With the aim of taking into account all optical aberrations, a personalized pseudophakic optical model was designed for refractive evaluation using ray-tracing software. Starting with a generic model, all clinically measurable data were replaced by personalized measurements. Data from the corneal anterior and posterior surfaces were imported from a grid of elevation data obtained by topography, and a formula for calculating the intraocular lens (IOL) position was developed based on the lens equator. For the assessment of refractive error, a merit function was built that brings the Modulation Transfer Function values toward the diffraction-limit values at spatial frequencies up to the discrimination limit of the human eye, weighted by the human contrast sensitivity function. The model was tested on the refractive evaluation of 50 pseudophakic eyes. The developed model shows good correlation with the subjective evaluation of a pseudophakic population, with the added advantage of being independent of corrective factors, allowing it to be immediately adaptable to new technological developments. In conclusion, this personalized model, which uses individual biometric values, allows for a precise refractive assessment and is a valuable tool for accurate IOL power calculation, including in conditions to which population averages and the commonly used regression correction factors do not apply, thus achieving the goal of being both personalized and universally applicable.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, can be bounded tightly and rigorously.
An analytical approach to obtaining JWL parameters from cylinder tests
NASA Astrophysics Data System (ADS)
Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.
2017-01-01
An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
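The p-Vr curve and the energy under it follow directly from the JWL form p(V) = A(1 − ω/(R1·V))e^(−R1·V) + B(1 − ω/(R2·V))e^(−R2·V) + ωE0/V. A sketch using illustrative TNT-like literature parameters, not the EDC37 values derived in the paper:

```python
import math

def jwl_pressure(v, A, B, R1, R2, omega, E0):
    """JWL pressure (GPa) at relative volume v:
    p = A(1 - w/(R1 v)) e^{-R1 v} + B(1 - w/(R2 v)) e^{-R2 v} + w E0 / v."""
    return (A * (1.0 - omega / (R1 * v)) * math.exp(-R1 * v)
            + B * (1.0 - omega / (R2 * v)) * math.exp(-R2 * v)
            + omega * E0 / v)

def expansion_work(params, v0=1.0, v1=7.0, n=10000):
    """Energy released between v0 and v1 relative volumes by trapezoidal
    integration of p dV (the integral under the p-Vr curve)."""
    h = (v1 - v0) / n
    ps = [jwl_pressure(v0 + i * h, **params) for i in range(n + 1)]
    return h * (sum(ps) - 0.5 * (ps[0] + ps[-1]))

# Illustrative TNT-like parameter set (GPa units; NOT the EDC37 values):
params = dict(A=373.8, B=3.747, R1=4.15, R2=0.90, omega=0.35, E0=6.0)
```

Integrating out to 7 relative volumes, as in the paper, gives the detonation energy commonly compared between the analytical fit and the hydrocode model.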
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takemasa, Yuichi; Togari, Satoshi; Arai, Yoshinobu
1996-11-01
Vertical temperature differences tend to be great in a large indoor space such as an atrium, and it is important to predict variations of the vertical temperature distribution at an early stage of the design. The authors previously developed and reported a new simplified unsteady-state calculation model for predicting the vertical temperature distribution in a large space. In this paper, the model is applied to predicting the vertical temperature distribution in an existing low-rise atrium that has a skylight and is affected by transmitted solar radiation. Detailed calculation procedures using the model are presented with all the boundary conditions, and analytical simulations are carried out for the cooling condition. Calculated values are compared with measured results. The comparison demonstrates that the calculation model can be applied to the design of a large space. The effects of occupied-zone cooling are also discussed and compared with those of all-zone cooling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timbario, Thomas A.; Timbario, Thomas J.; Laffen, Melissa J.
2011-04-12
Currently, several cost-per-mile calculators exist that can provide estimates of acquisition and operating costs for consumers and fleets. However, these calculators are limited in their ability to determine the difference in cost per mile for consumer versus fleet ownership, to calculate the costs beyond one ownership period, to show the sensitivity of the cost per mile to the annual vehicle miles traveled (VMT), and to estimate future increases in operating and ownership costs. Oftentimes, these tools apply a constant percentage increase over the period of vehicle operation, or in some cases no increase in direct costs at all over time. A more accurate cost-per-mile calculator has been developed that allows the user to analyze these costs for both consumers and fleets. Operating costs included in the calculation tool include fuel, maintenance, tires, and repairs; ownership costs include insurance, registration, taxes and fees, depreciation, financing, and tax credits. The calculator was developed to allow simultaneous comparisons of conventional light-duty internal combustion engine (ICE) vehicles, mild and full hybrid electric vehicles (HEVs), and fuel cell vehicles (FCVs). Additionally, multiple periods of operation, as well as three different annual VMT values for both the consumer case and fleets, can be investigated out to the year 2024. These capabilities were included because today's "cost to own" calculators typically support only one VMT value and are limited to current model year vehicles. The calculator allows the user to select between default values or user-defined values for certain inputs, including fuel cost, vehicle fuel economy, manufacturer's suggested retail price (MSRP) or invoice price, depreciation, and financing rates.
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time-consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for the IR lines. With this improved model, which takes into account line-coupling effects, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
Personalized Pseudophakic Model for Refractive Assessment
Ribeiro, Filomena J.; Castanheira-Dinis, António; Dias, João M.
2012-01-01
Purpose: To test a pseudophakic eye model that allows for intraocular lens (IOL) power calculation, both in normal eyes and in extreme conditions, such as post-LASIK. Methods: Participants: The model's efficacy was tested in 54 participants (104 eyes) who underwent LASIK and were assessed before and after surgery, thus allowing the same method to be tested in the same eye after only changing corneal topography. Modelling: The Liou-Brennan eye model was used as a starting point, and biometric values were replaced by individual measurements. Detailed corneal surface data were obtained from topography (Orbscan®), and a grid of elevation values was used to define corneal surfaces in an optical ray-tracing software (Zemax®). To determine IOL power, optimization criteria based on values of the modulation transfer function (MTF), weighted according to the contrast sensitivity function (CSF), were applied. Results: Pre-operative refractive assessment calculated by our eye model correlated very strongly with SRK/T (r = 0.959, p<0.001), with no difference in average values (16.9±2.9 vs 17.1±2.9 D, p>0.05). Comparison of post-operative refractive assessment obtained using our eye model with the average of currently used formulas showed a strong correlation (r = 0.778, p<0.001), with no difference in average values (21.5±1.7 vs 21.8±1.6 D, p>0.05). Conclusions: The results suggest that personalized pseudophakic eye models and ray-tracing allow the same methodology to be used regardless of previous LASIK, independent of population averages and commonly used regression correction factors, which represents a clinical advantage. PMID:23056450
NASA Astrophysics Data System (ADS)
Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.
2018-04-01
This research aimed to predict the noise produced by traffic in the road network of Makassar City using the ASJ-RTN Model 2008, with horn sounds included in the calculation. Observations were taken at 37 roadside survey points, during 06.00-18.00 and 06.00-21.00, with motorcycles (MC), light vehicles (LV) and heavy vehicles (HV) as the vehicle classes of interest. The observed data were traffic volume, vehicle speed, number of horn soundings, and traffic noise, measured with a Tenmars TM-103 sound level meter. The results indicate that the noise prediction model including horn sounds produces an average noise level of 78.5 dB, with a Pearson correlation of 0.95 and an RMSE of 0.87. The ASJ-RTN Model 2008 with horn sounds included is therefore sufficiently good for predicting noise levels.
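The two reported goodness-of-fit measures are straightforward to compute from paired predicted and observed levels. A minimal sketch (the sample values used in the test are made up, not the survey data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))
```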
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Y.J.; Sohn, G.H.; Kim, Y.J.
Typical LBB (Leak-Before-Break) analysis is performed for the highest-stress location for each type of material in the high-energy pipe line. In most cases, the highest stress occurs at the nozzle-pipe interface at the terminal end. The standard finite element analysis approach to calculating J-integral values at the crack tip utilizes symmetry conditions when modeling near the nozzle as well as away from the nozzle region, to minimize the model size and simplify the calculation. A factor of two is typically applied to the J-integral value to account for the symmetric conditions. This simplified analysis can lead to conservative results, especially for small-diameter pipes, where the asymmetry of the nozzle-pipe interface is ignored. The stiffness of the residual piping system and non-symmetries of geometry, along with the different materials of the nozzle, safe end and pipe, are usually omitted in current LBB methodology. In this paper, the effects of non-symmetries due to geometry and material at the pipe-nozzle interface are presented. Various LBB analyses are performed for a small-diameter piping system to evaluate the effect a nozzle has on the J-integral calculation, crack opening area and crack stability. In addition, material differences between the nozzle and pipe are evaluated. Comparison is made between a pipe model and a nozzle-pipe interface model, and an LBB PED (Piping Evaluation Diagram) curve is developed to summarize the results for use by piping designers.
NASA Technical Reports Server (NTRS)
Timofeyev, Y. M.
1979-01-01
In order to test the error introduced by the assumed transmission function values used by Soviet and American radiometers for thermal sounding of the atmosphere from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of the assumptions from the standard basic model is calculated.
Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David
2016-12-06
There are three common methods for calculating the lift generated by a flying animal from the measured airflow in the wake. However, computational and robot-based studies of flapping wings suggest these methods might not be accurate. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air, using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet while wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen-turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated from the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its parameters, including the vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared with independent direct measurements made with an aerodynamic force platform for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after it. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models. This would also enable much-needed meta-studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
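Of the three wake models, the Kutta-Joukowski estimate is the simplest: lift is the product ρUΓb, so any error in the assumed vortex span b scales the result linearly, which is one reason the models can disagree. A sketch with hypothetical values (none are from the study):

```python
def kutta_joukowski_lift(rho, u, circulation, vortex_span):
    """Quasi-steady lift estimate L = rho * U * Gamma * b from the measured
    circulation Gamma (m^2/s) and an assumed vortex span b (m)."""
    return rho * u * circulation * vortex_span

# Hypothetical values: air density 1.2 kg/m^3, flight speed 5 m/s,
# circulation 0.05 m^2/s, vortex span 0.2 m -> lift of 0.06 N.
lift = kutta_joukowski_lift(1.2, 5.0, 0.05, 0.2)
```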
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omar, M.S., E-mail: dr_m_s_omar@yahoo.com
2012-11-15
Graphical abstract: Three models are derived to explain the nanoparticle-size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The figures shown for Sn nanoparticles indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully applied to nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume; it gives a good account of the size-dependent melting point from the bulk state down to nanoparticles about 2 nm in diameter. Both the lattice volume and melting point values obtained for nanosized materials are used to calculate the lattice thermal expansion using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a nanoparticle 6 nm in diameter.
Lothe, Anjali G; Sinha, Alok
2017-05-01
Leachate pollution index (LPI) is an environmental index which quantifies the pollution potential of the leachate generated at a landfill site. Calculation of the LPI is based on the concentrations of 18 parameters present in the leachate. However, when not all 18 parameters are available, evaluation of the actual LPI value becomes difficult. In this study, a model has been developed to predict the actual value of the LPI when only some of the parameters are available. The model generates eleven equations that help determine upper and lower limits of the LPI; the geometric mean of these two values gives the LPI estimate. Application of this model to three landfill sites results in LPI values with an error of ±20% for Σᵢ wᵢ ≥ 0.6, where the sum runs over the weights of the available parameters. Copyright © 2016 Elsevier Ltd. All rights reserved.
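The final estimation step described above is just the geometric mean of the model's two bounds. A minimal sketch (the bound values are illustrative, not from the study):

```python
import math

def lpi_estimate(lpi_lower, lpi_upper):
    """LPI estimate as the geometric mean of the model's lower and upper
    LPI bounds: sqrt(lower * upper)."""
    return math.sqrt(lpi_lower * lpi_upper)

# Illustrative bounds of 16 and 25 give an LPI estimate of 20.
lpi = lpi_estimate(16.0, 25.0)
```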
NUCLEAR AND HEAVY ION PHYSICS: α-decay half-lives of superheavy nuclei and general predictions
NASA Astrophysics Data System (ADS)
Dong, Jian-Min; Zhang, Hong-Fei; Wang, Yan-Zhao; Zuo, Wei; Su, Xin-Ning; Li, Jun-Qing
2009-08-01
The generalized liquid drop model (GLDM) and the cluster model have been employed to calculate the α-decay half-lives of superheavy nuclei (SHN) using the experimental α-decay Q values. The results of the cluster model are slightly poorer than those from the GLDM when experimental Q values are used. The predictive power of these two models with theoretical Q values from Audi et al. (QAudi) and Muntian et al. (QM) has been tested, showing that the cluster model with QAudi and QM provides reliable results for Z > 112, while the GLDM with QAudi does so for Z <= 112. The half-lives of some still-unknown nuclei are predicted by both models; these results may be useful for future experimental assignment and identification.
Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Byoung
2017-02-01
A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometry of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and the multi-mechanism deformation (M-D) salt constitutive model, using daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited; therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from BC salt and the K0 value of Waste Isolation Pilot Plant (WIPP) salt are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back-fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns raised at the BC site will be explained from this model in a follow-up report.
Calculation of H2-He Flow with Nonequilibrium Ionization and Radiation: an Interim Report
NASA Technical Reports Server (NTRS)
Furudate, Michiko; Chang, Keun-Shik
2005-01-01
The nonequilibrium ionization process in a hydrogen-helium mixture behind a strong shock wave is studied numerically using the detailed ionization rate model developed recently by Park, which accounts for emission and absorption of Lyman lines. The study finds that, once avalanche ionization starts, the Lyman line is self-absorbed. The intensity variation of the radiation at 5145 Angstroms found by Leibowitz in a shock tube experiment can be numerically reproduced by assuming that the ionization behind the shock wave prior to the onset of avalanche ionization is 1.3%. Because a 1.3% initial ionization is highly unlikely, Leibowitz's experimental data are deemed questionable. By varying the initial electron density value in the calculation, the calculated ionization equilibration time is shown to increase approximately as the inverse square root of the initial electron density value. The true ionization equilibration time is most likely much longer than the value found by Leibowitz.
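The reported inverse-square-root scaling can be illustrated with a small sketch; the reference time and densities below are hypothetical, and only the scaling law comes from the abstract:

```python
def equilibration_time(n_e, t_ref, n_e_ref):
    """Scale the ionization equilibration time with the initial electron
    density using the inverse-square-root dependence reported in the
    abstract: t ~ n_e**-0.5. Reference values are hypothetical."""
    return t_ref * (n_e_ref / n_e) ** 0.5

# Quartering the initial electron density doubles the equilibration time:
print(equilibration_time(0.25e18, 10.0e-6, 1.0e18))  # 2e-05
```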
THEORETICAL RESEARCH OF THE OPTICAL SPECTRA AND EPR PARAMETERS FOR Cs2NaYCl6:Dy3+ CRYSTAL
NASA Astrophysics Data System (ADS)
Dong, Hui-Ning; Dong, Meng-Ran; Li, Jin-Jin; Li, Deng-Feng; Zhang, Yi
2013-09-01
The important material Cs2NaYCl6 doped with rare-earth ions has received much attention because of its excellent optical and magnetic properties. Based on the superposition model, in this paper the crystal-field energy levels, the electron paramagnetic resonance g factors of Dy3+, and the hyperfine structure constants of the 161Dy3+ and 163Dy3+ isotopes in Cs2NaYCl6 crystal are studied by diagonalizing the 42 × 42 energy matrix. In the calculations, the contributions of various admixtures and interactions, such as the J-mixing, the mixtures among states with the same J value, and the covalence, are all considered. The calculated results are in reasonable agreement with the observed values, and the results are discussed.
NASA Astrophysics Data System (ADS)
Choi, S.; Kim, C.; Kim, H. R.; Park, C.; Park, H. Y.
2015-12-01
We performed marine magnetic and bathymetry surveys in the Lau basin in October 2009 to find submarine hydrothermal deposits. We acquired magnetic and bathymetry datasets using an Overhauser proton magnetometer, SeaSPY (Marine Magnetics Co.), and a multi-beam echo sounder, EM120 (Kongsberg Co.). We processed the data to obtain detailed seabed topography, the magnetic anomaly, and the reduction to the pole (RTP). The Lau basin is one of the youngest back-arc basins in the Southwest Pacific, and the region hosts much hydrothermal activity and many hydrothermal deposits. In particular, the Tofua Arc (TA) in the Lau basin consists of various complex stratovolcanoes (Massoth et al., 2007). We calculated the magnetic susceptibility distribution of the TA19-1 seamount (longitude 176°23.5'W, latitude 22°42.5'S) area using the RTP data by 3-D magnetic inversion, following Jung's previous study (2013). Based on the 2-D 'compact gravity inversion' of Last and Kubik (1983), we extended the algorithm to 3-D using an iteratively reweighted least-squares method with several weight matrices. Two types of weight matrices are used: 1) the minimum gradient support (MGS), which controls the spatial distribution of the solution (Portniaguine and Zhdanov, 1999); and 2) a depth weight applied according to the shape of the subsurface structures. From the modeling, we derived appropriate scale factors for the depth weight and the magnetic susceptibility setting. Furthermore, a very small error value had to be introduced to control the computation at singular points of the inversion model. In addition, we applied separate weighting values to recover the correct shape and depth of the magnetic source. We selected the best model by monitoring the convergence of the RMS. The final modeled result and the RTP values in this study are generally similar to each other, but the input values and the modeled values differ slightly.
This difference is likely caused by the various complex stratovolcanoes, an incomplete understanding of the regional geological distribution, the modeling design, and the limited vertical resolution arising from non-uniqueness in the potential field, among other factors. Better results can be expected from an improved modeling design combined with more geological survey data.
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the 'square error' calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the 'absolute error' functions (SAR and SARD) are superior to the 'square error' functions (SSR and SSRD) as objective functions for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and that SAR may be a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling to support resources management).
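A minimal sketch of the sensitivity difference between the two objective functions, using made-up observation/simulation vectors:

```python
def ssr(obs, sim):
    """Sum of squared errors."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def sar(obs, sim):
    """Sum of absolute errors."""
    return sum(abs(o - s) for o, s in zip(obs, sim))

# Two hypothetical simulations with the same total absolute error:
obs   = [1.0, 2.0, 3.0, 10.0]
sim_a = [2.0, 3.0, 4.0, 11.0]   # error spread evenly (1 each)
sim_b = [1.0, 2.0, 3.0, 14.0]   # error concentrated on the peak (4)

print(sar(obs, sim_a), sar(obs, sim_b))  # 4.0 4.0  -> SAR treats them alike
print(ssr(obs, sim_a), ssr(obs, sim_b))  # 4.0 16.0 -> SSR penalizes the peak
```

This is the behavior the abstract points at: squaring the residuals makes the objective function disproportionately sensitive to the few large (e.g., peak-flow) errors.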
Analytical calculation of vibrations of electromagnetic origin in electrical machines
NASA Astrophysics Data System (ADS)
McCloskey, Alex; Arrasate, Xabier; Hernández, Xabier; Gómez, Iratxo; Almandoz, Gaizka
2018-01-01
Electrical motors are widely used and are often required to satisfy comfort specifications. Thus, vibration response estimations are necessary to reach optimum machine designs. This work presents an improved analytical model to calculate the vibration response of an electrical machine. The stator and windings are modelled as a double circular cylindrical shell. As the stator is a laminated structure, orthotropic properties are applied to it. The values of those material properties are calculated according to the characteristics of the motor and the known material properties taken from previous works. The model proposed takes into account the axial direction, so that length is considered, and also the contribution of the windings, which differs from one machine to another. These aspects make the model valuable for a wide range of electrical motor types. In order to validate the analytical calculation, natural frequencies are calculated and compared to those obtained by the Finite Element Method (FEM), giving relative errors below 10% for several circumferential and axial mode order combinations. The analytical vibration calculation is also validated against acceleration measurements on a real machine. The comparison shows good agreement for the proposed model, with the most important frequency components of the same order of magnitude. A simplified two-dimensional model is also applied, and its results are not as satisfactory.
Computer modeling of current collection by the CHARGE-2 mother payload
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Lilley, J. R., Jr.; Katz, I.; Neubert, T.; Myers, Neil B.
1990-01-01
The three-dimensional computer codes NASCAP/LEO and POLAR have been used to calculate current collection by the mother payload of the CHARGE-2 rocket under conditions of positive and negative potential up to several hundred volts. For negative bias (ion collection), the calculations lie about 25 percent above the data, indicating that the ions were less dense, colder, or heavier than the input parameters. For positive bias (electron collection), NASCAP/LEO and POLAR calculations show similar agreement with the measurements at the highest altitudes. This agreement indicates that the current is classically magnetically limited, even during electron beam emission. However, the calculated values fall well below the data at lower altitudes. It is suggested that beam-plasma-neutral interactions are responsible for the high values of collected current at altitudes below 240 km.
NASA Astrophysics Data System (ADS)
Komarov, I. I.; Rostova, D. M.; Vegera, A. N.
2017-11-01
This paper presents the results of a study on the degree and nature of the influence of burner unit operating conditions and flare geometric parameters on heat transfer in the combustion chamber of fire-tube boilers. The changes in the outlet gas temperature and in the radiant and convective specific heat flow rates under modification of the flare expansion angle and length were determined using the Ansys CFX software package. The difference between the total heat flow and bulk gas temperature at the flue tube outlet calculated by the known thermal calculation methods and those obtained in the mathematical simulation was determined. Based on the results of the study, shortcomings of the calculation methods in use were identified and areas for their improvement were outlined.
Experimental Guidance for Isospin Symmetry Breaking Calculations via Single Neutron Pickup Reactions
NASA Astrophysics Data System (ADS)
Leach, K. G.; Garrett, P. E.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Ball, G.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.; Towner, I. S.
2013-03-01
Recent activity in superallowed isospin-symmetry-breaking correction calculations has prompted interest in experimental confirmation of these calculation techniques. The shell-model set of Towner and Hardy (2008) includes the opening of specific core orbitals that were previously frozen. This has resulted in significant shifts in some of the δC values and an improved agreement of the individual corrected Ft values with the adopted world average of the 13 cases currently included in the high-precision evaluation of Vud. While the nucleus-to-nucleus variation of Ft is consistent with the conserved-vector-current (CVC) hypothesis of the Standard Model, these new calculations must be thoroughly tested, and guidance must be given for their improvement. Presented here are details of a 64Zn(d⃗,t)63Zn experiment undertaken to provide such guidance.
Comparison of Taxi Time Prediction Performance Using Different Taxi Speed Decision Trees
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2017-01-01
In the STBO modeler and tactical surface scheduler for ATD-2 project, taxi speed decision trees are used to calculate the unimpeded taxi times of flights taxiing on the airport surface. The initial taxi speed values in these decision trees did not show good prediction accuracy of taxi times. Using the more recent, reliable surveillance data, new taxi speed values in ramp area and movement area were computed. Before integrating these values into the STBO system, we performed test runs using live data from Charlotte airport, with different taxi speed settings: 1) initial taxi speed values and 2) new ones. Taxi time prediction performance was evaluated by comparing various metrics. The results show that the new taxi speed decision trees can calculate the unimpeded taxi-out times more accurately.
Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W
2017-01-01
Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. 
We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
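A minimal sketch of the concentration workflow the abstract describes, assuming the widely used literature per-chromophore coefficients at 280 nm; the residue counts in the example are hypothetical, and only the 1.05 native/unfolded factor comes from this study:

```python
# Per-chromophore molar extinction coefficients at 280 nm for an unfolded
# protein (M^-1 cm^-1); commonly cited literature values, not taken from
# this study.
EPS = {"Trp": 5500, "Tyr": 1490, "cystine": 125}

def predicted_extinction(n_trp, n_tyr, n_cystine, factor=1.05):
    """Calculated unfolded-protein extinction coefficient, scaled by the
    1.05 native/unfolded factor reported in the abstract."""
    calc = n_trp * EPS["Trp"] + n_tyr * EPS["Tyr"] + n_cystine * EPS["cystine"]
    return factor * calc

def concentration(absorbance, eps, path_cm=1.0):
    """Beer-Lambert law: A = eps * c * l  ->  c = A / (eps * l)."""
    return absorbance / (eps * path_cm)

# Hypothetical antibody-like protein: 10 Trp, 30 Tyr, 16 disulfides
eps = predicted_extinction(10, 30, 16)
print(round(eps))                      # 106785 (M^-1 cm^-1)
print(concentration(0.5, eps))         # molar concentration for A280 = 0.5
```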
NASA Astrophysics Data System (ADS)
Aliberti, P.; Feng, Y.; Takeda, Y.; Shrestha, S. K.; Green, M. A.; Conibeer, G.
2010-11-01
Theoretical efficiencies of a hot carrier solar cell considering indium nitride as the absorber material have been calculated in this work. In a hot carrier solar cell highly energetic carriers are extracted from the device before thermalisation, allowing higher efficiencies in comparison to conventional solar cells. Previous reports on efficiency calculations approached the problem using two different theoretical frameworks, the particle conservation (PC) model or the impact ionization model, which are only valid in particular extreme conditions. In addition an ideal absorber material with the approximation of parabolic bands has always been considered in the past. Such assumptions give an overestimation of the efficiency limits and results can only be considered indicative. In this report the real properties of wurtzite bulk InN absorber have been taken into account for the calculation, including the actual dispersion relation and absorbance. A new hybrid model that considers particle balance and energy balance at the same time has been implemented. Effects of actual impact ionization (II) and Auger recombination (AR) lifetimes have been included in the calculations for the first time, considering the real InN band structure and thermalisation rates. It has been observed that II-AR mechanisms are useful for cell operation in particular conditions, allowing energy redistribution of hot carriers. A maximum efficiency of 43.6% has been found for 1000 suns, assuming thermalisation constants of 100 ps and ideal blackbody absorption. This value of efficiency is considerably lower than values previously calculated adopting PC or II-AR models.
2007-08-01
group. 3. Non-calculative MTL – those graded high in this factor evinced a strong link to cultural values. Collectivist values were found to be in...the baby’s needs for both dependence and autonomy molds an unconscious psychological structure in the baby – an internal working model - which, in...questionnaires that included a personality questionnaire, a cultural value questionnaire, and a leadership self efficacy questionnaire. In addition
Findlay, R P; Dimbylow, P J
2009-04-21
If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary over the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronics Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of λ/2, λ/4 and λ/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to the calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of the calculated SAR instead of the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.
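The recommended averaging procedure (mean of the squared field strengths over the body height, then the square root) can be sketched as follows; the sample values are hypothetical:

```python
import math

def spatial_mean_field(field_samples_v_per_m):
    """Spatially averaged field per the IEEE/ICNIRP recommendation
    described in the abstract: average the squares of the field strength
    over the body height, then take the square root."""
    squares = [e ** 2 for e in field_samples_v_per_m]
    return math.sqrt(sum(squares) / len(squares))

# Ten hypothetical E-field measurements (V/m) over the body height:
samples = [61, 58, 55, 50, 47, 45, 44, 43, 42, 41]
print(round(spatial_mean_field(samples), 1))  # 49.1
```

Note that this root-mean-square value is slightly larger than the plain arithmetic mean whenever the field varies over the body, which is why squaring before averaging matters for the comparison with reference levels.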
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
A study on leakage radiation dose at ELV-4 electron accelerator bunker
NASA Astrophysics Data System (ADS)
Chulan, Mohd Rizal Md; Yahaya, Redzuwan; Ghazali, Abu BakarMhd
2014-09-01
Shielding is an important aspect of accelerator safety, and one of the most important aspects of bunker shielding is the door. The bunker door should be designed properly to minimize leakage radiation, which shall not exceed the permitted limit of 2.5 μSv/h. In determining the leakage radiation dose that passes through the door and the gaps between the door and the wall, 2-dimensional manual calculations are often used. This method is hard to perform because 2-dimensional visualization is limited and very difficult to relate to the real situation, so estimates are normally made instead. As a result, construction costs can be higher, because overestimates or underestimates require costly modifications to the bunker. In this study, two methods are therefore introduced to overcome this problem: simulation using the MCNPX Version 2.6.0 software, and manual calculation using a 3-dimensional model from the Autodesk Inventor 2010 software. The values from the two methods were then compared to the real values from direct measurements using a Ludlum Model 3 survey meter with a Model 44-9 probe.
Design of Raft Foundations for High-Rise Buildings on Jointed Rock
NASA Astrophysics Data System (ADS)
Justo, J. L.; García-Núñez, J.-C.; Vázquez-Boza, M.; Justo, E.; Durand, P.; Azañón, J. M.
2014-07-01
This paper presents calculations of displacements and bending moments in a 2-m-thick reinforced-concrete foundation slab using three-dimensional finite-element software. A preliminary paper was presented by Justo et al. (Rock Mech Rock Eng 43:287-304, 2010). The slab is the base of a tower of 137 m height above foundation, supported on jointed and partly weathered basalt and scoria. Installation of rod extensometers at different depths below foundation allowed comparison between measured displacements and displacements calculated using moduli obtained from rock classification systems and three material models: elastic, Mohr-Coulomb and hardening (H). Although all three material models can provide acceptable results, the H model is preferable when there are unloading processes. Acceptable values of settlement may be achieved with medium meshing and an approximate distribution of loads. The absolute values of negative bending moments (tensions below) increase as the rock mass modulus decreases or when the mesh is refined. The paper stresses the importance of adequately representing the details of the distribution of loads and the necessity for fine meshing to obtain acceptable values of bending moments.
Photolysis Rate Coefficient Calculations in Support of SOLVE Campaign
NASA Technical Reports Server (NTRS)
Lloyd, Steven A.; Swartz, William H.
2001-01-01
The objectives for this SOLVE project were 3-fold. First, we sought to calculate a complete set of photolysis rate coefficients (j-values) for the campaign along the ER-2 and DC-8 flight tracks. En route to this goal, it would be necessary to develop a comprehensive set of input geophysical conditions (e.g., ozone profiles), derived from various climatological, aircraft, and remotely sensed datasets, in order to model the radiative transfer of the atmosphere accurately. These j-values would then need validation by comparison with flux-derived j-value measurements. The second objective was to analyze chemistry along back trajectories using the NASA/Goddard chemistry trajectory model initialized with measurements of trace atmospheric constituents. This modeling effort would provide insight into the completeness of current measurements and the chemistry of Arctic wintertime ozone loss. Finally, we sought to coordinate stellar occultation measurements of ozone (and thus ozone loss) during SOLVE using the MSX/UVISI satellite instrument. Such measurements would determine ozone loss during the Arctic polar night and represent the first significant science application of space-based stellar occultation in the Earth's atmosphere.
Nitric oxide concentration near the mesopause as deduced from ionospheric absorption measurements
NASA Astrophysics Data System (ADS)
Lastovicka, J.
The upper-D-region NO concentration is calculated on the basis of published 2775-kHz-absorption, Lyman-alpha (OSO-5), and X-ray (Solrad-9) data obtained over Central Europe in June-August 1969, 1970, and 1972. Ionization-rate and radio-wave-absorption profiles for solar zenith angles of 60, 70 and 40 deg are computed, presented graphically, and compared with model calculations to derive the NO-concentration correction coefficients necessary to make the Lyman-alpha/X-ray flux ratios of the models of Meira (1971), Baker et al. (1977), Tohmatsu and Iwagami (1976), and Tisone (1973) agree with the observed ratios. Values of the corrected NO concentration include 6.5 × 10¹³ m⁻³ at 78 km and 8.5 × 10¹³ m⁻³ at 90 km. The values are shown to be higher than those of standard models but within the range of observed concentrations.
An investigation of empennage buffeting
NASA Technical Reports Server (NTRS)
Lan, C. E.; Lee, I. G.
1986-01-01
Progress in the investigation of empennage buffeting is reviewed. In summary, the following tasks were accomplished: the relevant literature was reviewed; equations for calculating structural response were formulated; root-mean-square values of the root bending moment for a 65-degree rigid delta wing were calculated and compared with data; and a water-tunnel test program for an F-18 model was completed.
New method to calculate the N2 evolution from mixed venous blood during the N2 washout.
Han, D; Jeng, D R; Cruz, J C; Flores, X F; Mallea, J M
2001-08-01
To model the normalized phase III slope (Sn) from N2 expirograms of the multibreath N2 washout is a challenge to researchers. Experimental measurements show that Sn increases with the number of breaths. Previously, we predicted Sn by setting the N2 concentration (atm) of mixed venous blood (Fbi,N2) to a constant value of 0.3 after the fifth breath to calculate the amount of N2 transferred from the blood to the alveoli. As a consequence, the predicted curve of Sn values showed a maximum before the quasi-steady state was reached. In this paper, we present a way of calculating the amount of N2 transferred from the blood to the alveoli by setting Fbi,N2 as follows: in the first six breaths, Fbi,N2 is kept constant at the initial value of 0.8, because the circulation time needs at least 30 s to alter it. Thereafter, a single exponential function of the breath number is used: Fbi = 0.8 exp[0.112(6-n)], in which n is the breath number. The predicted Sn values were compared with experimental data from the literature. The assumption of an exponential decay in the N2 evolved from mixed venous blood is important in determining the shape of the Sn curve, but new experimental data are needed to determine the validity of the model. We conclude that this new approach to calculating the N2 evolution from the blood is more physiologically meaningful.
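The mixed venous N2 scheme described above can be sketched directly from the stated formula:

```python
import math

def mixed_venous_n2(n):
    """N2 concentration (atm) in mixed venous blood at breath n, per the
    scheme in the abstract: held at the initial value of 0.8 for the
    first six breaths, then decayed as Fbi = 0.8 * exp[0.112 * (6 - n)]."""
    if n <= 6:
        return 0.8
    return 0.8 * math.exp(0.112 * (6 - n))

for n in (1, 6, 7, 12, 20):
    print(n, round(mixed_venous_n2(n), 3))
```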
NASA Astrophysics Data System (ADS)
da Cunha, Antonio R.; Duarte, Evandro L.; Lamy, M. Teresa; Coutinho, Kaline
2014-08-01
We combined theoretical and experimental studies to elucidate the important deprotonation process of Emodin in water. We used UV/Visible spectrophotometric titration curves to obtain its pKa values, pKa1 = 8.0 ± 0.1 and pKa2 = 10.9 ± 0.2. Additionally, we obtained the pKa values of Emodin in a water-methanol mixture (1:3 v/v). We give a new interpretation of the experimental data, obtaining apparent pKa1 = 6.2 ± 0.1, pKa2 = 8.3 ± 0.1 and pKa3 > 12.7. Performing quantum mechanics calculations for all possible deprotonation sites and tautomeric isomers of Emodin in vacuum and in water, we identified the sites of the first and second deprotonations. We calculated the standard deprotonation free energy of Emodin in water and the pKa1 using an explicit model of the solvent, with Free Energy Perturbation theory in Monte Carlo simulations, obtaining ΔGaq = 12.1 ± 1.4 kcal/mol and pKa1 = 8.7 ± 0.9. With the polarizable continuum model for the solvent, we obtained ΔGaq = 11.6 ± 1.0 kcal/mol and pKa1 = 8.3 ± 0.7. Both solvent models gave theoretical results in very good agreement with the experimental values.
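The conversion from deprotonation free energy to pKa underlying these numbers is the standard relation pKa = ΔG/(RT ln 10); a minimal sketch at an assumed 298.15 K (the paper's exact temperature and any corrections are not given here, so the results differ slightly from the quoted pKa1 values):

```python
import math

R_KCAL = 1.987204e-3  # gas constant, kcal/(mol*K)

def pka_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """pKa from a standard deprotonation free energy:
    pKa = dG / (R * T * ln 10)."""
    return dg_kcal_per_mol / (R_KCAL * temp_k * math.log(10))

# Free energies from the abstract:
print(round(pka_from_dg(12.1), 1))  # 8.9 (abstract reports 8.7 +/- 0.9)
print(round(pka_from_dg(11.6), 1))  # 8.5 (abstract reports 8.3 +/- 0.7)
```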
Chen, Hung-Cheng; Hsu, Chao-Ping
2005-12-29
To calculate electronic couplings for photoinduced electron transfer (ET) reactions, we propose and test the use of ab initio quantum chemistry calculation for excited states with the generalized Mulliken-Hush (GMH) method. Configuration-interaction singles (CIS) is proposed to model the locally excited (LE) and charge-transfer (CT) states. When the CT state couples with other high lying LE states, affecting coupling values, the image charge approximation (ICA), as a simple solvent model, can lower the energy of the CT state and decouple the undesired high-lying local excitations. We found that coupling strength is weakly dependent on many details of the solvent model, indicating the validity of the Condon approximation. Therefore, a trustworthy value can be obtained via this CIS-GMH scheme, with ICA used as a tool to improve and monitor the quality of the results. Systems we tested included a series of rigid, sigma-linked donor-bridge-acceptor compounds where "through-bond" coupling has been previously investigated, and a pair of molecules where "through-space" coupling was experimentally demonstrated. The calculated results agree well with experimentally inferred values in the coupling magnitudes (for both systems studied) and in the exponential distance dependence (for the through-bond series). Our results indicate that this new scheme can properly account for ET coupling arising from both through-bond and through-space mechanisms.
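For reference, the two-state GMH coupling is commonly written as H_DA = |μ12| ΔE12 / √(Δμ12² + 4μ12²); a hedged sketch (our function name; the paper's multistate CIS treatment is more involved than this two-state reduction):

```python
import math

def gmh_coupling(de12, mu12, dmu12):
    """Two-state generalized Mulliken-Hush electronic coupling.

    de12  -- vertical energy gap between the two adiabatic states
    mu12  -- transition dipole moment (projected on the CT direction)
    dmu12 -- difference of the adiabatic-state dipole moments
    All quantities in consistent units; coupling returned in the
    units of de12.
    """
    return abs(mu12 * de12) / math.sqrt(dmu12**2 + 4.0 * mu12**2)
```

Note that increasing the dipole difference at fixed transition dipole reduces the coupling, which is why decoupling high-lying local excitations (e.g. with the image charge approximation) stabilizes the extracted value.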
Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L
2017-06-01
To develop a statistical model generating utility estimates for prostate cancer specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimate values were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states with five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. The appropriateness of each model (linear regression, mixed effects, or generalized estimating equation) for generating prostate cancer utility estimates was determined by paired t-tests comparing observed and predicted values. Mixed-corrected standard SG utility estimates accounting for loss aversion were calculated based on prospect theory. 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population). In total, 792 valuations were elicited (six health states for each of the 132 participants). The most appropriate model for the classification system was a mixed effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states and that there is a difference between the valuations made by patients and those of the general population.
NASA Astrophysics Data System (ADS)
Johri, Manoj; Johri, Gajendra K.; Rishishwar, Rajendra P.
1990-12-01
The study of spectral lineshape is important to understand intermolecular forces1-5. We have calculated the linewidth and the lineshift for different rotation-vibration transitions of linear molecules (CO and HCl) perturbed by argon using a generalized interaction potential4. The Murphy-Boggs6 (MB), Mehrotra-Boggs7 (MEB) and perturbation theories have been used for the linewidth calculation. The lineshift parameters have been calculated using the MEB theory7, including the phase shift effect and ignoring Ji=Ji and Jf=Jf transitions. In these calculations the variation of the rotational constant with the vibrational quantum number has been taken into account. The calculated lineshift parameters decrease with an increase in the initial rotational quantum number (Ji). They remain positive for the lower values of Ji and become negative for the higher values of Ji, whereas the measured8 values are negative for all the transitions. The calculated linewidth parameters using the MEB theory7 are lower by about 15% than the measured values for CO-Ar collisions. The vibrational dependence in CO-Ar collisions shows a significant change in the lineshift. For HCl-Ar collisions the discrepancy between the calculated linewidth parameters using the Mehrotra-Boggs theory and the measured9 values is about 46% for the J=0-1 transition and decreases to 22% for the J=8-9 transition. The results of the perturbation theory do not show regular variation of the linewidth parameters with the rotational state. The linewidth parameters using the Murphy-Boggs theory are lower than the measured9 values by about 50% for all the transitions considered. It is found that the contribution of the diabatic collisions is important, as included in the perturbative and the Mehrotra-Boggs approaches. Further, if the pressure broadening method is to be used to probe the anisotropy of intermolecular forces, the existing theoretical models and experimental techniques need to be modified.
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
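As an illustration of the beam-theory side of the comparison, a hypothetical sketch (idealizing the diaphysis as a hollow circular section is our simplification, not the paper's measured geometry):

```python
import math

def hollow_circle_second_moment(r_outer, r_inner):
    """Second moment of area of a hollow circular section,
    an idealized long-bone diaphysis: I = pi*(ro^4 - ri^4)/4."""
    return math.pi * (r_outer**4 - r_inner**4) / 4.0

def beam_bending_stress(moment, y, second_moment):
    """Classic beam-theory bending stress: sigma = M*y / I,
    with y the distance from the neutral axis."""
    return moment * y / second_moment
```

The paper's point is that such formulas ignore shaft curvature and cross-sectional asymmetry, which is where the deviations from FEA arise.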
Haddad, S; Tardif, R; Viau, C; Krishnan, K
1999-09-05
The biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation is, at present, advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of an exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant increase in the BHI (i.e., BHI > 1) can be determined by iterative simulation.
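The additive hazard-index arithmetic against which the interactions-based BHI is compared can be sketched as follows (our naming; the PBTK simulation of biomarker concentrations is not reproduced here):

```python
def hazard_index(biomarker_concs, bei_limits):
    """Hazard-index-style sum: each simulated biomarker concentration
    divided by its biological exposure index (BEI). A value above 1
    indicates the guidance level is exceeded."""
    return sum(c / limit for c, limit in zip(biomarker_concs, bei_limits))
```

In the interactions-based variant, the concentrations fed to this sum come from a PBTK model with competitive-inhibition terms rather than from additivity assumptions.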
Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey
Parsons, T.
2004-01-01
New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M ≥ 7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of a M ≥ 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
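The Poisson part of the calculation follows from P = 1 − exp(−λt) for a constant occurrence rate λ; a minimal sketch (our function names):

```python
import math

def poisson_probability(annual_rate, years):
    """Probability of at least one event in `years`,
    for a Poisson process with a constant annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_from_probability(prob, years):
    """Inverse: the annual rate implied by a probability over `years`."""
    return -math.log(1.0 - prob) / years
```

The time-dependent and stress-transfer estimates in the abstract modify this baseline by conditioning on elapsed time since the last event and on the 1999 Izmit stress change.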
Protein dielectric constants determined from NMR chemical shift perturbations.
Kukic, Predrag; Farrell, Damien; McIntosh, Lawrence P; García-Moreno E, Bertrand; Jensen, Kristine Steen; Toleikis, Zigmantas; Teilum, Kaare; Nielsen, Jens Erik
2013-11-13
Understanding the connection between protein structure and function requires a quantitative understanding of electrostatic effects. Structure-based electrostatic calculations are essential for this purpose, but their use has been limited by a long-standing discussion on which value to use for the dielectric constants (ε(eff) and ε(p)) required in Coulombic and Poisson-Boltzmann models. The currently used values for ε(eff) and ε(p) are essentially empirical parameters calibrated against thermodynamic properties that are indirect measurements of protein electric fields. We determine optimal values for ε(eff) and ε(p) by measuring protein electric fields in solution using direct detection of NMR chemical shift perturbations (CSPs). We measured CSPs in 14 proteins to get a broad and general characterization of electric fields. Coulomb's law reproduces the measured CSPs optimally with a protein dielectric constant (ε(eff)) from 3 to 13, with an optimal value across all proteins of 6.5. However, when the water-protein interface is treated with finite difference Poisson-Boltzmann calculations, the optimal protein dielectric constant (ε(p)) ranged from 2 to 5 with an optimum of 3. It is striking how similar this value is to the dielectric constant of 2-4 measured for protein powders and how different it is from the ε(p) of 6-20 used in models based on the Poisson-Boltzmann equation when calculating thermodynamic parameters. Because the value of ε(p) = 3 is obtained by analysis of NMR chemical shift perturbations instead of thermodynamic parameters such as pK(a) values, it is likely to describe only the electric field and thus represent a more general, intrinsic, and transferable ε(p) common to most folded proteins.
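A screened Coulomb interaction with an effective dielectric constant, as in the paper's first model, can be sketched as follows (our naming; the conversion constant is the standard e²/(4πε0) expressed in kcal·Å/mol, and the default ε(eff) = 6.5 is the cross-protein optimum quoted above):

```python
COULOMB_KCAL = 332.06  # e^2/(4*pi*eps0) in kcal*angstrom/mol

def coulomb_energy(q1, q2, r_angstrom, eps_eff=6.5):
    """Screened Coulomb interaction energy (kcal/mol) between two
    point charges (in units of e) separated by r_angstrom, using a
    uniform effective dielectric constant."""
    return COULOMB_KCAL * q1 * q2 / (eps_eff * r_angstrom)
```

The Poisson-Boltzmann treatment replaces this single ε(eff) with a low protein dielectric (ε(p) ≈ 3) and a high-dielectric solvent, handled numerically rather than in closed form.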
Espinosa, J R; Young, J M; Jiang, H; Gupta, D; Vega, C; Sanz, E; Debenedetti, P G; Panagiotopoulos, A Z
2016-10-21
Direct coexistence molecular dynamics simulations of NaCl solutions and Lennard-Jones binary mixtures were performed to explore the origin of reported discrepancies between solubilities obtained by direct interfacial simulations and values obtained from the chemical potentials of the crystal and solution phases. We find that the key cause of these discrepancies is the use of crystal slabs of insufficient width to eliminate finite-size effects. We observe that for NaCl crystal slabs thicker than 4 nm (in the direction perpendicular to the interface), the same solubility values are obtained from the direct coexistence and chemical potential routes, namely, 3.7 ± 0.2 molal at T = 298.15 K and p = 1 bar for the JC-SPC/E model. Such finite-size effects are absent in the Lennard-Jones system and are likely caused by surface dipoles present in the salt crystals. We confirmed that μs-long molecular dynamics runs are required to obtain reliable solubility values from direct coexistence calculations, provided that the initial solution conditions are near the equilibrium solubility values; even longer runs are needed for equilibration of significantly different concentrations. We do not observe any effects of the exposed crystal face on the solubility values or equilibration times. For both the NaCl and Lennard-Jones systems, the use of a spherical crystallite embedded in the solution leads to significantly higher apparent solubility values relative to the flat-interface direct coexistence calculations and the chemical potential values. Our results have broad implications for the determination of solubilities of molecular models of ionic systems.
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all-atom model). This all-atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. The correlation between the all-atom model and the continuum model was found to be better than the respective correlations calculated for linear fits to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear-fitting empirical model. We also tried a sigmoid fitting empirical model in addition to the linear one. When the weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fit the results of both the all-atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values were chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slopes of both linear fitting curves became smaller. This suggests that the screening effect of an aqueous medium at short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4.
To investigate the linear increase of the effective relative dielectric constant, the Poisson equation for a low-dielectric sphere in a high-dielectric medium was solved, and charges distributed near the molecular surface were shown to lead to the apparent linearity.
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng
2015-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of detonation products by solving chemical equilibrium equations. A chemical equilibrium code was developed based on the theory proposed in this article and then used in the following typical calculations: (i) calculation of the detonation parameters of explosives, where the calculated values of the detonation velocity, detonation pressure and detonation temperature are in good agreement with experimental ones; (ii) calculation of the isentropic unloading line of RDX explosive, whose starting point is the CJ point. In comparison with the results of the JWL EOS, it is found that the calculated value of gamma decreases monotonically using the theory presented in this paper, while a double-peak phenomenon appears using the JWL EOS.
NASA Astrophysics Data System (ADS)
Wu, Xiaoru; Gao, Yingyu; Ban, Chunlan; Huang, Qiang
2016-09-01
In this paper the results of a vapor-liquid equilibria study at 100 kPa are presented for two binary systems: α-phenylethylamine(1) + toluene(2) and α-phenylethylamine(1) + cyclohexane(2). The binary VLE data of the two systems were correlated by the Wilson, NRTL, and UNIQUAC models. For each binary system the deviations between the results of the correlations and the experimental data have been calculated. For both binary systems the average relative deviations in temperature for the three models were lower than 0.99%. The average absolute deviations in vapour-phase composition (mole fractions) and in temperature T were lower than 0.0271 and 1.93 K, respectively. Thermodynamic consistency has been tested for all vapor-liquid equilibrium data by the Herington method. The values calculated by the Wilson and NRTL equations satisfied the thermodynamic consistency test for both systems, while the values calculated by the UNIQUAC equation did not.
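For reference, the Wilson model for a binary system has the closed form sketched below (our naming; Λ12 and Λ21 are the fitted binary interaction parameters):

```python
import math

def wilson_binary_gammas(x1, lam12, lam21):
    """Activity coefficients (gamma1, gamma2) of a binary mixture
    from the Wilson model:
      ln g1 = -ln(x1 + L12*x2) + x2*[L12/(x1+L12*x2) - L21/(x2+L21*x1)]
      ln g2 = -ln(x2 + L21*x1) - x1*[L12/(x1+L12*x2) - L21/(x2+L21*x1)]
    """
    x2 = 1.0 - x1
    term = lam12 / (x1 + lam12 * x2) - lam21 / (x2 + lam21 * x1)
    ln_g1 = -math.log(x1 + lam12 * x2) + x2 * term
    ln_g2 = -math.log(x2 + lam21 * x1) - x1 * term
    return math.exp(ln_g1), math.exp(ln_g2)
```

With both parameters equal to 1 the mixture is ideal (γ1 = γ2 = 1), which is a convenient sanity check when fitting to the measured VLE data.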
A mesoscopic simulation of static and dynamic wetting using many-body dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Ghorbani, Najmeh; Pishevar, Ahmadreza
2018-01-01
A many-body dissipative particle dynamics simulation is applied here to pave the way for investigating the behavior of mesoscale droplets after impact on horizontal solid substrates. First, hydrophobic and hydrophilic substrates are simulated by tuning the solid-liquid interfacial interaction parameters of an innovative conservative force model. The static contact angles are calculated on homogeneous and several patterned surfaces and compared with the values predicted by Cassie's law in order to verify the model. The results properly evaluate the increase in surface superhydrophobicity that results from surface patterning. The drop impact phenomenon is then studied by calculating the spreading factor and dimensionless height versus dimensionless time, and comparisons are made between the results and the experimental values for three different static contact angles. The results show the capability of the procedure in calculating the maximum spreading factor, which is a significant quantity in ink-jet printing and coating processes.
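Cassie's law used for the verification step can be sketched as follows (our naming; air pockets on a patterned surface enter with a contact angle of 180°):

```python
import math

def cassie_contact_angle(fractions, angles_deg):
    """Apparent contact angle on a heterogeneous surface from
    Cassie's law: cos(theta*) = sum_i f_i * cos(theta_i), with
    area fractions f_i summing to 1."""
    cos_eff = sum(f * math.cos(math.radians(a))
                  for f, a in zip(fractions, angles_deg))
    return math.degrees(math.acos(cos_eff))
```

For example, a surface that is half solid (intrinsic angle 120°) and half air pockets yields an apparent angle above 120°, the superhydrophobicity enhancement the abstract refers to.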
The effects of rigid motions on elastic network model force constants
Lezon, Timothy R.
2012-01-01
Elastic network models provide an efficient way to quickly calculate protein global dynamics from experimentally determined structures. The model’s single parameter, its force constant, determines the physical extent of equilibrium fluctuations. The values of force constants can be calculated by fitting to experimental data, but the results depend on the type of experimental data used. Here we investigate the differences between calculated values of force constants fit to data from NMR and X-ray structures. We find that X-ray B factors carry the signature of rigid-body motions, to the extent that B factors can be almost entirely accounted for by rigid motions alone. When fitting to more refined anisotropic temperature factors, the contributions of rigid motions are significantly reduced, indicating that the large contribution of rigid motions to B factors is a result of over-fitting. No correlation is found between force constants fit to NMR data and those fit to X-ray data, possibly due to the inability of NMR data to accurately capture protein dynamics. PMID:22228562
Musil, Karel; Florianova, Veronika; Bucek, Pavel; Dohnal, Vlastimil; Kuca, Kamil; Musilek, Kamil
2016-01-05
Acetylcholinesterase reactivators (oximes) are compounds used for antidotal treatment in cases of organophosphorus poisoning. The dissociation constants (pK(a1)) of ten standard or promising acetylcholinesterase reactivators were determined by ultraviolet absorption spectrometry. Two methods of spectra measurement (UV-vis spectrometry, FIA/UV-vis) were applied and compared. Soft and hard models for the calculation of pK(a1) values were applied. The recommended pK(a1) range is 7.00-8.35, where at least 10% of the oximate anion is available for organophosphate reactivation. All tested oximes were found to have pK(a1) in this range. The FIA/UV-vis method provided rapid sample throughput, low sample consumption, and high sensitivity and precision compared to the standard UV-vis method. The hard calculation model was proposed as more accurate for pK(a1) calculation. Copyright © 2015 Elsevier B.V. All rights reserved.
Detonation Performance Analyses for Recent Energetic Molecules
NASA Astrophysics Data System (ADS)
Stiel, Leonard; Samuels, Philip; Spangler, Kimberly; Iwaniuk, Daniel; Cornell, Rodger; Baker, Ernest
2017-06-01
Detonation performance analyses were conducted for a number of evolving and potential high explosive materials. The calculations were completed for theoretical maximum densities of the explosives using the Jaguar thermo-chemical equation of state computer programs for performance evaluations and JWL/JWLB equation-of-state parameterizations. A number of recently synthesized materials were investigated for performance characterization and comparison to existing explosives, including TNT, RDX, HMX, and CL-20. The analytic cylinder model was utilized to establish cylinder and Gurney velocities as functions of the radial expansion of the cylinder for each explosive. The densities and heats of formation utilized in the calculations are primarily experimental values from Picatinny Arsenal and other sources. Several of the new materials considered were predicted to have enhanced detonation characteristics compared to conventional explosives. In order to confirm the accuracy of the Jaguar and analytic cylinder model results, available experimental detonation and Gurney velocities for representative energetic molecules and their formulations were compared with the corresponding calculated values. Close agreement was obtained with most of the data. Presently at NATO.
NASA Astrophysics Data System (ADS)
Panagoulia, D.; Trichakis, I.
2012-04-01
Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to figure out the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow by utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained using the backpropagation algorithm and the logistic activation function. Using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The ANN model was trained both on the high-flow partition and on all flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previously observed flow data. After training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (the RMSE is 31.4% lower) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9%, with a simultaneous increase of 64.4% in the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets.
Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability in simulating extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
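The threshold partition described above (low: x < μ; medium: μ ≤ x ≤ μ + 2σ; high: x > μ + 2σ) can be sketched as follows (our naming; the ANN itself is not reproduced here):

```python
import statistics

def partition_flows(flows):
    """Split a flow series into low/medium/high classes using the
    thresholds mu and mu + 2*sigma computed from the series itself."""
    mu = statistics.mean(flows)
    sigma = statistics.pstdev(flows)
    low = [q for q in flows if q < mu]
    medium = [q for q in flows if mu <= q <= mu + 2 * sigma]
    high = [q for q in flows if q > mu + 2 * sigma]
    return low, medium, high
```

An "a priori" partition computes these thresholds before training and trains a dedicated model on the high class; the "a posteriori" alternative applies them only when evaluating a model trained on all data.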
The Effect of Roughness Model on Scattering Properties of Ice Crystals.
NASA Technical Reports Server (NTRS)
Geogdzhayev, Igor V.; Van Diedenhoven, Bastiaan
2016-01-01
We compare stochastic models of microscale surface roughness assuming uniform and Weibull distributions of crystal facet tilt angles to calculate scattering by roughened hexagonal ice crystals using the geometric optics (GO) approximation. Both distributions are determined by similar roughness parameters, while the Weibull model depends on the additional shape parameter. Calculations were performed for two visible wavelengths (864 nm and 410 nm) for roughness values between 0.2 and 0.7 and Weibull shape parameters between 0 and 1.0 for crystals with aspect ratios of 0.21, 1 and 4.8. For this range of parameters we find that, for a given roughness level, varying the Weibull shape parameter can change the asymmetry parameter by up to about 0.05. The largest effect of the shape parameter variation on the phase function is found in the backscattering region, while the degree of linear polarization is most affected at the side-scattering angles. For high roughness, scattering properties calculated using the uniform and Weibull models are in relatively close agreement for a given roughness parameter, especially when a Weibull shape parameter of 0.75 is used. For smaller roughness values, a shape parameter close to unity provides a better agreement. Notable differences are observed in the phase function over the scattering angle range from 5° to 20°, where the uniform roughness model produces a plateau while the Weibull model does not.
2014-01-01
We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
NASA Astrophysics Data System (ADS)
Harrach, Robert J.; Rogers, Forest J.
1981-09-01
Two equation-of-state (EOS) models for multiply ionized matter are evaluated for the case of an aluminum plasma in the temperature range from about one eV to several hundred eV, spanning conditions of weak to strong ionization. Specifically, the simple analytical model of Zel'dovich and Raizer and the more comprehensive model comprised by Rogers' plasma physics activity expansion code (ACTEX) are used to calculate the specific internal energy ɛ and average degree of ionization Z¯*, as functions of temperature T and density ρ. In the absence of experimental data, these results are compared against each other, covering almost five orders of magnitude of variation in ɛ and the full range of Z¯*. We find generally good agreement between the two sets of results, especially for low densities and for temperatures near the upper end of the range. Calculated values of ɛ(T) agree to within ±30% over nearly the full range in T for densities below about 1 g/cm3. Similarly, the two models predict values of Z¯*(T) which track each other fairly well; above 20 eV the discrepancy is less than ±20% for ρ≲1 g/cm3. Where the calculations disagree, we expect the ACTEX code to be more accurate than Zel'dovich and Raizer's model, by virtue of its more detailed physics content.
Dose Calculation For Accidental Release Of Radioactive Cloud Passing Over Jeddah
NASA Astrophysics Data System (ADS)
Alharbi, N. D.; Mayhoub, A. B.
2011-12-01
For the evaluation of doses after a reactor accident, in particular the inhalation dose, a thorough knowledge of the concentration of the various radionuclides in air during the passage of the plume is required. In this paper we present an application of the Gaussian Plume Model (GPM) to calculate the atmospheric dispersion and airborne radionuclide concentration resulting from a radioactive cloud over the city of Jeddah (KSA). The radioactive cloud is assumed to be emitted from a 10 MW reactor in a postulated accidental release. Committed effective doses (CEDs) to the public at different distances from the source to the receptor are calculated. The calculations are based on meteorological conditions and data of the Jeddah site: Pasquill atmospheric stability class B and a wind speed of 2.4 m/s at 10 m height in the N direction. The residence times of some radionuclides considered in this study were also calculated. The results indicate that the dose values first increase with distance, reach a maximum and then gradually decrease. The total dose received by humans is estimated using the estimated residence time of each radioactive pollutant at different distances.
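The ground-level GPM concentration used in such assessments can be sketched as follows. The Briggs rural coefficients for stability class B are one common parameterization; the release rate and effective height below are illustrative placeholders, not the paper's source term:

```python
import math

def briggs_sigma_B(x):
    """Briggs (rural) dispersion coefficients for Pasquill stability class B.
    x: downwind distance in metres. Returns (sigma_y, sigma_z) in metres."""
    sigma_y = 0.16 * x / math.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.12 * x
    return sigma_y, sigma_z

def plume_conc_ground(Q, u, H, x, y=0.0):
    """Ground-level Gaussian-plume concentration with ground reflection.
    Q: release rate (Bq/s), u: wind speed (m/s), H: effective release
    height (m), x: downwind distance (m), y: crosswind offset (m)."""
    sy, sz = briggs_sigma_B(x)
    return (Q / (math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * math.exp(-H**2 / (2 * sz**2)))

# Illustrative values only (release rate and height are placeholders);
# u = 2.4 m/s echoes the Jeddah wind-speed datum quoted above.
for x in (200.0, 500.0, 2000.0):
    print(x, plume_conc_ground(Q=1.0e9, u=2.4, H=10.0, x=x))
```

For an elevated release the exponential in H produces exactly the rise-to-a-maximum-then-decay behavior with distance that the abstract describes.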
Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).
Bag, Arijit; Ghorai, Pradip Kr
2016-05-01
To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this is computationally very expensive as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. Using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ) and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare them with experimental results and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to the values calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
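The paper's ab initio expression is not reproduced here. As a generic stand-in, a linear fit of pIC50 to the three descriptors named above (hydrophobicity, μ, ω), with the intercept absorbing the protein-dependent proportionality constant, can be sketched as follows; every number is an invented placeholder:

```python
import numpy as np

# Hypothetical descriptor table for a few inhibitors: hydrophobicity (logP),
# dipole moment mu (debye), reactivity descriptor omega (eV). Placeholders,
# not data from the paper.
X = np.array([
    [2.1, 3.4, 1.2],
    [3.0, 2.1, 1.5],
    [1.5, 4.0, 0.9],
    [2.7, 2.8, 1.1],
    [3.4, 1.9, 1.6],
])
pIC50 = np.array([5.2, 6.1, 4.8, 5.6, 6.4])  # -log10(IC50), placeholders

# Fit pIC50 as a linear combination of the three descriptors plus a
# constant term (the protein-dependent proportionality factor).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, pIC50, rcond=None)
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - pIC50) ** 2)))
print("coefficients:", coef, "RMSE:", rmse)
```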
The electronic and optical properties of quantum nano-structures
NASA Astrophysics Data System (ADS)
Ham, Heon
In semiconducting quantum nano-structures, the excitonic effects play an important role when we fabricate opto-electronic devices, such as lasers, diodes, detectors, etc. To gain a better understanding of the excitonic effects in quantum nano-structures, we investigated the exciton binding energy, oscillator strength, and linewidth in quantum nano-structures using both the infinite and finite well models. We also investigated the hydrogenic impurity binding energy and the photoionization cross section of the hydrogenic impurity in a spherical quantum dot. In our work, the variational approach is used in all calculations, because the Hamiltonian of the system is not separable, due to the different symmetries of the Coulomb and confining potentials. In the infinite well model of the semiconducting quantum nanostructures, the binding energy of the exciton increases with decreasing width of the potential barriers due to the increase in the effective strength of the Coulomb interaction between the electron and hole. In the finite well model, the exciton binding energy reaches a peak value, and the binding energy decreases with further decrease in the width of the potential barriers. The exciton linewidth in the infinite well model increases with decreasing wire radius, because the scattering rate of the exciton increases with decreasing wire radius. In the finite well model, the exciton linewidth in a cylindrical quantum wire reaches a peak value and the exciton linewidth decreases with further decrease in the wire radius, because the exciton is not well confined at very small wire radii. The binding energy of the hydrogenic impurity in a spherical quantum dot has also been calculated using both the infinite and the finite well models. The binding energy of the hydrogenic impurity was calculated for on-center and off-center impurities in the spherical quantum dots.
With decreasing radii of the dots, the binding energy of the hydrogenic impurity increases in the infinite well model. The binding energy of the hydrogenic impurity in the finite well model reaches a peak value and decreases with further decrease in the dot radii for both on-center and off-center impurities. We have calculated the photoionization cross section as a function of the radius and the frequency using both the infinite and finite well models. The photoionization cross section has a peak value at a frequency where the photon energy equals the difference between the final and initial state energies of the impurity. The behavior of the cross section with dot radius depends upon the location of the impurity and the polarization of the electromagnetic field.
Electrode Models for Electric Current Computed Tomography
CHENG, KUO-SHENG; ISAACSON, DAVID; NEWELL, J. C.; GISSER, DAVID G.
2016-01-01
This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, 29.5 Ω · cm were studied. Values of “effective” contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω · cm2, respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an “effective” contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
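As a one-dimensional caricature of how an "effective" contact impedance enters a measured voltage (this is not the paper's full finite element electrode model, just the series-resistance intuition behind it):

```python
def two_electrode_voltage(I, rho, L, area, z_contact):
    """Voltage across a uniform bath between two plate electrodes,
    including an 'effective' contact impedance z_contact (ohm*cm^2) at
    each electrode-bath interface -- a 1D caricature of the complete
    electrode model. I: current (A), rho: bath resistivity (ohm*cm),
    L: bath length (cm), area: electrode area (cm^2)."""
    r_bath = rho * L / area
    r_contact = 2.0 * z_contact / area  # two electrode-bath interfaces
    return I * (r_bath + r_contact)

# Resistivity and contact-impedance magnitudes loosely echo the paper's
# ranges; the geometry and current are illustrative placeholders.
print(two_electrode_voltage(I=1e-3, rho=284.0, L=10.0, area=4.0, z_contact=58.0))
```

In the real 2D/3D model the contact impedance enters as a boundary condition at each electrode rather than a lumped series resistor, but the scaling with z/area is the same.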
Low-energy proton induced M X-ray production cross sections for 70Yb, 81Tl and 82Pb
NASA Astrophysics Data System (ADS)
Shehla; Mandal, A.; Kumar, Ajay; Roy Chowdhury, M.; Puri, Sanjiv; Tribedi, L. C.
2018-07-01
The cross sections for production of Mk (k = Mξ, Mαβ, Mγ, Mm1) X-rays of 70Yb, 81Tl and 82Pb induced by 50-250 keV protons have been measured in the present work. The experimental cross sections have been compared with earlier reported values and with those calculated using ionization cross sections based on the ECPSSR (perturbed (P) stationary (S) state (S), incident ion energy (E) loss, Coulomb (C) deflection and relativistic (R) correction) model, X-ray emission rates based on the Dirac-Fock model, and fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model. In addition, the present measured proton induced X-ray production cross sections have also been compared with those calculated using the Dirac-Hartree-Slater (DHS) model based ionization cross sections and those based on the Plane Wave Born Approximation (PWBA). The measured M X-ray production cross sections are, in general, found to be higher than the ECPSSR and DHS model based values and lower than the PWBA model based cross sections.
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
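The combination of importance sampling with occasional global trial moves can be illustrated on a toy one-dimensional integral; this is a generic Metropolis sketch of the sampling idea, not the SC-IVR machinery itself:

```python
import math, random

random.seed(0)

def f(x):
    return math.exp(-x * x)          # integrand; exact integral is sqrt(pi)

def q_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)  # N(0,1) weight

def sample_q(n, p_global=0.1, step=0.5, width=8.0):
    """Metropolis sampling of q using a mix of local moves and occasional
    *global* trial moves (independent draws from a wide uniform), which
    help the walker escape local regions of the sampling space."""
    x, out = 0.0, []
    for _ in range(n):
        if random.random() < p_global:
            y = random.uniform(-width / 2, width / 2)  # global trial move
        else:
            y = x + random.uniform(-step, step)        # local trial move
        if random.random() < min(1.0, q_pdf(y) / q_pdf(x)):
            x = y
        out.append(x)
    return out

# Importance-sampling estimate of integral(f) = E_q[f/q] -> sqrt(pi).
xs = sample_q(200_000)
estimate = sum(f(x) / q_pdf(x) for x in xs) / len(xs)
print(estimate, math.sqrt(math.pi))
```

Both proposal types here are symmetric, so the plain Metropolis acceptance ratio is valid; in the paper the weight function is instead built from the trajectory's contribution to the correlation function.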
Constraining Star Formation in Old Stellar Populations from Theoretical Spectra
NASA Astrophysics Data System (ADS)
Peterson, R. C.
2007-12-01
We are calculating stellar spectra using Kurucz codes, Castelli models, and Kurucz laboratory lines plus guesses, but must first finish adjusting gf values to match stars of solar metallicity and higher. We show that even now, 1D LTE spectral calculations fit a wide range of stellar spectra (from A to K types) over 2200-9000 Å once gf values are set to optimize them. Moreover, weighted coadditions of spectral calculations can be constructed that match M31 globular clusters over this entire wavelength range. Both stellar and composite grids will be archived on MAST. The age-metallicity degeneracy can be broken, but only with high-quality data, and only if rare stages of stellar evolution are incorporated where necessary.
Dose conversion coefficients for photon exposure of the human eye lens.
Behrens, R; Dietze, G
2011-01-21
In recent years, several papers dealing with the eye lens dose have been published, because epidemiological studies implied that the induction of cataracts occurs even at eye lens doses of less than 500 mGy. Different questions were addressed: Which personal dose equivalent quantity is appropriate for monitoring the dose to the eye lens? Is a new definition of the dose quantity H(p)(3) based on a cylinder phantom to represent the human head necessary? Are current conversion coefficients from fluence to equivalent dose to the lens sufficiently accurate? To investigate the latter question, a realistic model of the eye including the inner structure of the lens was developed. Using this eye model, conversion coefficients for electrons have already been presented. In this paper, the same eye model-with the addition of the whole body-was used to calculate conversion coefficients from fluence (and air kerma) to equivalent dose to the lens for photon radiation from 5 keV to 10 MeV. Compared to the values adopted in 1996 by the International Commission on Radiological Protection (ICRP), the new values are similar between 40 keV and 1 MeV and lower by up to a factor of 5 and 7 for photon energies at about 10 keV and 10 MeV, respectively. Above 1 MeV, the new values (calculated without kerma approximation) should be applied in pure photon radiation fields, while the values adopted by the ICRP in 1996 (calculated with kerma approximation) should be applied in case a significant contribution from secondary electrons originating outside the body is present.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2017-07-01
We study the effect of hindered aggregation on the island formation process in one- (1D) and two-dimensional (2D) point-island models for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by a length la. For la = 0 the islands behave as perfect sinks, while for la → ∞ they behave as reflecting boundaries. For intermediate values of la, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low-coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and la. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model, based on an approximate description of nucleation inside the gaps for 1D and the CZs for 2D. In both cases, nucleation is described using two different physically rooted probabilities, which are related to the microscopic parameters of the model (i and la). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes the statistical behavior of the system excellently for arbitrary values of la and i = 1, 2, and 3.
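A toy i = 1 variant of hindered attachment can be simulated directly. The sticking rule p_att = 1/(1 + la) below is an illustrative way to interpolate between perfect sinks (la = 0) and reflecting islands (la → ∞); it is not the paper's exact barrier implementation, and deposition onto occupied sites is simply skipped:

```python
import random
random.seed(1)

def simulate(L=1000, n_deposit=200, hops_per_deposit=50, l_a=0.0):
    """1D point-island growth sketch with hindered attachment (i = 1).
    Islands occupy single sites; a monomer hopping onto an island site
    attaches with probability 1/(1 + l_a). Returns (islands, monomers)."""
    site = ['e'] * L                 # 'e' empty, 'm' monomer, 'I' island
    monomers = []
    p_att = 1.0 / (1.0 + l_a)
    for _ in range(n_deposit):
        s = random.randrange(L)
        if site[s] == 'e':           # deposit a monomer (skip occupied sites)
            site[s] = 'm'
            monomers.append(s)
        for _ in range(hops_per_deposit):
            if not monomers:
                break
            k = random.randrange(len(monomers))
            x = monomers[k]
            nx = (x + random.choice((-1, 1))) % L
            if site[nx] == 'I':      # hindered attachment to an island
                if random.random() < p_att:
                    site[x] = 'e'
                    monomers.pop(k)
                continue
            if site[nx] == 'm':      # i = 1: a dimer nucleates a new island
                site[x] = 'e'
                site[nx] = 'I'
                monomers = [m for m in monomers if m not in (x, nx)]
                continue
            site[x] = 'e'            # ordinary diffusion hop
            site[nx] = 'm'
            monomers[k] = nx
    return site.count('I'), len(monomers)

islands0, mono0 = simulate(l_a=0.0)   # perfect-sink limit
islands9, mono9 = simulate(l_a=9.0)   # strongly hindered attachment
print(islands0, islands9)
```

Sweeping l_a in such a simulation is one way to see the crossover from diffusion-limited to attachment-limited aggregation numerically.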
Gamow-Teller Strength Distributions for pf-shell Nuclei and its Implications in Astrophysics
NASA Astrophysics Data System (ADS)
Rahman, M.-U.; Nabi, J.-U.
2009-08-01
The pf-shell nuclei are present in abundance in the pre-supernova and supernova phases and are considered to play an important role in the dynamics of core collapse supernovae. The B(GT) values are calculated for the pf-shell nuclei 55Co and 57Zn using the pn-QRPA theory. The calculated B(GT) strengths differ from earlier reported shell model calculations; however, the results are in good agreement with the experimental data. These B(GT) strengths are used in the calculations of weak decay rates, which play a decisive role in core-collapse supernova dynamics and nucleosynthesis. Unlike previous calculations, the so-called Brink's hypothesis is not assumed in the present calculation, which leads to a more realistic estimate of weak decay rates. The electron capture rates are calculated over a wide grid of temperature (0.01 × 10^9 to 30 × 10^9 K) and density (10 to 10^11 g cm^-3). Our rates are enhanced compared to the reported shell model rates. This enhancement is attributed partly to the liberty of selecting a huge model space, allowing consideration of many more excited states in the present electron capture rate calculations.
Finite Element Analysis of Walking Beam of a New Compound Adjustment Balance Pumping Unit
NASA Astrophysics Data System (ADS)
Wu, Jufei; Wang, Qian; Han, Yunfei
2017-12-01
In this paper, taking the walking beam of the new compound balance pumping unit as the research object, the three-dimensional model is established in SolidWorks and the loads and constraints are determined. ANSYS Workbench is used to analyze the tail and the whole of the beam; the resulting stress and deformation meet the strength requirements. Finite element simulation and theoretical calculation of the bending moment about the beam's central axis are carried out, and the simulation results are compared with those of the theoretical mechanics model to verify the correctness of the theoretical calculation. The finite element analysis is consistent with the theoretical calculation results, and the bending moment values provide a theoretical reference for follow-up optimization and design.
Geomechanical Model Calibration Using Field Measurements for a Petroleum Reserve
NASA Astrophysics Data System (ADS)
Park, Byoung Yoon; Sobolik, Steven R.; Herrick, Courtney G.
2018-03-01
A finite element numerical analysis model has been constructed that consists of a mesh that effectively captures the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multimechanism deformation (M-D) salt constitutive model using the daily data of actual wellhead pressure and oil-brine interface location. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The value of A2, obtained experimentally from BC salt, and the value of K0, obtained from Waste Isolation Pilot Plant salt, are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and the salt drawdown skins surrounding each SPR cavern have been determined through a number of back analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict the behaviors of the salt dome, caverns, caprock, and interbed layers. The geotechnical concerns associated with the BC site from this analysis will be explained in a follow-up paper.
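The back-analysis idea, adjusting a multiplication factor until a forward model reproduces the baseline closure from CAVEMAN, can be sketched with a deliberately fictitious forward model; the real calibration runs the M-D finite element computation at each step:

```python
def forward_closure(a2f, base_rate=0.004, years=10.0):
    """Fictitious stand-in for the forward model: fractional cavern volume
    closure grows linearly with the structure-factor multiplier and time.
    The real analysis evaluates the M-D finite element model instead."""
    return a2f * base_rate * years

def calibrate(target, lo=0.1, hi=10.0, tol=1e-10):
    """Bisection on the multiplication factor; valid here because the
    stand-in forward model is monotone in the factor."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if forward_closure(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Suppose the baseline (CAVEMAN) closure over 10 years is 6%:
a2f = calibrate(target=0.06)
print(a2f)
```

With two factors (A2F and K0F) and several caverns, the same loop generalizes to a least-squares search over the factor pair.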
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feister, Uwe; Meyer, Gabriele; Kirst, Ulrich
2013-05-10
Seamen working on vessels that travel tropical and subtropical routes are at risk of receiving high doses of solar erythemal radiation. Due to small solar zenith angles and low ozone values, the UV index and erythemal dose are much higher than at mid- and high latitudes. UV index values over tropical and subtropical oceans can exceed UVI = 20, more than double typical mid-latitude UV index values, and the daily erythemal dose can exceed 30 times typical mid-latitude winter values. Measurements of the erythemal exposure of different body parts of seamen have been performed along 4 routes of merchant vessels. The data base has been extended by two years of continuous solar irradiance measurements taken on the mast top of RV METEOR. Radiative transfer model calculations for clear sky along the ship routes have been performed that use satellite-based input for ozone and aerosols to provide maximum erythemal irradiance and dose. The whole data base is intended to be used to derive the individual erythemal exposure of seamen during work time.
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias.
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended.
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
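The grid-based estimator D̂ = N̂/Â with a boundary-strip area adjustment can be sketched as follows; the numbers are illustrative, not the study's, and the strip width is passed in rather than fixed at full or half MMDM:

```python
def grid_density(n_hat, grid_side_m, strip_w_m):
    """Grid-based density estimate D = N / A. The effective trapping area
    A is the grid square plus a boundary strip of width strip_w_m on each
    side (e.g. the species-specific full MMDM, as in the study above).
    Returns animals per hectare."""
    area_m2 = (grid_side_m + 2.0 * strip_w_m) ** 2
    return n_hat / (area_m2 / 10_000.0)

# Illustrative numbers only: 25 animals estimated on a 100 m x 100 m grid
# with a 20 m boundary strip -> effective area 140 m x 140 m = 1.96 ha.
print(grid_density(25, 100.0, 20.0))
```

The strip width is exactly the Ŵ adjustment criticized above: doubling it shrinks the density estimate substantially, which is why its theoretical justification matters.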
Seismic hazard, risk, and design for South America
Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison
2018-01-01
We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground‐motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil that is applied in modern building codes or by applying site‐specific factors based on VS30 shear‐wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. Resulting hazard and associated risk is high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from effects of future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best available science.
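The probabilities of exceedance quoted above (2%, 10%, 50% in 50 years) map to annual exceedance rates and return periods via the standard Poisson-occurrence assumption:

```python
import math

def poisson_rate_from_poe(poe, t_years=50.0):
    """Annual exceedance rate implied by a probability of exceedance (PoE)
    over t_years, assuming Poisson occurrence:
    PoE = 1 - exp(-rate * t)  =>  rate = -ln(1 - PoE) / t."""
    return -math.log(1.0 - poe) / t_years

for poe in (0.02, 0.10, 0.50):
    rate = poisson_rate_from_poe(poe)
    print(f"{poe:.0%} in 50 yr -> return period {1.0 / rate:.0f} yr")
```

This recovers the familiar correspondences, e.g. 2% in 50 years is roughly the 2475-year return period and 10% in 50 years roughly the 475-year return period used in building codes.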
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leheta, D; Shvydka, D; Parsai, E
2015-06-15
Purpose: For photon dose calculation, the Philips Pinnacle Treatment Planning System (TPS) uses a collapsed cone convolution algorithm, which relies on the energy spectrum of the beam in computing the scatter component. The spectrum is modeled based on the Linac's standard commissioning data and typically is not independently verified. We explored a methodology of using transmission measurements in combination with regularization data processing to unfold Linac spectra. The measured spectra were compared to those modeled by the TPS, and the effect on patient plans was evaluated. Methods: Transmission measurements were conducted in narrow-beam geometry using a standard Farmer ionization chamber. Two attenuating materials and two build-up caps, having different atomic numbers, served to enhance discrimination between absorption of the low- and high-energy portions of the spectra, thus improving the accuracy of the results. The data were analyzed using a regularization technique implemented through spreadsheet-based calculations. Results: The unfolded spectra were found to deviate from the TPS beam models. The effect of such deviations on treatment planning was evaluated for patient plans through dose distribution calculations with either TPS-modeled or measured energy spectra. The differences were reviewed through comparison of isodose distributions and quantified based on maximum dose values for critical structures. While in most cases no drastic differences in the calculated doses were observed, plans with deviations of 4 to 8% in the maximum dose values for critical structures were discovered. Anatomical sites with large scatter contributions are the most vulnerable to inaccuracies in the modeled spectrum. Conclusion: An independent check of the TPS model spectrum is highly desirable and should be included as part of commissioning of a new Linac. The effect is particularly important for dose calculations in high-heterogeneity regions.
The developed approach makes acquisition of megavoltage Linac beam spectra achievable in a typical radiation oncology clinic.
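The unfolding step can be sketched as a regularized linear inversion of narrow-beam transmission data. All inputs below are synthetic placeholders; a real unfolding would use tabulated attenuation coefficients for the actual attenuator materials and impose non-negativity on the spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Coarse 3-bin "spectrum" and fake per-bin attenuation coefficients.
mu = np.array([0.40, 0.15, 0.05])          # cm^-1 per energy bin (invented)
t = np.linspace(0.0, 30.0, 10)             # attenuator thicknesses, cm
A = np.exp(-np.outer(t, mu))               # transmission response matrix

w_true = np.array([0.2, 0.5, 0.3])         # "true" spectral weights
meas = A @ w_true + rng.normal(0.0, 1e-3, len(t))   # noisy transmission data

# Tikhonov-regularized least squares: minimize |A w - meas|^2 + lam |w|^2.
lam = 1e-4
w_est = np.linalg.solve(A.T @ A + lam * np.eye(len(mu)), A.T @ meas)
w_est /= w_est.sum()                        # normalize the spectrum to 1
print("estimated:", w_est, "true:", w_true)
```

Regularization is essential here because the exponential response columns are strongly correlated, which is the same ill-conditioning the spreadsheet-based technique in the abstract has to tame.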
40 CFR 89.210 - Maintenance of records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... limits (FEL); (3) Power rating for each configuration tested; (4) Projected applicable production/sales volume for the model year; and (5) Actual applicable production/sales volume for the model year. (c) Any... actual quarterly and cumulative applicable production/sales volume; (3) The values required to calculate...
A Novel Degradation Identification Method for Wind Turbine Pitch System
NASA Astrophysics Data System (ADS)
Guo, Hui-Dong
2018-04-01
It is difficult for traditional threshold-value methods to identify degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a wind turbine condition maintenance strategy is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters of the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and generated power as input parameters, and wheel rotation speed, pitch angle and motor driving current for the three blades as output parameters. The difference between the on-line measurement and the value calculated by the MIMO regression model, fitted with the least squares support vector machines (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was applied to fit the distribution of the multi-dimensional Observed Vectors. Using the established model, the Degradation Index was calculated from the SCADA data of a wind turbine with a damaged pitch bearing retainer and rolling elements, which illustrates the feasibility of the proposed method.
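The residual-based degradation index can be sketched as follows. This is a simplified stand-in, not the paper's method: an ordinary linear least-squares MIMO model replaces the LSSVM, a single Gaussian replaces the GMM, and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "healthy" SCADA data. Inputs: wind speed, power; outputs stand in
# for rotor speed, pitch angle, and motor current.
X_healthy = rng.uniform([3.0, 0.2], [12.0, 2.0], size=(500, 2))
W_true = np.array([[1.2, 0.4, 0.8],
                   [0.5, 2.0, 1.5]])
Y_healthy = X_healthy @ W_true + rng.normal(0.0, 0.05, (500, 3))

# Fit the MIMO regression model on healthy data (linear lstsq as LSSVM stand-in).
W_hat, *_ = np.linalg.lstsq(X_healthy, Y_healthy, rcond=None)

# Observed Vector = measurement minus model prediction; fit its distribution
# (single Gaussian as GMM stand-in).
R = Y_healthy - X_healthy @ W_hat
mu, cov_inv = R.mean(axis=0), np.linalg.inv(np.cov(R.T))

def degradation_index(x, y):
    """Mahalanobis distance of the residual from the healthy distribution."""
    r = y - x @ W_hat - mu
    return float(np.sqrt(r @ cov_inv @ r))

x_new = np.array([8.0, 1.0])
healthy_y = x_new @ W_true
faulty_y = healthy_y + np.array([0.0, 0.5, 0.5])   # simulated pitch fault
```

A faulty observation produces a residual far outside the healthy distribution, so its index is much larger than that of a healthy one; a threshold on this index then flags degradation.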
Initial comparison of single cylinder Stirling engine computer model predictions with test results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single-cylinder rhombic-drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units, and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion-space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure-drop calculations underestimate the actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or underestimation of mechanical losses. For hydrogen as the working gas, the predicted values of brake power are 0 to 6% higher than experimental values and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.
Continuous variation caused by genes with graduated effects.
Matthysse, S; Lange, K; Wagener, D K
1979-01-01
The classical polygenic theory of inheritance postulates a large number of genes with small, and essentially similar, effects. We propose instead a model with genes of gradually decreasing effects. The resulting phenotypic distribution is not normal; if the gene effects are geometrically decreasing, it can be triangular. The joint distribution of parent and offspring genic value is calculated. The most readily testable difference between the two models is that, in the decreasing-effect model, the variance of the offspring distribution from given parents depends on the parents' genic values. The more the parents deviate from the mean, the smaller the variance of the offspring should be. In the equal-effect model the offspring variance is independent of the parents' genic values. PMID:288073
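The triangular distribution mentioned for geometrically decreasing effects can be checked with a quick, purely illustrative simulation (it does not reproduce the paper's parent-offspring calculation): with effects halving locus by locus, each gamete's genic value is approximately uniform on (0, 1), so the diploid value, being the sum of two independent gametes, is approximately triangular on (0, 2).

```python
import numpy as np

rng = np.random.default_rng(42)
n_loci, n_ind = 20, 100_000
effects = 0.5 ** np.arange(1, n_loci + 1)        # 1/2, 1/4, 1/8, ... (geometric)

def gamete_values():
    # Each locus carries allele 0 or 1 with equal probability; the gamete's
    # genic value is the effect-weighted allele sum (~uniform on (0, 1) here).
    return rng.integers(0, 2, (n_ind, n_loci)) @ effects

genic = gamete_values() + gamete_values()        # diploid = sum of two gametes

# Triangular(0, 1, 2) has mean 1 and variance 1/6.
```

The non-normal shape is the testable signature: under the equal-effect polygenic model the same sum would be approximately Gaussian.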
Asymmetric behavior of the B(E2↑;0+ → 2+) values in 104-130Sn and generalized seniority
NASA Astrophysics Data System (ADS)
Maheshwari, Bhoomika; Jain, Ashok Kumar; Singh, Balraj
2016-08-01
We present freshly evaluated B(E2↑; 0+ → 2+) values across the even-even Sn isotopes, which confirm the presence of an asymmetric behavior as well as a dip in the middle of the full valence space. We explain these features by using the concept of generalized seniority. The dip in the B(E2) values near 116Sn is understood in terms of a change in the dominant orbits before and after the mid-shell, which also explains the presence of asymmetric peaks in the B(E2) values. This approach helps in identifying the most active valence spaces for a given set of isotopes and in singling out the most useful truncation scheme for Large-Scale Shell Model (LSSM) calculations. The LSSM calculations so guided by generalized seniority are also able to reproduce the experimental B(E2↑) data quite well.
Indirect Measurement of Nitrogen in a Multi-Component Natural Gas by Heating the Gas
Morrow, Thomas B.; Behring, II, Kendricks A.
2004-06-22
Methods of indirectly measuring the nitrogen concentration in a natural gas by heating the gas. In two embodiments, the heating energy is correlated to the speed of sound in the gas, the diluent concentrations in the gas, and constant values, resulting in a model equation. Regression analysis is used to calculate the constant values, which can then be substituted into the model equation. If the diluent concentrations other than nitrogen (typically carbon dioxide) are known, the model equation can be solved for the nitrogen concentration.
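A hedged sketch of the regression step: the abstract does not give the exact form of the model equation, so a hypothetical linear form H = c0 + c1·S + c2·xCO2 + c3·xN2 (H heating energy, S speed of sound, x* diluent mole fractions) is assumed here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Synthetic calibration data (illustrative ranges and constants).
S = rng.uniform(390.0, 420.0, n)        # speed of sound, m/s
xCO2 = rng.uniform(0.0, 0.03, n)        # carbon dioxide mole fraction
xN2 = rng.uniform(0.0, 0.05, n)         # nitrogen mole fraction
c_true = np.array([5.0, 0.08, -40.0, -25.0])
H = c_true[0] + c_true[1] * S + c_true[2] * xCO2 + c_true[3] * xN2

# Regression analysis recovers the constant values from calibration data...
A = np.column_stack([np.ones(n), S, xCO2, xN2])
c, *_ = np.linalg.lstsq(A, H, rcond=None)

# ...and the model equation is then solved for the nitrogen concentration.
def nitrogen_fraction(H_meas, S_meas, xCO2_meas):
    return (H_meas - c[0] - c[1] * S_meas - c[2] * xCO2_meas) / c[3]
```

This mirrors the patent's workflow: fit constants once from known mixtures, then invert the model equation for an unknown gas whose other diluent concentrations are known.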
A fuzzy model for assessing risk of occupational safety in the processing industry.
Tadic, Danijela; Djapan, Marko; Misita, Mirjana; Stefanovic, Miladin; Milanovic, Dragan D
2012-01-01
Managing occupational safety in any kind of industry, especially in processing, is very important and complex. This paper develops a new method for occupational risk assessment in the presence of uncertainties. Uncertain values of hazardous factors and consequence frequencies are described with linguistic expressions defined by a safety management team. They are modeled with fuzzy sets. Consequence severities depend on current hazardous factors, and their values are calculated with the proposed procedure. The proposed model is tested with real-life data from fruit processing firms in Central Serbia.
Conserved charge fluctuations at vanishing and non-vanishing chemical potential
NASA Astrophysics Data System (ADS)
Karsch, Frithjof
2017-11-01
Up to 6th order cumulants of fluctuations of net baryon-number, net electric charge and net strangeness as well as correlations among these conserved charge fluctuations are now being calculated in lattice QCD. These cumulants provide a wealth of information on the properties of strong-interaction matter in the transition region from the low temperature hadronic phase to the quark-gluon plasma phase. They can be used to quantify deviations from hadron resonance gas (HRG) model calculations which frequently are used to determine thermal conditions realized in heavy ion collision experiments. Already some second order cumulants like the correlations between net baryon-number and net strangeness or net electric charge differ significantly at temperatures above 155 MeV in QCD and HRG model calculations. We show that these differences increase at non-zero baryon chemical potential constraining the applicability range of HRG model calculations to even smaller values of the temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Z.; Klann, R. T.; Nuclear Engineering Division
2007-08-03
An initial series of calculations of the reactivity worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. Corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant, and the vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
The effect of the hot oxygen corona on the interaction of the solar wind with Venus
NASA Technical Reports Server (NTRS)
Belotserkovskii, O. M.; Mitnitskii, V. IA.; Breus, T. K.; Krymskii, A. M.; Nagy, A. F.
1987-01-01
A numerical gasdynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gasdynamic model that includes the effects of mass loading can be used to predict these parameters.
Modeling the surface evapotranspiration over the southern Great Plains
NASA Technical Reports Server (NTRS)
Liljegren, J. C.; Doran, J. C.; Hubbe, J. M.; Shaw, W. J.; Zhong, S.; Collatz, G. J.; Cook, D. R.; Hart, R. L.
1996-01-01
We have developed a method to apply the Simple Biosphere Model of Sellers et al to calculate the surface fluxes of sensible heat and water vapor at high spatial resolution over the domain of the US DOE's Cloud and Radiation Testbed (CART) in Kansas and Oklahoma. The CART, which is within the GCIP area of interest for the Mississippi River Basin, is an extensively instrumented facility operated as part of the DOE's Atmospheric Radiation Measurement (ARM) program. Flux values calculated with our method will be used to provide lower boundary conditions for numerical models to study the atmosphere over the CART domain.
The effect of the hot oxygen corona on the interaction of the solar wind with Venus
NASA Astrophysics Data System (ADS)
Belotserkovskii, O. M.; Breus, T. K.; Krymskii, A. M.; Mitnitskii, V. Ya.; Nagey, A. F.; Gombosi, T. I.
1987-05-01
A numerical gas dynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gas dynamic model that includes the effects of mass loading can be used to predict these parameters.
Hydrogen interaction with ferrite/cementite interface: ab initio calculations and thermodynamics
NASA Astrophysics Data System (ADS)
Mirzoev, A. A.; Verkhovykh, A. V.; Okishev, K. Yu.; Mirzaev, D. A.
2018-02-01
The paper presents the results of ab initio modelling of the interaction of hydrogen atoms with ferrite/cementite interfaces in steels and a thermodynamic assessment of the ability of such interfaces to trap hydrogen atoms. Modelling was performed using density functional theory with the generalised gradient approximation (GGA'96), as implemented in the WIEN2k package. An Isaichev-type orientation relationship between the two phases was adopted, with a habit plane (101)c ∥ (112)α. The supercell contained 64 atoms (56 Fe and 8 C). The calculated formation energy of the ferrite/cementite interface was 0.594 J/m2. The calculated trapping energy was 0.18 eV at a cementite interstitial site and 0.30 eV at the ferrite/cementite interface. When the calculated zero-point energies are taken into account, the trapping energies at the cementite interstitial and the ferrite/cementite interface become 0.26 eV and 0.39 eV, respectively. These values are close to other researchers' data. The results were used to construct a thermodynamic description of the interaction between hydrogen and the ferrite/cementite interface. Absorption calculations using the obtained trapping energy values showed that even a thin lamellar ferrite/cementite mixture with an interlamellar spacing smaller than 0.1 μm has noticeable hydrogen trapping ability at temperatures below 400 K.
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Beelen, Rob M. J.; de Bakker, Merijn P.; Karssenberg, Derek
2015-04-01
Constructing spatio-temporal numerical models to support risk assessment, such as assessing human exposure to air pollution, often requires the integration of field-based and agent-based modelling approaches. Continuous environmental variables such as air pollution are best represented using the field-based approach, which considers phenomena as continuous fields having attribute values at all locations. When calculating human exposure to such pollutants it is, however, preferable to consider the population as a set of individuals, each with a particular activity pattern. This makes it possible to account for the spatio-temporal variation in a pollutant along the space-time paths travelled by individuals, determined, for example, by home and work locations, the road network, and travel times. Modelling this activity pattern requires an agent-based or individual-based modelling approach. In general, field- and agent-based models are constructed with separate software tools, while the two approaches should interoperate and preferably be combined into one modelling framework, allowing efficient and effective implementation of models by domain specialists. To overcome this lack of integrated modelling frameworks, we aim at the development of concepts and software for an integrated field-based and agent-based modelling framework. Concepts merging field- and agent-based modelling were implemented by extending PCRaster (http://www.pcraster.eu), a field-based modelling library implemented in C++, with components for 1) representation of discrete, mobile agents, 2) spatial networks and algorithms, by integrating the NetworkX library (http://networkx.github.io), allowing, for example, calculation of shortest routes or total transport costs between locations, and 3) functions for field-network interactions, allowing field-based attribute values (such as aggregated or averaged concentration values) to be assigned to networks, e.g. as edge weights. We demonstrate the approach using six land use regression (LUR) models developed in the ESCAPE (European Study of Cohorts for Air Pollution Effects) project. These models calculate several air pollutants (e.g. NO2, NOx, PM2.5) for the entire Netherlands at a high (5 m) resolution. Using these air pollution maps, we compare the exposure of individuals calculated at the x, y locations of their home and work place, and aggregated over the close surroundings of these locations. In addition, total exposure is accumulated over daily activity patterns, summing exposure at home, at the work place, and while travelling between home and workplace by routing individuals over the Dutch road network using the shortest route. Finally, we illustrate how routes can be calculated with minimum total exposure instead of shortest distance.
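The routing idea, choosing between a shortest-distance route and a minimum-exposure route, can be sketched with a toy graph. This is not the PCRaster/NetworkX implementation; the tiny network, the concentrations, and the exposure weight (length × concentration) are illustrative assumptions.

```python
import heapq

# node: [(neighbour, edge length in m, pollutant concentration on the edge)]
edges = {
    "home": [("a", 100, 50.0), ("b", 300, 10.0)],
    "a":    [("work", 100, 50.0)],
    "b":    [("work", 300, 10.0)],
    "work": [],
}

def dijkstra(source, target, weight):
    """Least-cost path under an arbitrary per-edge weight function."""
    dist, queue = {source: 0.0}, [(0.0, source, [source])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == target:
            return d, path
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, length, conc in edges[node]:
            nd = d + weight(length, conc)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr, path + [nbr]))
    return float("inf"), []

_, shortest = dijkstra("home", "work", lambda L, c: L)        # shortest distance
_, cleanest = dijkstra("home", "work", lambda L, c: L * c)    # minimum exposure
```

Here the short route passes through a polluted corridor, so the minimum-exposure route takes the longer, cleaner detour; exactly the trade-off the abstract describes.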
An Analytical Approach to Obtaining JWL Parameters from Cylinder Tests
NASA Astrophysics Data System (ADS)
Sutton, Ben; Ferguson, James
2015-06-01
An analytical method for determining parameters for the JWL equation of state (EoS) from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated parameters and pressure-volume (p-V) curves agree with those produced by hydro-code modelling. The calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-V curve. The calculated and model energies are 8.64 GPa and 8.76 GPa respectively.
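The JWL isentrope evaluation and the energy-by-integration step can be sketched as follows. The parameters below are a commonly quoted literature set for TNT, used purely for illustration; they are not the EDC37 fit from the paper.

```python
import numpy as np

# JWL principal isentrope: p(V) = A e^{-R1 V} + B e^{-R2 V} + C V^{-(1+omega)},
# with V the relative volume. Illustrative TNT-like parameter set:
A, B, C = 371.2, 3.231, 1.045          # GPa
R1, R2, omega = 4.15, 0.95, 0.30

def p_jwl(V):
    """Pressure (GPa) on the isentrope at relative volume V."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** -(1.0 + omega)

# Energy per unit initial volume (GPa == kJ/cm^3): area under the p-V curve
# out to the commonly reported expansion of 7 relative volumes.
V = np.linspace(1.0, 7.0, 2001)
p = p_jwl(V)
energy = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V)))   # trapezoid rule
```

With a fitted parameter set, the same two lines of post-processing give the p-V curve and expansion energy that the paper compares against hydro-code values.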
Calibration of Predictor Models Using Multiple Validation Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This paper presents a framework for calibrating computational models using data from several, possibly dissimilar, validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead it casts the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications that depend on the same parameters beyond the validation domain.
The effect of the pulse repetition rate on the fast ionization wave discharge
NASA Astrophysics Data System (ADS)
Huang, Bang-Dou; Carbone, Emile; Takashima, Keisuke; Zhu, Xi-Ming; Czarnetzki, Uwe; Pu, Yi-Kang
2018-06-01
The effect of the pulse repetition rate (PRR) on the generation of high energy electrons in a fast ionization wave (FIW) discharge is investigated by both experiment and modelling. The FIW discharge is driven by nanosecond high voltage pulses and is generated in helium at a pressure of 30 mbar. The axial electric field (E z ), as the driving force of high energy electron generation, is strongly influenced by the PRR. Both the measurement and the model show that, during the breakdown, the peak value of E z decreases with the PRR, while after the breakdown, the value of E z increases with the PRR. The electron energy distribution function (EEDF) is calculated with a model similar to Boeuf and Pitchford (1995 Phys. Rev. E 51 1376). It is found that, at a low value of PRR, the EEDF during the breakdown is strongly non-Maxwellian with an elevated high energy tail, while the EEDF after the breakdown is also non-Maxwellian but with a much depleted population of high energy electrons. At a high value of PRR, however, the EEDF is Maxwellian-like without much temporal variation both during and after the breakdown. With the calculated EEDF, the temporal evolution of the population of helium excited species given by the model is in good agreement with the measured optical emission, which also depends critically on the shape of the EEDF.
Oki, Delwyn S.; Meyer, William
2001-01-01
Comparisons were made between model-calculated water levels from a one-dimensional analytical model referred to as RAM (Robust Analytical Model) and those from numerical ground-water flow models using a sharp-interface model code. RAM incorporates the horizontal-flow assumption and the Ghyben-Herzberg relation to represent flow in a one-dimensional unconfined aquifer that contains a body of freshwater floating on denser saltwater. RAM does not account for the presence of a low-permeability coastal confining unit (caprock), which impedes the discharge of fresh ground water from the aquifer to the ocean, nor for the spatial distribution of ground-water withdrawals from wells, which is significant because water-level declines are greatest in the vicinity of withdrawal wells. Numerical ground-water flow models can readily account for discharge through a coastal confining unit and for the spatial distribution of ground-water withdrawals from wells. For a given aquifer hydraulic-conductivity value, recharge rate, and withdrawal rate, model-calculated steady-state water-level declines from RAM can be significantly less than those from numerical ground-water flow models. The differences between model-calculated water-level declines from RAM and those from numerical models are partly dependent on the hydraulic properties of the aquifer system and the spatial distribution of ground-water withdrawals from wells. RAM invariably predicts the greatest water-level declines at the inland extent of the aquifer where the freshwater body is thickest and the potential for saltwater intrusion is lowest. For cases in which a low-permeability confining unit overlies the aquifer near the coast, however, water-level declines calculated from numerical models may exceed those from RAM even at the inland extent of the aquifer. Since 1990, RAM has been used by the State of Hawaii Commission on Water Resource Management for establishing sustainable-yield values for the State's aquifers.
Data from the Iao aquifer, which lies on the northeastern flank of the West Maui Volcano and which is confined near the coast by caprock, are now available to evaluate the predictive capability of RAM for this system. In 1995 and 1996, withdrawal from the Iao aquifer reached the 20 million gallon per day sustainable-yield value derived using RAM. However, even before 1996, water levels in the aquifer had declined significantly below those predicted by RAM, and continued to decline in 1997. To halt the decline of water levels and to preclude the intrusion of salt-water into the four major well fields in the aquifer, it was necessary to reduce withdrawal from the aquifer system below the sustainable-yield value derived using RAM. In the Iao aquifer, the decline of measured water levels below those predicted by RAM is consistent with the results of the numerical model analysis. Relative to model-calculated water-level declines from numerical ground-water flow models, (1) RAM underestimates water-level declines in areas where a low-permeability confining unit exists, and (2) RAM underestimates water-level declines in the vicinity of withdrawal wells.
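The Ghyben-Herzberg relation at the core of RAM can be stated in a few lines. This is the textbook relation only, not RAM itself (which additionally imposes the horizontal-flow assumption and a water balance).

```python
# Ghyben-Herzberg: hydrostatic balance between fresh and salt water puts the
# freshwater/saltwater interface below sea level at h * rho_f / (rho_s - rho_f).
# With rho_f = 1000 and rho_s = 1025 kg/m^3, that factor is 40.

RHO_FRESH, RHO_SALT = 1000.0, 1025.0

def interface_depth(head_m):
    """Depth (m below sea level) of the interface for a water-table head
    `head_m` (m above sea level)."""
    return head_m * RHO_FRESH / (RHO_SALT - RHO_FRESH)

# A 0.5 m head decline (2.0 m -> 1.5 m) thins the freshwater lens by about
# 20 m from below, which is why the modest water-level declines observed in
# the Iao aquifer were a warning sign for saltwater intrusion at the wells.
rise = interface_depth(2.0) - interface_depth(1.5)
```

The 40:1 leverage between head and interface depth is what makes measured water-level declines such a sensitive indicator in sharp-interface models of this kind.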
Maximum Mass of Hybrid Stars in the Quark Bag Model
NASA Astrophysics Data System (ADS)
Alaverdyan, G. B.; Vartanyan, Yu. L.
2017-12-01
The effect of the quark-matter equation-of-state parameters on the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model, including corrections for one-gluon exchange. For nucleon matter, in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account, based on the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first-order phase transition, and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. For a fixed αs, the maximum mass of a hybrid star increases as the bag constant B decreases; for a given value of B, the maximum mass rises as αs increases. Hybrid-star configurations with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
NASA Astrophysics Data System (ADS)
Fedoseev, V. N.; Pisarevsky, M. I.; Balberkina, Y. N.
2018-01-01
This paper establishes an interconnection between the dynamic and average flow rates of the coolant in a channel of complex geometry, which serves as the basis for a generalized model of experimental data on heat transfer in various porous structures. Formulas for calculating the heat transfer of fuel rods in transverse fluid flow are obtained using this model. It is shown that the model describes a limiting case of separated flows in twisting channels, where the coolant constantly changes its flow direction and mixes with high intensity in the communicating channels. It is suggested that the dynamic velocity be identified from the pumping power. The coefficient of proportionality in the general case depends on the geometry of the channel and the Reynolds number (Re). A calculation formula for the coefficient of proportionality for narrow in-line rod bundles is provided. The paper presents a comparison of experimental data and calculated values, which demonstrates the applicability of the suggested models and calculation formulas.
Alava, Juan José; Ross, Peter S; Gobas, Frank A P C
2016-01-01
Resident killer whale populations in the NE Pacific Ocean are at risk due to the accumulation of pollutants, including polybrominated diphenyl ethers (PBDEs). To assess the impact of PBDEs in water and sediments in killer whale critical habitat, we developed a food web bioaccumulation model. The model was designed to estimate PBDE concentrations in killer whales based on PBDE concentrations in sediments and the water column throughout a lifetime of exposure. Calculated and observed PBDE concentrations exceeded the only toxicity reference value available for PBDEs in marine mammals (1500 μg/kg lipid) in southern resident killer whales but not in northern resident killer whales. Temporal trends (1993-2006) for PBDEs observed in southern resident killer whales showed a doubling time of ≈5 years. If current sediment quality guidelines available in Canada for polychlorinated biphenyls are applied to PBDEs, it can be expected that PBDE concentrations in killer whales will exceed available toxicity reference values by a large margin. Model calculations suggest that a PBDE concentration in sediments of approximately 1.0 μg/kg dw produces PBDE concentrations in resident killer whales that are below the current toxicity reference value for 95 % of the population, with this value serving as a precautionary benchmark for a management-based approach to reducing PBDE health risks to killer whales. The food web bioaccumulation model may be a useful risk management tool in support of regulatory protection for killer whales.
TOPAS/Geant4 configuration for ionization chamber calculations in proton beams
NASA Astrophysics Data System (ADS)
Wulff, Jörg; Baumann, Kilian-Simon; Verbeek, Nico; Bäumer, Christian; Timmermann, Beate; Zink, Klemens
2018-06-01
Monte Carlo (MC) calculations are a fundamental tool for the investigation of ionization chambers (ICs) in radiation fields, and for calculations in the scope of IC reference dosimetry. Geant4, as used for the toolkit TOPAS, is a major general purpose code, generally suitable for investigating ICs in primary proton beams. To provide reliable results, the impact of parameter settings and the limitations of the underlying condensed history (CH) algorithm need to be known. A Fano cavity test was implemented in Geant4 (10.03.p1) for protons, based on the existing version for electrons distributed with the Geant4 release. This self-consistent test allows the calculation to be compared with the expected result for the typical IC-like geometry of an air-filled cavity surrounded by a higher density material. Various user-selectable parameters of the CH implementation in the EMStandardOpt4 physics-list were tested for incident proton energies between 30 and 250 MeV. Using TOPAS (3.1.p1) the influence of production cuts was investigated for bare air-cavities in water, irradiated by primary protons. Detailed IC geometries for an NACP-02 plane-parallel chamber and an NE2571 Farmer-chamber were created. The overall factor f Q as a ratio between the dose-to-water and dose to the sensitive air-volume was calculated for incident proton energies between 70 and 250 MeV. The Fano test demonstrated the EMStandardOpt4 physics-list with the WentzelIV multiple scattering model as appropriate for IC calculations. If protons start perpendicular to the air cavity, no further step-size limitations are required to pass the test within 0.1%. For an isotropic source, limitations of the maximum step length within the air cavity and its surrounding as well as a limitation of the maximum fractional energy loss per step were required to pass within 0.2%. A production cut of ⩽5 μm or ∼15 keV for all particles yielded a constant result for f Q of bare air-filled cavities. 
The overall factor f Q for the detailed NACP-02 and NE2571 chamber models calculated with TOPAS agreed with the values of Gomà et al (2016 Phys. Med. Biol. 61 2389) within statistical uncertainties (1σ) of <0.3% for almost all energies with a maximum deviation of 0.6% at 250 MeV for the NE2571. The selection of hadronic scattering models (QGSP_BIC versus QGSP_BERT) in TOPAS impacted the results at the highest energies by 0.3% ± 0.1%. Based on the Fano cavity test, the Geant4/TOPAS Monte Carlo code, in its investigated version, can provide reliable results for IC calculations. Agreement with the detailed IC models and the published values of Gomà et al can be achieved when production cuts are reduced from the TOPAS default values. The calculations confirm the reported agreement of Gomà et al for with IAEA-TRS398 values within the given uncertainties. An additional uncertainty for the MC-calculated of ∼0.3% by hadronic interaction models should be considered.
TOPAS/Geant4 configuration for ionization chamber calculations in proton beams.
Wulff, Jörg; Baumann, Kilian-Simon; Verbeek, Nico; Bäumer, Christian; Timmermann, Beate; Zink, Klemens
2018-06-07
Monte Carlo (MC) calculations are a fundamental tool for the investigation of ionization chambers (ICs) in radiation fields, and for calculations in the scope of IC reference dosimetry. Geant4, as used for the toolkit TOPAS, is a major general purpose code, generally suitable for investigating ICs in primary proton beams. To provide reliable results, the impact of parameter settings and the limitations of the underlying condensed history (CH) algorithm need to be known. A Fano cavity test was implemented in Geant4 (10.03.p1) for protons, based on the existing version for electrons distributed with the Geant4 release. This self-consistent test allows the calculation to be compared with the expected result for the typical IC-like geometry of an air-filled cavity surrounded by a higher density material. Various user-selectable parameters of the CH implementation in the EMStandardOpt4 physics-list were tested for incident proton energies between 30 and 250 MeV. Using TOPAS (3.1.p1) the influence of production cuts was investigated for bare air-cavities in water, irradiated by primary protons. Detailed IC geometries for an NACP-02 plane-parallel chamber and an NE2571 Farmer-chamber were created. The overall factor f Q as a ratio between the dose-to-water and dose to the sensitive air-volume was calculated for incident proton energies between 70 and 250 MeV. The Fano test demonstrated the EMStandardOpt4 physics-list with the WentzelIV multiple scattering model as appropriate for IC calculations. If protons start perpendicular to the air cavity, no further step-size limitations are required to pass the test within 0.1%. For an isotropic source, limitations of the maximum step length within the air cavity and its surrounding as well as a limitation of the maximum fractional energy loss per step were required to pass within 0.2%. A production cut of ⩽5 μm or ∼15 keV for all particles yielded a constant result for f Q of bare air-filled cavities. 
The overall factor f Q for the detailed NACP-02 and NE2571 chamber models calculated with TOPAS agreed with the values of Gomà et al (2016 Phys. Med. Biol. 61 2389) within statistical uncertainties (1σ) of <0.3% for almost all energies with a maximum deviation of 0.6% at 250 MeV for the NE2571. The selection of hadronic scattering models (QGSP_BIC versus QGSP_BERT) in TOPAS impacted the results at the highest energies by 0.3% ± 0.1%. Based on the Fano cavity test, the Geant4/TOPAS Monte Carlo code, in its investigated version, can provide reliable results for IC calculations. Agreement with the detailed IC models and the published values of Gomà et al can be achieved when production cuts are reduced from the TOPAS default values. The calculations confirm the reported agreement of Gomà et al for [Formula: see text] with IAEA-TRS398 values within the given uncertainties. An additional uncertainty for the MC-calculated [Formula: see text] of ∼0.3% by hadronic interaction models should be considered.
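The central quantity above, the overall factor f_Q, is a ratio of dose-to-water to dose in the sensitive air volume, compared between codes within 1σ statistical uncertainties. A minimal sketch of that ratio with its propagated uncertainty (the numbers are illustrative, not values from the paper):

```python
import math

def f_q(dose_water, u_water, dose_air, u_air):
    """Ratio f_Q = D_w / D_air; relative uncertainties of the two Monte Carlo
    dose estimates are combined in quadrature (uncorrelated tallies assumed)."""
    ratio = dose_water / dose_air
    rel_u = math.sqrt((u_water / dose_water) ** 2 + (u_air / dose_air) ** 2)
    return ratio, ratio * rel_u

# illustrative tallies with ~0.1% statistical uncertainty each
fq, u = f_q(1.115, 0.001, 1.0, 0.001)
```

Two results then "agree within 1σ" when their difference is smaller than the quadrature sum of their individual uncertainties.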
Orbital stability close to asteroid 624 Hektor using the polyhedral model
NASA Astrophysics Data System (ADS)
Jiang, Yu; Baoyin, Hexi; Li, Hengnian
2018-03-01
We investigate the orbital stability close to the unique L4-point Jupiter binary Trojan asteroid 624 Hektor. The gravitational potential of 624 Hektor is calculated using the polyhedron model with observational data comprising 2038 faces and 1021 vertices. Previous studies have presented three different density values for 624 Hektor. The equilibrium points in the gravitational potential of 624 Hektor have been studied in detail for each density value. There are five equilibrium points in the gravitational potential of 624 Hektor regardless of the density value. The positions, Jacobi integral, eigenvalues, topological cases, and stability of the equilibrium points, as well as their Hessian matrices, are investigated. For the three density values the number, topological cases, and stability of the equilibrium points are the same. However, the positions of the equilibrium points vary with the density of the asteroid: the outer equilibrium points move away from the asteroid's mass center as the density increases, while the inner equilibrium point moves toward it. There exist unstable periodic orbits near the surface of 624 Hektor. We calculated an orbit near the primary's equatorial plane of this binary Trojan asteroid; the results indicate that the orbit remains stable after 28.8375 d.
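The stability classification of an equilibrium point rests on the eigenvalues of the equations of motion linearized about that point. A generic spectral test (a sketch only; the matrices are toy examples, not derived from Hektor's potential) can be written as:

```python
import numpy as np

def linearly_stable(A, tol=1e-9):
    """An equilibrium of dx/dt = A x is linearly (spectrally) stable when no
    eigenvalue of A has a positive real part; purely imaginary eigenvalues
    give the oscillatory (centre-type) case."""
    return bool(np.all(np.linalg.eigvals(A).real <= tol))

# toy 2x2 blocks: a centre (purely imaginary pair) and a saddle
centre = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +/- i
saddle = np.array([[0.0, 1.0], [1.0, 0.0]])    # eigenvalues +/- 1
```

In the full problem the matrix is built from the Hessian of the effective potential at each equilibrium point, which is why the paper reports both quantities together.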
NASA Technical Reports Server (NTRS)
Gibbons, D. E.; Richard, R. R.
1979-01-01
The methods used to calculate the sensitivity parameter noise equivalent reflectance of a remote-sensing scanner are explored, and the results are compared with values measured over calibrated test sites. Data were acquired on four occasions covering a span of 4 years and providing various atmospheric conditions. One of the calculated values was based on assumed atmospheric conditions, whereas two others were based on atmospheric models. Results indicate that the assumed atmospheric conditions provide useful answers adequate for many purposes. A nomograph was developed to indicate sensitivity variations due to geographic location, time of day, and season.
40 CFR 600.203-77 - Abbreviations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.203-77 Abbreviations. The...
40 CFR 600.202-77 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.202-77 Definitions. The...
40 CFR 600.205-77 - Recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.205-77 Recordkeeping. The...
Predicting charmonium and bottomonium spectra with a quark harmonic oscillator
NASA Technical Reports Server (NTRS)
Norbury, J. W.; Badavi, F. F.; Townsend, L. W.
1986-01-01
The nonrelativistic quark model is applied to heavy (nonrelativistic) two-body meson systems to obtain sufficiently accurate predictions of the spin-averaged mass levels of the charmonium and bottomonium spectra, using the three-dimensional harmonic oscillator as an example. The present calculations do not include any spin dependence; rather, mass values are averaged over different spins. Results for a charmed quark mass of 1500 MeV/c^2 show that the simple harmonic oscillator model provides good agreement with experimental values for the 3P states and adequate agreement for the 3S1 states.
Stroganov, Oleg V; Novikov, Fedor N; Zeifman, Alexey A; Stroylov, Viktor S; Chilov, Ghermes G
2011-09-01
A new graph-theoretical approach called thermodynamic sampling of amino acid residues (TSAR) has been elaborated to explicitly account for protein side chain flexibility in modeling conformation-dependent protein properties. In TSAR, a protein is viewed as a graph whose nodes correspond to structurally independent groups and whose edges connect the interacting groups. Each node has a set of states describing the conformation and ionization of the group, and each edge is assigned an array of pairwise interaction potentials between the adjacent groups. By treating the obtained graph as a belief network (a well-established mathematical abstraction), the partition function of each node is found. In the current work we used TSAR to calculate partition functions of the ionized forms of protein residues. A simplified version of a semi-empirical molecular mechanical scoring function, borrowed from our Lead Finder docking software, was used for the energy calculations. The accuracy of the resulting model was validated on a set of 486 experimentally determined pK(a) values of protein residues. The average correlation coefficient (R) between calculated and experimental pK(a) values was 0.80, ranging from 0.95 (for Tyr) to 0.61 (for Lys). It appeared that the hydrogen bond interactions and the exhaustiveness of side chain sampling made the most significant contributions to the accuracy of the pK(a) calculations. Copyright © 2011 Wiley-Liss, Inc.
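The per-node partition function underlying this approach is a Boltzmann sum over a node's discrete states. A minimal sketch (the two-state energies are hypothetical, not TSAR scoring-function values):

```python
import math

R_KJ = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def state_probabilities(energies_kj, T=298.15):
    """Boltzmann probabilities over a node's discrete states: each state is
    weighted by exp(-E/RT) and normalized by the partition function Z."""
    weights = [math.exp(-e / (R_KJ * T)) for e in energies_kj]
    Z = sum(weights)
    return [w / Z for w in weights]

# hypothetical two-state node: protonated (0 kJ/mol) vs deprotonated (+5.7 kJ/mol)
p = state_probabilities([0.0, 5.7])
```

An energy gap of about 5.7 kJ/mol (= 2.303 RT at 298 K) corresponds to roughly one pK(a) unit, which is how state populations connect to the titration quantities validated in the paper.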
Bryce, Richard; Losada Carreno, Ignacio; Kumler, Andrew; ...
2018-04-05
The interannual variability of solar irradiance and meteorological conditions is often ignored in favor of single-year data sets when modeling power generation and evaluating the economic value of photovoltaic (PV) power systems. Yet interannual variability significantly impacts the year-to-year generation of renewable power systems such as wind and PV. Consequently, the interannual variability of power generation corresponds to the interannual variability of capital returns on investment. The penetration of PV systems within the Hawaiian Electric Companies' portfolio has rapidly accelerated in recent years and is expected to continue to increase given the state's energy objectives laid out by the Hawaii Clean Energy Initiative. We use the National Solar Radiation Database (1998-2015) to characterize the interannual variability of solar irradiance and meteorological conditions across the State of Hawaii. These data sets are passed to the National Renewable Energy Laboratory's System Advisor Model (SAM) to calculate an 18-year PV power generation data set and thereby characterize the variability of PV power generation. We calculate an interannual coefficient of variability (COV) for annual average global horizontal irradiance (GHI) on the order of 2% and a COV for annual capacity factor on the order of 3% across the Hawaiian archipelago. Regarding the interannual variability of seasonal trends, we calculate a COV for monthly average GHI values on the order of 5% and a COV for monthly capacity factor on the order of 10%. We model residential-scale and utility-scale PV systems and calculate the economic returns of each system via the payback period and the net present value. We demonstrate that studies based on single-year data sets reach economic conclusions that deviate from the true values obtained by accounting for interannual variability.
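The interannual coefficient of variability used throughout the abstract is simply the standard deviation of the yearly values divided by their mean. A sketch with hypothetical capacity-factor data (not the Hawaii record):

```python
import statistics

def cov_percent(annual_values):
    """Interannual coefficient of variability: population standard deviation
    over the mean, expressed in percent."""
    return 100.0 * statistics.pstdev(annual_values) / statistics.mean(annual_values)

# hypothetical annual capacity factors (fractions) for part of a multi-year record
cf = [0.195, 0.200, 0.205, 0.198, 0.202, 0.199]
cov = cov_percent(cf)
```

Applied to annual GHI or capacity-factor series, this yields the ~2-3% (annual) and ~5-10% (monthly) figures quoted in the study.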
NASA Astrophysics Data System (ADS)
Dorofeeva, Olga V.; Suchkova, Taisiya A.
2018-04-01
The gas-phase enthalpies of formation of four highly flexible molecules, whose flexibility leads to a large number of low-energy conformers, were calculated with the G4 method to see whether the lowest-energy conformer alone is sufficient to achieve high accuracy in the computed values. The calculated values were in good agreement with experiment, whereas adding the correction for the conformer distribution made the agreement worse. The reason for this effect is the large anharmonicity of the low-frequency torsional motions, which is ignored in the calculation of the ZPVE and thermal enthalpy. It was shown that an approximate anharmonicity correction estimated using a free-rotor model is very similar in magnitude to the conformer correction but opposite in sign, and thus almost fully compensates for it. Therefore, the common practice of adding only the conformer correction is not without problems.
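The conformer correction discussed here is the Boltzmann-weighted average energy above the global-minimum conformer. A minimal sketch (the relative energies are a hypothetical ladder, not the paper's G4 conformers):

```python
import math

R = 8.31446e-3  # gas constant, kJ mol^-1 K^-1

def conformer_correction(rel_energies_kj, T=298.15):
    """Boltzmann-averaged energy above the lowest conformer:
    dH_conf = sum_i p_i * dE_i with p_i proportional to exp(-dE_i / RT)."""
    w = [math.exp(-e / (R * T)) for e in rel_energies_kj]
    Z = sum(w)
    return sum(wi * e for wi, e in zip(w, rel_energies_kj)) / Z

# hypothetical conformer energies (kJ/mol above the global minimum)
dH = conformer_correction([0.0, 1.0, 2.5, 4.0])
```

The paper's point is that a free-rotor anharmonicity correction of similar magnitude but opposite sign largely cancels this term, so adding the conformer correction alone can worsen agreement with experiment.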
Ahmadi, Hamed; Rodehutscord, Markus
2017-01-01
In the nutrition literature, there are several reports on the use of artificial neural network (ANN) and multiple linear regression (MLR) approaches for predicting feed composition and nutritive value, while the use of the support vector machine (SVM) method as a new alternative to MLR and ANN models is still not fully investigated. MLR, ANN, and SVM models were developed to predict the metabolizable energy (ME) content of compound feeds for pigs, based on the German energy evaluation system, from analyzed contents of crude protein (CP), ether extract (EE), crude fiber (CF), and starch. A total of 290 datasets from standardized digestibility studies with compound feeds was provided by several institutions and published papers, and ME was calculated thereon. The accuracy and precision of the developed models were evaluated on the basis of their prediction values. The results revealed that the developed ANN [R² = 0.95; root mean square error (RMSE) = 0.19 MJ/kg of dry matter] and SVM (R² = 0.95; RMSE = 0.21 MJ/kg of dry matter) models produced better predictions of ME in compound feed than the conventional MLR model (R² = 0.89; RMSE = 0.27 MJ/kg of dry matter); however, there were no obvious differences between the performance of the ANN and SVM models. Thus, the SVM model may also be considered a promising tool for modeling the relationship between chemical composition and ME of compound feeds for pigs. To provide readers and nutritionists with an easy and rapid tool, an Excel® calculator, SVM_ME_pig, was created to predict metabolizable energy values in compound feeds for pigs using the developed support vector machine model.
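The MLR baseline amounts to ordinary least squares on the four proximate-analysis predictors. A sketch on synthetic data (the coefficients and ranges below are illustrative, not the paper's fitted German-energy-system values):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic proximate analysis (g/kg DM): CP, EE, CF, starch
X = rng.uniform([120, 20, 30, 350], [220, 80, 90, 550], size=(50, 4))
true_beta = np.array([0.021, 0.034, -0.014, 0.016])        # illustrative only
y = 1.5 + X @ true_beta + rng.normal(0.0, 0.1, size=50)    # ME, MJ/kg DM + noise

A = np.column_stack([np.ones(len(X)), X])          # intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)       # ordinary least squares
rmse = float(np.sqrt(np.mean((A @ beta - y) ** 2)))
```

The study's comparison then swaps this linear map for an ANN or an SVM regressor fitted to the same inputs and scores each by R² and RMSE.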
The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.
ERIC Educational Resources Information Center
Gilbert, R.
1979-01-01
Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)
[Influence of trabecular microstructure modeling on finite element analysis of dental implant].
Shen, M J; Wang, G G; Zhu, X H; Ding, X
2016-09-01
To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface with a three-dimensional finite element mandible model of trabecular structure. Dental implants were embedded in the mandible of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models, one with trabecular microstructure (precise model) and one with macrostructure only (simplified model), were built. The stress and strain values at the implant-bone interface were calculated using Ansys 14.0. Compared with the simplified model, the average interface stress values of the precise models increased markedly, while the maximum values did not change greatly. The maximum equivalent stress values of the precise models were 80% and 110% of the simplified model, and the average values were 170% and 290% of the simplified model. The maximum and average equivalent strain values of the precise models were clearly decreased: the maximum equivalent strain values were 17% and 26% of the simplified model, and the average values were 21% and 16% of the simplified model, respectively. Stress and strain concentrations at the implant-bone interface were obvious in the simplified model, whereas the stress and strain distributions were uniform in the precise model. The precise model has a significant effect on the distribution of stress and strain at the implant-bone interface.
A modified microdosimetric kinetic model for relative biological effectiveness calculation
NASA Astrophysics Data System (ADS)
Chen, Yizheng; Li, Junli; Li, Chunyan; Qiu, Rui; Wu, Zhen
2018-01-01
In heavy ion therapy, not only the distribution of physical absorbed dose but also the relative biological effectiveness (RBE)-weighted dose needs to be taken into account. The microdosimetric kinetic model (MKM) can predict the RBE value of heavy ions from the saturation-corrected dose-mean specific energy, and it has been used in clinical treatment planning at the National Institute of Radiological Sciences. In its theoretical assumptions, the MKM takes the yield of the primary lesion to be independent of radiation quality, while experimental data show that the DNA double strand break (DSB) yield, considered the main primary lesion, depends on the LET of the particle. Moreover, as a result of this assumption the β parameter of the MKM is constant with LET, which also differs from the experimental findings. In this study, a modified MKM, named MMKM, was developed. Based on the experimental DSB yields of mammalian cells irradiated with ions of different LETs, an RBEDSB (RBE for the induction of DSB)-LET curve was fitted as a correction factor to modify the primary lesion yield in the MKM, so that the variation of the primary lesion yield with LET is accounted for in the MMKM. Compared with the present MKM, not only does the α parameter of the MMKM for mono-energetic ions agree with the experimental data, but the β parameter also varies with LET, and the trend of the experimental results is reproduced on the whole. A spread-out Bragg peak (SOBP) physical dose distribution for carbon ions was then simulated with the Geant4 Monte Carlo code, and the biological and clinical dose distributions were calculated. The results show that the clinical dose distribution calculated with the MMKM is close to that of the MKM within the SOBP, while the discrepancies before and after the SOBP are both within 10%.
Moreover, the MKM may overestimate the clinical dose at the distal end of the SOBP by more than 5% because of its constant β value, whereas the MMKM yields a minimal β value at this position. In addition, the discrepancy in the averaged cell survival fraction in the SOBP calculated with the two models exceeds 15% at the high dose level. The MMKM may provide a reference for the accurate calculation of RBE values in heavy ion therapy.
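Both models ultimately feed α and β into the linear-quadratic survival relation S = exp(-(αD + βD²)), from which RBE at equal survival follows. A generic sketch of that last step (the parameter values are illustrative, not MKM- or MMKM-derived):

```python
import math

def rbe_at_survival(alpha_ion, beta_ion, alpha_x, beta_x, dose_ion):
    """RBE = D_x / D_ion at equal cell survival, with the linear-quadratic
    model S = exp(-(alpha*D + beta*D**2)) for both radiation qualities."""
    ln_s = -(alpha_ion * dose_ion + beta_ion * dose_ion ** 2)
    # solve beta_x*D^2 + alpha_x*D + ln_s = 0 for the reference (photon) dose
    d_x = (-alpha_x + math.sqrt(alpha_x ** 2 - 4.0 * beta_x * ln_s)) / (2.0 * beta_x)
    return d_x / dose_ion

# illustrative LQ parameters (Gy^-1, Gy^-2) for an ion and a photon reference
rbe = rbe_at_survival(alpha_ion=0.8, beta_ion=0.05,
                      alpha_x=0.2, beta_x=0.05, dose_ion=2.0)
```

Because β enters the quadratic term, the constant-β assumption of the MKM directly shifts the predicted clinical dose at the distal SOBP edge, which is the discrepancy the MMKM addresses.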
One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis
NASA Technical Reports Server (NTRS)
Farhat, Hamidullah
1990-01-01
Attempts are made to model the transport problem for heavy ion beams in various targets, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interaction with matter. An energy independent transport model, with the most simplified assumptions and proper parameters, is presented. The first and essential assumption in this case (energy independent transport) is the high energy characterization of the incident beam. The energy independent equation is solved and applied to high energy neon (Ne-20) and iron (Fe-56) beams in water. The analytical solution is compared to a numerical solution to determine the accuracy of the model. The lower limit energies for neon and iron to qualify as high energy beams are calculated according to Barkas and Berger theory with the LBLFRG computer program. The calculated values for the areal density range of interest (50 g/sq cm of water) are 833.43 MeV/nucleon for neon and 1597.68 MeV/nucleon for iron. The analytical solution of the energy independent transport equation gives the flux from the different collision terms. The fluxes of the individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The flux values are calculated with the ANASTP computer code.
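The leading (zero-collision) term of such a straight-ahead, energy-independent transport solution is exponential attenuation of the primary beam with areal density. A sketch (the cross-section value is hypothetical, not from the thesis):

```python
import math

def primary_flux(depth_g_cm2, sigma_cm2_g, flux0=1.0):
    """Uncollided (primary) flux after a slab of areal density x:
    phi(x) = phi0 * exp(-sigma * x), with sigma the macroscopic
    interaction cross-section per unit areal density."""
    return flux0 * math.exp(-sigma_cm2_g * depth_g_cm2)

# hypothetical cross-section for a heavy ion in water, evaluated at 50 g/cm^2
phi = primary_flux(depth_g_cm2=50.0, sigma_cm2_g=0.02)
```

The successive collision terms of the analytical series then describe secondaries produced along the way, each attenuated in the same manner.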
Highly ionized atoms in cooling gas
NASA Technical Reports Server (NTRS)
Edgar, R. J.; Chevalier, R. A.
1986-01-01
The ionization of low density gas cooling from a high temperature was calculated. The evolution during the cooling is assumed to be isochoric, isobaric, or a combination of these cases. The calculations are used to predict the column densities and ultraviolet line luminosities of highly ionized atoms in cooling gas. In a model for cooling of a hot galactic corona, it is shown that the observed value of N(N V) can be produced in the cooling gas, while the predicted value of N(Si IV) falls short of the observed value by a factor of about 5. The same model predicts fluxes of ultraviolet emission lines that are a factor of 10 lower than the claimed detections of Feldman, Brune, and Henry. Predictions are made for ultraviolet lines in cooling flows in early-type galaxies and clusters of galaxies. It is shown that the column densities of interest vary over a fairly narrow range, while the emission line luminosities are simply proportional to the mass inflow rate.
Electrophoretic Study of the SnO2/Aqueous Solution Interface up to 260 degrees C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez-Santiago, V; Fedkin, Mark V.; Wesolowski, David J
2009-01-01
An electrophoresis cell developed in our laboratory was utilized to determine the zeta potential at the SnO2 (cassiterite)/aqueous solution (10^-3 mol kg^-1 NaCl) interface over the temperature range from 25 to 260 °C. Experimental techniques and methods for the calculation of zeta potential at elevated temperature are described. From the obtained zeta potential data as a function of pH, the isoelectric points (IEPs) of SnO2 were obtained for the first time. From these IEP values, the standard thermodynamic functions were calculated for the protonation-deprotonation equilibrium at the SnO2 surface, using the 1-pK surface complexation model. It was found that the IEP values for SnO2 decrease with increasing temperature; this behavior was compared to the values predicted by the multisite complexation (MUSIC) model and other semitheoretical treatments and found to be in excellent agreement.
Occurrence of CPPopt Values in Uncorrelated ICP and ABP Time Series.
Cabeleira, M; Czosnyka, M; Liu, X; Donnelly, J; Smielewski, P
2018-01-01
Optimal cerebral perfusion pressure (CPPopt) is a concept that uses the pressure reactivity index (PRx)-CPP relationship over a given period to find the CPP value at which PRx indicates the best autoregulation. It has been proposed that this relationship be modelled by a U-shaped curve, whose minimum is interpreted as the CPP value corresponding to the strongest autoregulation. Owing to the nature of the calculation and the signals involved, CPPopt curves can be generated by non-physiological variations of intracranial pressure (ICP) and arterial blood pressure (ABP); these are termed here "false positives". Such random occurrences would artificially increase the yield of CPPopt values and decrease the reliability of the methodology. In this work, we studied the probability of the random occurrence of false positives and compared the effect of the parameters used for CPPopt calculation on this probability. To simulate the occurrence of false positives, uncorrelated ICP and ABP time series were generated by destroying the relationship between the waves in real recordings. The CPPopt algorithm was then applied to these new series, and the number of false positives was counted for different values of the algorithm's parameters. The percentage of CPPopt curves generated from uncorrelated data was 11.5%. This value can be minimised by tuning some of the calculation parameters, such as increasing the calculation window and increasing the minimum PRx span accepted on the curve.
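The U-shaped-curve step can be sketched as a parabola fit to binned PRx-versus-CPP values, taking the vertex as CPPopt and rejecting fits that do not open upward (one flavor of false-positive screening; the data below are synthetic):

```python
import numpy as np

def cpp_opt(cpp, prx):
    """Fit the PRx-CPP relationship with a parabola and return the CPP at its
    minimum, or None when the fit is not concave-up (no U-shape present)."""
    a, b, _c = np.polyfit(cpp, prx, 2)
    if a <= 0:              # parabola opens downward: no optimum
        return None
    return -b / (2.0 * a)

# synthetic U-shaped data with a minimum near CPP = 75 mmHg
cpp = np.linspace(55, 95, 9)
prx = 0.001 * (cpp - 75.0) ** 2 - 0.2
opt = cpp_opt(cpp, prx)
```

Applying the same fit to uncorrelated ICP/ABP surrogates, as the paper does, measures how often such a vertex appears by chance.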
Evolution of collectivity in the N =100 isotones near 170Yb
NASA Astrophysics Data System (ADS)
Karayonchev, V.; Régis, J.-M.; Jolie, J.; Blazhev, A.; Altenkirch, R.; Ansari, S.; Dannhoff, M.; Diel, F.; Esmaylzadeh, A.; Fransen, C.; Gerst, R.-B.; Moschner, K.; Müller-Gatermann, C.; Saed-Samii, N.; Stegemann, S.; Warr, N.; Zell, K. O.
2017-03-01
An experiment using the electronic γ-γ fast-timing technique was performed to measure lifetimes of the yrast states in 170Yb. The lifetime of the yrast 2+ state was determined using the slope method. The value of τ = 2.33(3) ns is in good agreement with the lifetimes measured using other techniques. The lifetimes of the first 4+ and 6+ states were determined using the generalized centroid difference method. The derived B(E2) values are compared to calculations done using the confined beta soft model and show good agreement with the experimental values. These calculations were extended to the N = 100 isotonic chain around 170Yb and give a good quantitative description of the collectivity observed along it.
Modeling the performance and cost of lithium-ion batteries for electric-drive vehicles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, P. A.
2011-10-20
This report details the Battery Performance and Cost (BatPaC) model developed at Argonne National Laboratory for lithium-ion battery packs used in automotive transportation. The model designs the battery for a specified power, energy, and type of vehicle battery. The cost of the designed battery is then calculated by accounting for every step in the lithium-ion battery manufacturing process. The assumed annual production level directly affects each process step. The total cost to the original equipment manufacturer calculated by the model includes the materials, manufacturing, and warranty costs for a battery produced in the year 2020 (in 2010 US$). At the time of writing, this is the only publicly available model that performs a bottom-up lithium-ion battery design and cost calculation. Both the model and the report have been publicly peer-reviewed by battery experts assembled by the U.S. Environmental Protection Agency, and this report and the accompanying model include changes made in response to the comments received during the peer review. The purpose of the report is to document the equations and assumptions from which the model has been created. A user of the model will be able to recreate the calculations and, perhaps more importantly, understand the driving forces behind the results. Instructions for use and an illustration of model results are also presented. Almost every variable in the calculation may be changed by the user to represent a system different from the default values pre-entered into the program. The distinct advantage of using a bottom-up cost and design model is that the entire power-to-energy space may be traversed to examine the correlation between performance and cost. The BatPaC model accounts for the physical limitations of the electrochemical processes within the battery; thus, unrealistic designs are penalized in energy density and cost, unlike cost models based on linear extrapolations.
Additionally, the consequences for cost and energy density of changes in cell capacity, parallel cell groups, and manufacturing capabilities are easily assessed with the model. New proposed materials may also be examined to translate bench-scale values into the design of full-scale battery packs, providing realistic energy densities and prices to the original equipment manufacturer. The model will be openly distributed to the public in the year 2011. Currently, the calculations are based in a Microsoft® Office Excel spreadsheet. Instructions are provided for use; however, the format is admittedly not user-friendly. A parallel development effort has created an alternate version based on a graphical user interface that will be more intuitive to some users. The more user-friendly version should allow for wider adoption of the model.
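A bottom-up cost roll-up of this general kind sums per-step costs and scales them with production volume. The sketch below is a generic illustration with hypothetical step names and a power-law volume scaling, not BatPaC's actual equations:

```python
def pack_cost(step_costs, base_volume, annual_volume, scale_exponent=0.7):
    """Bottom-up cost roll-up: sum per-step baseline costs, then apply a
    power-law economy-of-scale factor for a different annual volume
    (exponent < 1 means per-unit cost falls as volume rises)."""
    scale = (annual_volume / base_volume) ** (scale_exponent - 1.0)
    return sum(step_costs.values()) * scale

# hypothetical per-pack process-step costs (US$) at a 100,000 pack/yr baseline
steps = {"materials": 2500.0, "electrode_processing": 600.0,
         "cell_assembly": 700.0, "pack_integration": 400.0, "warranty": 300.0}
cost = pack_cost(steps, base_volume=100_000, annual_volume=200_000)
```

This captures why the assumed annual production level "directly affects each process step": every step's contribution carries a volume-dependent factor.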
Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions
Mehl, S.; Hill, M.C.
2010-01-01
This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
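The traditional conductance calculation the text questions is the standard streambed formula C = K·L·W/M, with leakage driven by the head difference across the bed. A minimal sketch (cell dimensions are illustrative):

```python
def streambed_conductance(k_bed, length, width, thickness):
    """Cauchy (head-dependent) boundary conductance C = K*L*W/M:
    bed hydraulic conductivity K, reach length L, stream width W,
    bed thickness M."""
    return k_bed * length * width / thickness

def stream_leakage(conductance, stage, aquifer_head):
    """Leakage Q = C*(h_stream - h_aquifer); positive means flow to the aquifer."""
    return conductance * (stage - aquifer_head)

# illustrative cell: K = 0.5 m/d, 100 m reach, 10 m wide, 1 m thick bed
C = streambed_conductance(0.5, 100.0, 10.0, 1.0)
Q = stream_leakage(C, stage=21.5, aquifer_head=21.0)
```

The paper's finding is that K, and hence C, estimated for one grid resolution cannot simply be reused when L and W change under refinement: the effective value itself is grid-dependent.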
40 CFR 600.201-93 - General applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.201-93 General...
40 CFR 600.201-12 - General applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.201-12 General...
40 CFR 600.201-86 - General applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.201-86 General...
40 CFR 600.201-08 - General applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.201-08 General...
Tang, Céline; Giaume, Domitille; Guerlou-Demourgues, Liliane; Lefèvre, Grégory; Barboux, Philippe
2018-05-30
To design novel layered materials, a bottom-up strategy is very promising. It consists of (1) synthesizing various layered oxides, (2) exfoliating them, and (3) restacking them in a controlled way. The last step relies on electrostatic interactions between different layered oxides and is difficult to control. The aim of this study is to facilitate this step by predicting the isoelectric point (IEP) of exfoliated materials. The multisite complexation (MUSIC) model was used for this purpose and was shown to be able to predict the IEP from the mean oxidation state of the metal in the (hydr)oxides as the main parameter. Moreover, the effect of exfoliation on the IEP has also been calculated. Starting from platelets with a high ratio of basal surface area to total surface area, we show that the exfoliation process has no impact on the calculated IEP value, as verified experimentally. Furthermore, restacked materials containing different monometallic (hydr)oxide layers also have an IEP consistent with the values calculated with the model. This study shows that the MUSIC model is a useful tool to predict the IEP of various complex metal oxides and hydroxides.
Nanomechanical properties of phospholipid microbubbles.
Buchner Santos, Evelyn; Morris, Julia K; Glynos, Emmanouil; Sboros, Vassilis; Koutsos, Vasileios
2012-04-03
This study uses atomic force microscopy (AFM) force-deformation (F-Δ) curves to investigate for the first time the Young's modulus of a phospholipid microbubble (MB) ultrasound contrast agent. The stiffness of the MBs was calculated from the gradient of the F-Δ curves, and the Young's modulus of the MB shell was calculated by employing two different mechanical models based on the Reissner and elastic membrane theories. We found that the relatively soft phospholipid-based MBs behave inherently differently from stiffer, polymer-based MBs [Glynos, E.; Koutsos, V.; McDicken, W. N.; Moran, C. M.; Pye, S. D.; Ross, J. A.; Sboros, V. Langmuir 2009, 25 (13), 7514-7522] and that elastic membrane theory is the most appropriate of the models tested for evaluating the Young's modulus of the phospholipid shell, agreeing with values available for living cell membranes, supported lipid bilayers, and synthetic phospholipid vesicles. Furthermore, we show that AFM F-Δ curves in combination with a suitable mechanical model can assess the shell properties of phospholipid MBs. The "effective" Young's modulus of the whole bubble was also calculated using Hertz theory; this analysis yielded values in agreement with studies that used Hertz theory to analyze similar systems such as cells.
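One of the two shell models named here, Reissner thin-shell theory, relates the measured point-load stiffness to the shell modulus via k = 4Eh²/(R√(3(1−ν²))), which can be inverted for E. A sketch with hypothetical microbubble parameters (not the paper's measured values):

```python
import math

def young_modulus_reissner(stiffness, radius, thickness, poisson=0.5):
    """Invert the Reissner thin-shell point-load stiffness
    k = 4*E*h**2 / (R*sqrt(3*(1 - nu**2))) for the shell modulus E.
    stiffness in N/m, radius and thickness in m; returns E in Pa."""
    return stiffness * radius * math.sqrt(3.0 * (1.0 - poisson ** 2)) / (4.0 * thickness ** 2)

# hypothetical microbubble: k = 0.02 N/m, R = 1.5 um, shell h = 3 nm
E = young_modulus_reissner(0.02, 1.5e-6, 3e-9)
```

Because E scales as 1/h², the inferred modulus is very sensitive to the assumed shell thickness, one reason the authors compare Reissner against the elastic membrane model before preferring the latter for phospholipid shells.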
NASA Astrophysics Data System (ADS)
Dérerová, Jana; Kohút, Igor; Radwan, Anwar H.; Bielik, Miroslav
2017-12-01
The temperature model of the lithosphere along a profile passing through the Red Sea region has been derived using a 2D integrated geophysical modelling method. Using the extrapolation of failure criteria, lithology, and the calculated temperature distribution, we have constructed a rheological model of the lithosphere in the area. We have calculated the strength distribution in the lithosphere and constructed strength envelopes for both compressional and extensional regimes. The results indicate that the strength steadily decreases from the Western Desert through the Eastern Desert towards the Red Sea, where it reaches its minimum for both regimes. Maximum strength is observed in the Western Desert, where it reaches values of about 250-300 MPa within the upper crust at the boundary between the upper and lower crust. In the Eastern Desert we observe slightly lower strength, with maximum values of about 200-250 MPa within the upper 15 km of the crust, with compression being dominant. These results suggest mostly rigid deformation in the Western and Eastern Desert regions. In the Red Sea, the strength rapidly decreases to its minimum, suggesting ductile processes as a result of higher temperatures.
Simple way to calculate a UV-finite one-loop quantum energy in the Randall-Sundrum model
NASA Astrophysics Data System (ADS)
Altshuler, Boris L.
2017-04-01
The surprising simplicity of Barvinsky-Nesterov or equivalently Gelfand-Yaglom methods of calculation of quantum determinants permits us to obtain compact expressions for a UV-finite difference of one-loop quantum energies for two arbitrary values of the parameter of the double-trace asymptotic boundary conditions. This result generalizes the Gubser and Mitra calculation for the particular case of difference of "regular" and "irregular" one-loop energies in the one-brane Randall-Sundrum model. The approach developed in the paper also allows us to get "in one line" the one-loop quantum energies in the two-brane Randall-Sundrum model. The relationship between "one-loop" expressions corresponding to the mixed Robin and to double-trace asymptotic boundary conditions is traced.
Comparison of calculated and measured pressures on straight and swept-tip model rotor blades
NASA Technical Reports Server (NTRS)
Tauber, M. E.; Chang, I. C.; Caughey, D. A.; Phillipe, J. J.
1983-01-01
Using the quasi-steady, full-potential code ROT22, pressures were calculated on straight and swept-tip model helicopter rotor blades at advance ratios of 0.40 and 0.45, and into the transonic tip speed range. The calculated pressures were compared with values measured in the tip regions of the model blades. Good agreement was found over a wide range of azimuth angles when the shocks on the blade were not too strong. However, strong shocks persisted longer than predicted by ROT22 when the blade was in the second quadrant. Since the unsteady flow effects present at high advance ratios primarily affect shock waves, the underprediction of shock strengths is attributed to the simplifying quasi-steady assumption made in ROT22.
NASA Technical Reports Server (NTRS)
Koch, L. Danielle
2012-01-01
A combined quadrupole-dipole model of fan inflow distortion tone noise has been extended to calculate tone sound power levels generated by obstructions arranged in circumferentially asymmetric locations upstream of a rotor. Trends in calculated sound power level agreed well with measurements from tests conducted in 2007 in the NASA Glenn Advanced Noise Control Fan. Calculated values of sound power levels radiated upstream were demonstrated to be sensitive to the accuracy of the modeled wakes from the cylindrical rods that were placed upstream of the fan to distort the inflow. Results indicate a continued need to obtain accurate aerodynamic predictions and measurements at the fan inlet plane as engineers work towards developing fan inflow distortion tone noise prediction tools.
Mitrikas, V G
2015-01-01
Monitoring of the radiation loading on cosmonauts requires calculation of absorbed dose dynamics with regard to the stay of cosmonauts in specific compartments of the space vehicle that differ in shielding properties and lack means of radiation measurement. The paper discusses different aspects of computational modeling of radiation effects on human body organs and tissues and reviews the effective dose estimates for cosmonauts working in one or another compartment over the previous period of International Space Station operation. It was demonstrated that doses measured by a real or personal dosimeter can be used to calculate effective dose values. Correct estimation of the accumulated effective dose can be ensured by taking into account the time course of the space radiation quality factor.
A model of magnetic and relaxation properties of the mononuclear [Pc2Tb](-)TBA+ complex.
Reu, O S; Palii, A V; Ostrovsky, S M; Tregenna-Piggott, P L W; Klokishner, S I
2012-10-15
The present work is aimed at elaborating a model of the magnetic properties and magnetic relaxation in the mononuclear [Pc(2)Tb](-)TBA(+) complex, which displays single-molecule magnet properties. We calculate the Stark structure of the ground (7)F(6) term of the Tb(3+) ion in the exchange charge model of the crystal field, taking covalence effects into account. The ground Stark level of the complex possesses the maximum value of the total angular momentum projection, while the energies of the excited Stark levels increase with decreasing |M(J)| values, thus giving rise to a barrier for the reversal of magnetization. The one-phonon transitions between the Stark levels of the Tb(3+) ion induced by electron-vibrational interaction are shown to lead to magnetization relaxation in the [Pc(2)Tb](-)TBA(+) complex. The rates of all possible transitions between the low-lying Stark levels are calculated in the temperature range 14 K
ERIC Educational Resources Information Center
Weerasinghe, Dash; Orsak, Timothy; Mendro, Robert
In an age of student accountability, public school systems must find procedures for identifying effective schools, classrooms, and teachers that help students continue to learn academically. As a result, researchers have been modeling schools and classrooms to calculate productivity indicators that will withstand not only statistical review but…
USDA-ARS?s Scientific Manuscript database
Direct normal irradiance (DNI) is required in the performance estimation of concentrating solar energy systems. The objective of this paper is to compare measured and modeled DNI data for a site in the Texas Panhandle (Bushland, Texas) to determine the accuracy of the model and where improvements mi...
NASA Astrophysics Data System (ADS)
Stevens, Bjorn; Moeng, Chin-Hoh; Sullivan, Peter P.
1999-12-01
Large-eddy simulations of a smoke cloud are examined with respect to their sensitivity to small scales as manifest in either the grid spacing or the subgrid-scale (SGS) model. Calculations based on a Smagorinsky SGS model are found to be more sensitive to the effective resolution of the simulation than are calculations based on the prognostic turbulent kinetic energy (TKE) SGS model. The difference between calculations based on the two SGS models is attributed to the advective transport, diffusive transport, and/or time-rate-of-change terms in the TKE equation. These terms are found to be leading order in the entrainment zone and allow the SGS TKE to behave in a way that tends to compensate for changes that result in larger or smaller resolved scale entrainment fluxes. This compensating behavior of the SGS TKE model is attributed to the fact that changes that reduce the resolved entrainment flux (viz., values of the eddy viscosity in the upper part of the PBL) simultaneously tend to increase the buoyant production of SGS TKE in the radiatively destabilized portion of the smoke cloud. Increased production of SGS TKE in this region then leads to increased amounts of transported, or fossil, SGS TKE in the entrainment zone itself, which in turn leads to compensating increases in the SGS entrainment fluxes. In the Smagorinsky model, the absence of a direct connection between SGS TKE in the entrainment and radiatively destabilized zones prevents this compensating mechanism from being active, and thus leads to calculations whose entrainment rate sensitivities as a whole reflect the sensitivities of the resolved-scale fluxes to values of upper PBL eddy viscosities.
NASA Astrophysics Data System (ADS)
Tiberi, Lara; Costa, Giovanni; Jamšek Rupnik, Petra; Cecić, Ina; Suhadolc, Peter
2018-05-01
The earthquake that occurred in the central part of Slovenia on 14 April 1895 (Mw 6 in the SHEEC catalogue, as defined by the macroseismic data points, MDPs) affected a broad region, causing deaths, injuries, and destruction. This event has been much studied but not fully explained; in particular, its causative source model is still debated. The aim of this work is to contribute to the identification of the seismogenic source of this destructive event by calculating peak ground velocity values through the use of different ground motion prediction equations (GMPEs) and by computing a series of ground motion scenarios based on the result of an inversion study proposed by Jukić in 2009 and on various fault models in the surroundings of Ljubljana: the Vič, Želimlje, Borovnica, Vodice, Ortnek, Mišjedolski, and Dobrepolje faults. The synthetic seismograms at the basis of our computations are calculated using the multi-modal summation technique and a kinematic approach for extended sources, with a maximum frequency of 1 Hz. The qualitative and quantitative comparison of these simulations with the macroseismic intensity database allows us to discriminate between the various sources and configurations. The quantitative validation of the seismic source is done using ad hoc ground motion to intensity conversion equations (GMICEs), expressly calculated for this study. This study allows us to identify the most probable causative source model of this event, contributing to the improvement of the seismotectonic knowledge of this region. The candidate fault with the lowest values of both the average difference between observed and calculated intensities and the chi-squared statistic is a strike-slip fault with a northward-propagating rupture, namely the Ortnek fault.
Time prediction of failure of a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable is modelled with an exponential distribution as the baseline, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals with the average failure value in each interval, and the average failure time of the model is then calculated for each interval; the p-value obtained from the test is 0.3296.
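The exponential baseline underlying the composite model can be sketched in a few lines: a constant hazard λ gives survival S(t) = exp(-λt) and mean failure time 1/λ. The lifetimes below are made-up example data, not the paper's lamp data, and the full composite hazard construction is not reproduced here.

```python
import math

# Exponential baseline survival model (illustrative sketch only).
def fit_exponential_rate(times):
    """Maximum-likelihood rate for an exponential model: n / sum(t_i)."""
    return len(times) / sum(times)

def survival(t, lam):
    """S(t) = exp(-lambda * t); the hazard is constant at lambda."""
    return math.exp(-lam * t)

lifetimes = [120.0, 340.0, 95.0, 410.0, 230.0]  # hypothetical hours to failure
lam = fit_exponential_rate(lifetimes)
mean_failure_time = 1.0 / lam                    # equals the sample mean
```

For the exponential model the MLE mean failure time is simply the sample mean; the composite model in the paper refines this by fitting over grouped intervals.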
Finite-size scaling study of the two-dimensional Blume-Capel model
NASA Astrophysics Data System (ADS)
Beale, Paul D.
1986-02-01
The phase diagram of the two-dimensional Blume-Capel model is investigated by using the technique of phenomenological finite-size scaling. The location of the tricritical point and the values of the critical and tricritical exponents are determined. The location of the tricritical point (Tt=0.610+/-0.005, Dt=1.9655+/-0.0010) is well outside the error bars for the value quoted in previous Monte Carlo simulations but in excellent agreement with more recent Monte Carlo renormalization-group results. The values of the critical and tricritical exponents, with the exception of the leading thermal tricritical exponent, are in excellent agreement with previous calculations, conjectured values, and Monte Carlo renormalization-group studies.
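The Hamiltonian being studied, H = -J Σ_<ij> s_i s_j + D Σ_i s_i² with spins s ∈ {-1, 0, +1}, can be sketched with a minimal single-site Metropolis update. The paper itself uses phenomenological finite-size scaling rather than Monte Carlo; this sketch only illustrates the model's energetics, with arbitrary lattice size and couplings.

```python
import math
import random

# 2D Blume-Capel model on an L x L periodic lattice (illustrative sketch).
def energy(spins, L, J=1.0, D=1.0):
    """Total energy; each bond counted once via right/down neighbours."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            E += D * s * s
            E -= J * s * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return E

def metropolis_sweep(spins, L, T, J=1.0, D=1.0, rng=random):
    """One sweep of single-site Metropolis updates at temperature T."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = spins[i][j], rng.choice((-1, 0, 1))
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = D * (new * new - old * old) - J * (new - old) * nn
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = new
```

The single-ion term D controls the population of s = 0 sites, which is what drives the tricritical behaviour the abstract locates at Dt ≈ 1.9655.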
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
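A two-language Lotka–Volterra competition system of the kind described above can be sketched as a pair of coupled ODEs integrated with explicit Euler steps. The growth rates, competition impacts, and capacities below are hypothetical placeholders, not the paper's estimated parameter values.

```python
# Lotka-Volterra-style competition between two languages (illustrative sketch).
def lv_step(x, y, p, dt):
    """One explicit-Euler step of the competition equations."""
    dx = p["r1"] * x * (1.0 - (x + p["c12"] * y) / p["K1"])
    dy = p["r2"] * y * (1.0 - (y + p["c21"] * x) / p["K2"])
    return x + dt * dx, y + dt * dy

def simulate(x0, y0, p, steps=20000, dt=0.01):
    x, y = x0, y0
    for _ in range(steps):
        x, y = lv_step(x, y, p, dt)
    return x, y

# Competition impacts c12, c21 < 1 give stable coexistence of both languages.
params = {"r1": 1.0, "r2": 0.8, "c12": 0.5, "c21": 0.6, "K1": 1.0, "K2": 1.0}
x_final, y_final = simulate(0.1, 0.1, params)  # -> (5/7, 4/7) analytically
```

With these placeholder impacts the analytic coexistence point solves x + 0.5y = 1 and y + 0.6x = 1, i.e. (x, y) = (5/7, 4/7), which the integration reproduces.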
Traas, T P; Luttik, R; Jongbloed, R H
1996-08-01
In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for the more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. The analysis indicated that most of the prediction uncertainty of the model can be ascribed to uncertainty in species sensitivity as expressed by the NOECs. A very small proportion of model uncertainty is contributed by the BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of the MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is large and, to avoid unethical toxicity testing with mammalian or avian predators, cannot be avoided in the proposed method for calculating MPC distributions. The fifth percentile of the MPC distribution (MPC5) is suggested as a safe value for top predators.
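The Monte Carlo scheme described above can be sketched directly: draw NOECs and BAFs from log-logistic distributions, form MPC = NOEC/BAF, and read off the fifth percentile. The distribution parameters here are arbitrary placeholders, not the study's fitted values.

```python
import random

# Monte Carlo MPC distribution (illustrative sketch with placeholder parameters).
def sample_loglogistic(scale, shape, rng):
    """Inverse-CDF sampling: X = scale * (U / (1 - U))**(1/shape)."""
    u = rng.random()
    return scale * (u / (1.0 - u)) ** (1.0 / shape)

def mpc_distribution(n, noec_params, baf_params, seed=1):
    rng = random.Random(seed)
    mpcs = []
    for _ in range(n):
        noec = sample_loglogistic(*noec_params, rng)
        baf = sample_loglogistic(*baf_params, rng)
        mpcs.append(noec / baf)
    return sorted(mpcs)

mpcs = mpc_distribution(10000, noec_params=(10.0, 3.0), baf_params=(2.0, 4.0))
mpc5 = mpcs[int(0.05 * len(mpcs))]  # fifth percentile: the suggested safe value
```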
Calculations of the electrostatic potential adjacent to model phospholipid bilayers.
Peitzsch, R M; Eisenberg, M; Sharp, K A; McLaughlin, S
1995-03-01
We used the nonlinear Poisson-Boltzmann equation to calculate electrostatic potentials in the aqueous phase adjacent to model phospholipid bilayers containing mixtures of zwitterionic lipids (phosphatidylcholine) and acidic lipids (phosphatidylserine or phosphatidylglycerol). The aqueous phase (relative permittivity, epsilon r = 80) contains 0.1 M monovalent salt. When the bilayers contain < 11% acidic lipid, the -25 mV equipotential surfaces are discrete domes centered over the negatively charged lipids and are approximately twice the value calculated using Debye-Hückel theory. When the bilayers contain > 25% acidic lipid, the -25 mV equipotential profiles are essentially flat and agree well with the values calculated using Gouy-Chapman theory. When the bilayers contain 100% acidic lipid, all of the equipotential surfaces are flat and agree with Gouy-Chapman predictions (including the -100 mV surface, which is located only 1 A from the outermost atoms). Even our model bilayers are not simple systems: the charge on each lipid is distributed over several atoms, these partial charges are non-coplanar, there is a 2 A ion-exclusion region (epsilon r = 80) adjacent to the polar headgroups, and the molecular surface is rough. We investigated the effect of these four factors using smooth (or bumpy) epsilon r = 2 slabs with embedded point charges: these factors had only minor effects on the potential in the aqueous phase.
NASA Astrophysics Data System (ADS)
Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.
2018-01-01
A set of mathematical models for calculating the reliability indexes (RI) of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a required condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the RI of industrial heat supply is based on the concept of a technological safety margin for technological processes. The definition of rationed RI values for the heat supply of communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring the reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks in the optimization of schemes and parameters of combined heat and power plants and systems, as well as in determining the efficiency of various redundancy methods to ensure the specified reliability of power supply.
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivest, R; Venkataraman, S; McCurdy, B
The objective of this work is to commission the 6MV-SRS beam model in COMPASS (v2.1, IBA-Dosimetry) and validate its use for patient-specific QA of hypofractionated prostate treatments. The COMPASS system consists of a 2D ion chamber array (MatriXX Evolution), an independent gantry angle sensor, and associated software. The system can either directly calculate or reconstruct (using measured detector responses) a 3D dose distribution on the patient CT dataset for plan verification. Beam models are developed and commissioned in the same manner as a beam model is commissioned in a standard treatment planning system. Model validation was initially performed by comparing both COMPASS calculations and reconstructions to measured open-field beam data. Next, 10 hypofractionated prostate RapidArc plans were delivered to both the COMPASS system and a phantom with ion chamber and film inserted. COMPASS dose distributions calculated and reconstructed on the phantom CT dataset were compared to the chamber and film measurements. The mean (± standard deviation) difference between COMPASS reconstructed dose and ion chamber measurement was 1.4 ± 1.0%, with a maximum discrepancy of 2.6%. The corresponding values for COMPASS calculation were 0.9 ± 0.9% and 2.6%, respectively. The average gamma agreement index (3%/3mm) between COMPASS reconstruction and film was 96.7% and 95.3% when using 70% and 20% dose thresholds, respectively. The corresponding values for COMPASS calculation were 97.1% for both thresholds. Based on our results, COMPASS can be used for the patient-specific QA of hypofractionated prostate treatments delivered with the 6MV-SRS beam.
The set of commercially available chemical substances that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, the World Meteorological Organization, or the Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimating radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on the basis set and are largely responsible for differences in the computed values of RE in this study. Deviations of
Teaching Time Value of Money Using an Excel Retirement Model
ERIC Educational Resources Information Center
Arellano, Fernando; Mulig, Liz; Rhame, Susan
2012-01-01
The time value of money (TVM) is required knowledge for all business students. It is traditionally taught in finance and accounting classes for use in various applications in the business curriculum. These concepts are also very useful in real life situations such as calculating the amount to save for retirement. This paper details a retirement…
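The core TVM computations behind such a retirement model reduce to two annuity formulas: the future value of a stream of payments, and the payment needed to reach a savings goal. The rates and horizon below are illustrative, not taken from the paper's spreadsheet.

```python
# Time-value-of-money building blocks for a retirement model (sketch).
def future_value_annuity(pmt, rate, n):
    """FV of n end-of-period payments pmt at periodic rate."""
    return pmt * ((1.0 + rate) ** n - 1.0) / rate

def payment_for_goal(fv_goal, rate, n):
    """Periodic saving needed to reach fv_goal in n periods (inverse of above)."""
    return fv_goal * rate / ((1.0 + rate) ** n - 1.0)

# Illustrative: save monthly for 30 years at a 6% nominal annual rate,
# compounded monthly, to accumulate $1,000,000.
pmt = payment_for_goal(1_000_000, 0.06 / 12, 30 * 12)
```

Because compounding works over 360 periods, the required monthly payment is far below the naive goal/periods split; plugging `pmt` back into `future_value_annuity` recovers the goal exactly.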
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS) to integrate the definition and measurement of spatial features with the calculation of parameter values presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort needed to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, John; Gallagher, Linda K.; Whitener, Dustin
The Turbo FRMAC (TF) software automates the calculations described in volumes 1-3 of "The Federal Manual for Assessing Environmental Data During a Radiological Emergency" (2010 version), automating the process of assessing radiological data during a federal radiological emergency. The manual upon which the software is based is unclassified and freely available on the Internet. TF takes values generated by field samples or computer dispersion models and assesses the data in a way that is meaningful to a decision maker at a radiological emergency: do radiation values exceed city, state, or federal limits; should the crops be destroyed or can they be utilized; do residents need to be evacuated or sheltered in place, or should another action be taken. The software also uses formulas generated by the EPA, FDA, and other federal agencies to generate field-observable values specific to the radiological event that can be used to determine where regulatory limit values are exceeded. In addition to these calculations, TF calculates values indicating how long an emergency worker can work in the contaminated area during a radiological emergency, the dose received from drinking contaminated water or milk, the dose from eating contaminated food, and the dose expected downwind or upwind of a given field sample, along with a significant number of other similar radiological health values.
Decreasing Kd uncertainties through the application of thermodynamic sorption models.
Domènech, Cristina; García, David; Pękala, Marek
2015-09-15
Radionuclide retardation processes during transport are expected to play an important role in the safety assessment of subsurface disposal facilities for radioactive waste. The linear distribution coefficient (Kd) is often used to represent radionuclide retention, because analytical solutions to the classic advection-diffusion-retardation equation under simple boundary conditions are readily obtainable, and because numerical implementation of this approach is relatively straightforward. For these reasons, the Kd approach lends itself to probabilistic calculations required by Performance Assessment (PA) calculations. However, it is widely recognised that Kd values derived from laboratory experiments generally have a narrow field of validity, and that the uncertainty of the Kd outside this field increases significantly. Mechanistic multicomponent geochemical simulators can be used to calculate Kd values under a wide range of conditions. This approach is powerful and flexible, but requires expert knowledge on the part of the user. The work presented in this paper aims to develop a simplified approach of estimating Kd values whose level of accuracy would be comparable with those obtained by fully-fledged geochemical simulators. The proposed approach consists of deriving simplified algebraic expressions by combining relevant mass action equations. This approach was applied to three distinct geochemical systems involving surface complexation and ion-exchange processes. Within bounds imposed by model simplifications, the presented approach allows radionuclide Kd values to be estimated as a function of key system-controlling parameters, such as the pH and mineralogy. This approach could be used by PA professionals to assess the impact of key geochemical parameters on the variability of radionuclide Kd values. 
Moreover, the presented approach could be relatively easily implemented in existing codes to represent the influence of temporal and spatial changes in geochemistry on Kd values.
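The role a Kd value plays in the advection-diffusion-retardation equation can be illustrated with the standard retardation factor for saturated porous media, R = 1 + (ρ_b/θ)·Kd. The bulk density and porosity below are generic illustrative values, not site-specific parameters.

```python
# How Kd enters transport calculations (illustrative sketch).
def retardation_factor(kd_ml_per_g, bulk_density_g_per_ml=1.6, porosity=0.4):
    """R = 1 + (rho_b / theta) * Kd for saturated porous media."""
    return 1.0 + (bulk_density_g_per_ml / porosity) * kd_ml_per_g

R = retardation_factor(10.0)  # Kd = 10 mL/g -> R = 41: plume moves 41x slower
```

This linear relation is why uncertainty in Kd propagates directly into predicted radionuclide travel times, motivating the simplified mass-action expressions proposed in the paper.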
Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, Daniel I.
The Savannah River Site (SRS) disposes of low-level radioactive waste (LLW) and stabilizes high-level radioactive waste (HLW) tanks in the subsurface environment. Calculations used to establish the radiological limits of these facilities are referred to as Performance Assessments (PA), Special Analyses (SA), and Composite Analyses (CA). The objective of this document is to revise existing geochemical input values used for these calculations. This work builds on earlier compilations of geochemical data (2007, 2010), referred to as geochemical data packages. It is being conducted as part of the on-going maintenance program of the SRS PA programs, which periodically updates calculations and data packages when new information becomes available. Because application of values without full understanding of their original purpose may lead to misuse, this document also provides the geochemical conceptual model, the approach used for selecting the values, the justification for selecting data, and the assumptions made to assure that the conceptual and numerical geochemical models are reasonably conservative (i.e., bias the recommended input values to reflect conditions that will tend to predict the maximum risk to the hypothetical recipient). This document provides 1088 input parameters for geochemical parameters describing transport processes for 64 elements (>740 radioisotopes) potentially occurring within eight subsurface disposal or tank closure areas: Slit Trenches (ST), Engineered Trenches (ET), Low Activity Waste Vault (LAWV), Intermediate Level Vaults (ILV), Naval Reactor Component Disposal Areas (NRCDA), Components-in-Grout (CIG) Trenches, the Saltstone Facility, and Closed Liquid Waste Tanks. The geochemical parameters described here are the distribution coefficient (Kd value), the apparent solubility concentration (Ks value), and the cementitious leachate impact factor.
40 CFR 600.204-77 - Section numbering, construction.
Code of Federal Regulations, 2010 CFR
2010-07-01
... POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.204-77...
The Jovian electron spectrum and synchrotron radiation at 375 cm
NASA Technical Reports Server (NTRS)
Birmingham, T. J.
1975-01-01
The synchrotron radiation expected at Earth from the region L = 2.9-5 R_J of Jupiter's magnetosphere is calculated using the Pioneer 10 electron model. The result is approximately 21 flux units (f.u.). This value is to be compared with 6.0 ± 0.7 f.u., the flux density of synchrotron radiation measured from Jupiter's entire magnetosphere in ground-based radio observations. Most of the radiation at 375 cm is emitted by electrons in the 1 to 10 MeV range. If the electron model used for the calculations is cut off below 10 MeV, the calculated flux is reduced to approximately 4 f.u., a level compatible with the radio observations.
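The sensitivity of such a calculation to a low-energy cutoff can be illustrated with a toy model: for a power-law electron spectrum N(E) ∝ E^-p, the synchrotron power radiated per electron scales roughly as E², so a band's contribution to the total flux is ∫ N(E)·E² dE over that band. A sketch under those simplifying assumptions (the spectral index and energy range are illustrative, not the Pioneer 10 model):

```python
import numpy as np

def band_flux_fraction(p, e_lo, e_hi, e_min=1.0, e_max=100.0, npts=200_001):
    """Fraction of total synchrotron flux contributed by electrons in
    [e_lo, e_hi] MeV, assuming N(E) ~ E**-p and per-electron power ~ E**2."""
    def trapz(y, x):  # simple trapezoidal rule
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
    E = np.linspace(e_min, e_max, npts)
    w = E ** (2.0 - p)                     # integrand N(E) * E^2
    band = (E >= e_lo) & (E <= e_hi)
    return trapz(w[band], E[band]) / trapz(w, E)

# For p = 3 over 1-100 MeV, the 1-10 MeV band carries half the flux
# (analytically ln(10)/ln(100) = 0.5), so truncating it changes the
# integrated flux drastically.
print(round(band_flux_fraction(3.0, 1.0, 10.0), 3))
```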
Improved Frequency Fluctuation Model for Spectral Line Shape Calculations in Fusion Plasmas
NASA Astrophysics Data System (ADS)
Ferri, S.; Calisti, A.; Mossé, C.; Talin, B.; Lisitsa, V.
2010-10-01
A very fast method is proposed to calculate spectral line shapes emitted by plasmas, accounting for charged-particle dynamics and the effects of an external magnetic field. The method relies on a new formulation of the Frequency Fluctuation Model (FFM), which yields an expression for the dynamic line profile as a functional of the static distribution function of frequencies. This highly efficient formalism, not limited to hydrogen-like systems, allows one to calculate pure Stark and Stark-Zeeman line shapes for a wide range of density, temperature, and magnetic field values, which is of importance in plasma physics and astrophysics. Various applications of this method are presented for conditions relevant to fusion plasmas.
Growth rate of the linear Richtmyer-Meshkov instability when a shock is reflected
NASA Astrophysics Data System (ADS)
Wouchuk, J. G.
2001-05-01
An analytic model is presented to calculate the growth rate of the linear Richtmyer-Meshkov instability in the shock-reflected case. The model allows us to calculate the asymptotic contact-surface perturbation velocity for any value of the incident shock intensity, arbitrary fluid compressibilities, and any density ratio at the interface. The growth rate emerges as the solution of a system of two coupled functional equations and is expressed formally as an infinite series. The distinguishing feature of the procedure shown here is the high speed of convergence of the intermediate calculations. There is excellent agreement with previous linear simulations and with experiments done in shock tubes.
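For orientation, the classic impulsive-model estimate of Richtmyer, which exact analytic treatments of this kind refine, gives the asymptotic growth rate as v = k·A⁺·Δu·a₀⁺ in terms of post-shock quantities. A hedged sketch with illustrative numbers (this is the textbook approximation, not the paper's series solution):

```python
import math

def impulsive_growth_rate(wavelength, atwood_post, delta_u, a0_post):
    """Richtmyer's impulsive approximation v = k * A+ * du * a0+,
    with k = 2*pi/wavelength; all quantities are post-shock values."""
    k = 2.0 * math.pi / wavelength
    return k * atwood_post * delta_u * a0_post

# Illustrative values: 1 cm perturbation wavelength, post-shock Atwood
# number 0.5, interface velocity jump 100 m/s, post-shock amplitude 0.1 mm.
v = impulsive_growth_rate(0.01, 0.5, 100.0, 1.0e-4)
print(f"{v:.2f} m/s")  # 3.14 m/s
```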
NASA Technical Reports Server (NTRS)
Carpenter, Thomas W.
1991-01-01
The main objective of this project was to predict the expansion wave/oblique shock wave structure in an under-expanded jet issuing from a convergent nozzle. The shock structure was predicted by combining the calculated curvature of the free pressure boundary with the principles and governing equations of oblique shock wave and expansion wave interaction, and the procedure was continued until the shock pattern repeated itself. A mathematical model was then formulated and written in FORTRAN to calculate the oblique shock/expansion wave structure within the jet. To study shock waves in expanding jets, Schlieren photography, a form of flow visualization, was employed; thirty-six Schlieren photographs of jets from both a straight and a 15 degree nozzle were taken. An iterative procedure was developed to calculate the shock structure within the jet and predict the non-dimensional values of the Prandtl primary wavelength (w/rn), the distance to the Mach disc (Ld), and the Mach disc radius (rd). The calculated values of w/rn and Ld agreed closely with measurements taken from the Schlieren photographs and with previously published data, and the method provides excellent results for pressure ratios below that at which a Mach disc first forms. The calculated values of the non-dimensional Mach disc radius (rd), however, deviated from published data by as much as 25 percent at certain pressure ratios.
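A central ingredient in oblique-shock/expansion-wave calculations of this kind is the Prandtl-Meyer function ν(M), which gives the turning angle of a supersonic flow through an isentropic expansion. A self-contained sketch of this standard gas-dynamics relation (not the project's FORTRAN code):

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M) in radians for a calorically perfect gas."""
    if M < 1.0:
        raise ValueError("Prandtl-Meyer function is defined for M >= 1")
    g = (gamma + 1.0) / (gamma - 1.0)
    m2 = M * M - 1.0
    return math.sqrt(g) * math.atan(math.sqrt(m2 / g)) - math.atan(math.sqrt(m2))

# Turning angle of an expansion accelerating the flow from M = 1 to M = 2:
print(f"{math.degrees(prandtl_meyer(2.0)):.2f} deg")  # about 26.38 deg
```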
NASA Astrophysics Data System (ADS)
Wang, Linjuan; Abeyaratne, Rohan
2018-07-01
The peridynamic model of a solid does not involve spatial gradients of the displacement field and is therefore well suited for studying defect propagation. Here, bond-based peridynamic theory is used to study the equilibrium and steady propagation of a lattice defect, a kink, in one dimension. The material transforms locally, from one state to another, as the kink passes through. The kink is in equilibrium if the applied force is less than a certain critical value that is calculated, and propagates if the force exceeds that value. The kinetic relation giving the propagation speed as a function of the applied force is also derived. In addition, it is shown that the dynamical solutions of certain differential-equation-based continuum models are the same as those of the peridynamic model provided the micromodulus function is chosen suitably. A formula for calculating the micromodulus function of the equivalent peridynamic model is derived and illustrated. This ability to replace a differential-equation-based model with a peridynamic one may prove useful when numerically studying more complicated problems, such as those involving multiple and interacting defects.
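In bond-based peridynamics the internal force at a point x is a horizon integral rather than a gradient, f(x) = ∫_{|ξ|≤δ} C(ξ)[u(x+ξ) − u(x)] dξ, with C the micromodulus function. A minimal 1-D discretization of that integral (uniform grid and an illustrative triangular micromodulus; this is a generic sketch, not the paper's kink model):

```python
import numpy as np

def internal_force(u, x, micromodulus, delta):
    """Bond-based peridynamic internal force density on a uniform 1-D grid:
    f_i = sum over bonds xi = x_j - x_i within the horizon delta of
    micromodulus(xi) * (u_j - u_i) * dx."""
    dx = x[1] - x[0]
    m = int(round(delta / dx))
    f = np.zeros_like(u)
    for i in range(m, len(x) - m):          # interior points with a full horizon
        for j in range(i - m, i + m + 1):
            if j != i:
                f[i] += micromodulus(x[j] - x[i]) * (u[j] - u[i]) * dx
    return f

# Sanity check: for a homogeneous deformation u = 2x and an even micromodulus,
# bond contributions at +xi and -xi cancel, so the interior force vanishes.
x = np.linspace(0.0, 1.0, 201)
C = lambda xi: max(0.0, 1.0 - abs(xi) / 0.05)   # triangular, horizon delta = 0.05
f = internal_force(2.0 * x, x, C, 0.05)
print(np.abs(f).max())
```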
Analysis of angular observables of Λ_b → Λ(→ pπ)μ⁺μ⁻ decay in the standard and Z′ models
NASA Astrophysics Data System (ADS)
Nasrullah, Aqsa; Jamil Aslam, M.; Shafaq, Saba
2018-04-01
In 2015, the LHCb collaboration measured the differential branching ratio dB/dq² and the lepton- and hadron-side forward-backward asymmetries, denoted by A^ℓ_FB and A^Λ_FB respectively, in the range 15 < q²(= s) < 20 GeV² with 3 fb⁻¹ of data. Motivated by these measurements, we perform an analysis of the s-dependent Λ_b → Λ(→ pπ)μ⁺μ⁻ angular observables at large and low recoil in the standard model (SM) and in a family non-universal Z′ model. The exclusive Λ_b → Λ transition is governed by form factors, and in the present study we use recently performed high-precision lattice QCD calculations with well-controlled uncertainties, especially in the 15 < s < 20 GeV² bin. Using the full four-folded angular distribution of the Λ_b → Λ(→ pπ)μ⁺μ⁻ decay, we first calculate the experimentally measured dB/ds, A^ℓ_FB, and A^Λ_FB in the SM and compare their numerical values with the measurements in appropriate bins of s. Where the SM prediction and the measurements disagree, we examine whether the discrepancy can be accommodated through the extra neutral Z′ boson. We find that in the dimuon momentum range 15 < s < 20 GeV² the value of dB/ds and the central value of A^ℓ_FB in the Z′ model are compatible with the measured values. In addition, the fraction of longitudinal dimuon polarization F_L was measured by LHCb to be 0.61^{+0.11}_{-0.14} ± 0.03 in 15 < s < 20 GeV²; we find that the value obtained in the Z′ model in this bin is close to the observed value. After comparing the results for these observables, we propose further observables α_i and α′_i, with i = θ_ℓ, θ_Λ, φ, L, U, and the coefficients of the different foldings P_{1,…,9}, in different bins of s in the SM and Z′ model. We illustrate that experimental observation of the s-dependent angular observables calculated here in several bins of s can help test the predictions of the SM and unravel new-physics contributions arising from the Z′ model in Λ_b → Λ(→ pπ)μ⁺μ⁻ decays.
Golombeck, M A; Dössel, O; Raiser, J
2003-09-01
Numerical field calculations and experimental investigations were performed to examine the heating of the surface of human skin during the application of a new design for the patient return electrode. The new electrode is characterised by an equipotential ring around the central electrode pads. A multi-layer thigh model was used, to which the patient return electrode and the active electrode were connected. The simulation geometry and the dielectric tissue parameters were set according to the frequency of the current. The temperature rise at the skin surface due to the flow of current was evaluated using a two-step numerical solving procedure. The results were compared with experimental thermographic measurements, which yielded a mean maximum temperature increase of 3.4 degrees C and a maximum of 4.5 degrees C in one test case. The calculated heating patterns agreed closely with the experimental results. However, the mean maximum temperature increase calculated in ten different numerical models, 12.5 K (using a thermodynamic solver), exceeded the experimental value, owing to the neglect of heat transport by blood flow and to the injection of a higher test current than in the clinical tests. The implementation of a simple worst-case formula that could significantly simplify the numerical process led to a substantial overestimation of the mean maximum skin temperature, 22.4 K, and showed only restricted applicability. The application of numerical methods confirmed the experimental assertions and led to a general understanding of the observed heating effects and hotspots. Furthermore, it was possible to demonstrate the beneficial effects of the new electrode design with an equipotential ring: a balanced heating pattern and the absence of hotspots.
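A worst-case estimate of this kind typically neglects all heat transport and treats the tissue as heated adiabatically by Joule dissipation, ΔT = J²·t/(σ·ρ·c). A sketch of that bound with illustrative parameter values (the abstract does not state which formula or values the study used):

```python
def adiabatic_temp_rise(current_density, conductivity, density, heat_capacity, duration):
    """Adiabatic (worst-case) Joule heating: dT = J^2 * t / (sigma * rho * c).

    current_density -- RMS current density J (A/m^2)
    conductivity    -- tissue conductivity sigma (S/m)
    density         -- tissue density rho (kg/m^3)
    heat_capacity   -- specific heat c (J/(kg K))
    duration        -- activation time t (s)
    """
    return current_density**2 * duration / (conductivity * density * heat_capacity)

# Illustrative values for skin under a return electrode; because no heat is
# carried away, this bounds (and typically far exceeds) the measured rise.
dT = adiabatic_temp_rise(500.0, 0.2, 1000.0, 3500.0, 60.0)
print(f"{dT:.1f} K")  # 21.4 K
```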
Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael
2015-01-01
Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.
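One common way to estimate a clearance of this kind from a tissue TAC and an input function, whether image-derived or from blood samples, is the Gjedde-Patlak plot, whose late-time slope approximates K. A hedged sketch with idealized, noise-free curves (the study's actual kinetic analysis is not specified here):

```python
import numpy as np

def patlak_slope(t, ca, ct, fit_from=None):
    """Estimate clearance K as the late-time slope of the Gjedde-Patlak plot:
    y = Ct/Ca versus x = (integral of Ca from 0 to t) / Ca."""
    # cumulative trapezoidal integral of the input function
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (ca[1:] + ca[:-1]) * np.diff(t))))
    x = cum / ca
    y = ct / ca
    i0 = fit_from if fit_from is not None else len(t) // 2   # fit late points only
    slope, _ = np.polyfit(x[i0:], y[i0:], 1)
    return slope

# Synthetic check: a constant input Ca = 1 and tissue curve Ct = K*t + V*Ca
# make the Patlak plot exactly linear with slope K.
t = np.linspace(0.0, 60.0, 61)
ca = np.ones_like(t)
ct = 0.05 * t + 0.2
print(round(patlak_slope(t, ca, ct), 4))
```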
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the chi-squared statistic as a means of evaluating filter performance, with the goal of characterizing that performance in the metric of covariance realism. The chi-squared statistic is computed from the prediction error and the covariance at a given point in time, and it is the distribution of this statistic over many epochs that provides insight into whether the covariance realistically describes the actual errors. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of the available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the chi-squared statistic, calculated from EKF solutions, are assessed.
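The statistic in question is the quadratic form epsilon = r^T C^-1 r, where r is the prediction error and C the filter covariance at that epoch; if the covariance is realistic, epsilon follows a chi-squared distribution with as many degrees of freedom as r has components. A minimal sketch of the computation and the consistency check (illustrative, not the operational tuning code):

```python
import numpy as np

def chi_squared_stat(residual, covariance):
    """epsilon = r^T C^{-1} r; chi-squared with len(r) dof if C is realistic."""
    return float(residual @ np.linalg.solve(covariance, residual))

# If the covariance matches the true error statistics, the statistic
# averages to the state dimension (here 3) over many epochs.
rng = np.random.default_rng(0)
C = np.diag([4.0, 1.0, 0.25])
errors = rng.multivariate_normal(np.zeros(3), C, size=20_000)
mean_stat = np.mean([chi_squared_stat(r, C) for r in errors])
print(round(mean_stat, 1))
```

A covariance that is too small drives the mean statistic above the state dimension; one that is too large drives it below, which is what makes the distribution useful for filter tuning.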