Sample records for values calculated based

  1. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...

  2. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an...

  3. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value exists for an electric...

  4. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may use fuel economy data from tests conducted on these vehicle configuration(s) at high altitude to...) Calculate the city, highway, and combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests...

  5. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel economy for the... combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural...

  6. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel economy for the... combined fuel economy values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy values from the tests performed using alcohol or natural...

  7. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08...

  8. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel...-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08...

  9. Tree value system: description and assumptions.

    Treesearch

    D.G. Briggs

    1989-01-01

    TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...

  10. The Lα (λ = 121.6 nm) solar plage contrasts calculations.

    NASA Astrophysics Data System (ADS)

    Bruevich, E. A.

    1991-06-01

    The results of calculations of Lα plage contrasts based on experimental data are presented. A three-component model ideology of Lα solar flux using "Prognoz-10" and SME daily smoothed values of Lα solar flux are applied. The values of contrast are discussed and compared with experimental values based on "Skylab" data.

  11. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...

  12. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...

  13. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy, CO2 emissions, and carbon-related...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy, CO2 emissions, and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy value...

  14. Precision gravity studies at Cerro Prieto: a progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grannell, R.B.; Kroll, R.C.; Wyman, R.M.

    A third and fourth year of precision gravity data collection and reduction have now been completed at the Cerro Prieto geothermal field. In summary, 66 permanently monumented stations were occupied between December and April of 1979 to 1980 and 1980 to 1981 by a LaCoste and Romberg gravity meter (G300) at least twice, with a minimum of four replicate values obtained each time. Station 20 alternate, a stable base located on Cerro Prieto volcano, was used as the reference base for the third year, and all the stations were tied to this base using four- to five-hour loops. The field data were reduced to observed gravity values by (1) multiplication by the appropriate calibration factor; (2) removal of calculated tidal effects; (3) calculation of average values at each station; and (4) linear removal of the accumulated instrumental drift that remained after carrying out the first three reductions. Following the reduction of values and calculation of gravity differences between individual stations and the base stations, standard deviations were calculated for the averaged occupation values (two to three per station). In addition, pooled variance calculations were carried out to estimate precision for the surveys as a whole.
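
The four reduction steps above can be sketched in Python. The data layout, the numeric values, and the midpoint drift approximation are illustrative assumptions, not the survey's actual processing:

```python
# Sketch of the four-step gravity reduction described above (hypothetical
# data layout; readings in counter units, corrections and drift in mGal).

def reduce_gravity(readings, calibration_factor, tidal_corrections, drift_rate, times):
    """readings: raw meter values for one station occupation; tidal_corrections
    and times (hours since loop start) are aligned with readings."""
    # (1) convert meter units to gravity with the instrument calibration factor
    calibrated = [r * calibration_factor for r in readings]
    # (2) remove the calculated solid-earth tide at each reading time
    detided = [g - tide for g, tide in zip(calibrated, tidal_corrections)]
    # (3) average the replicate values for this station occupation
    mean_g = sum(detided) / len(detided)
    # (4) remove residual linear instrumental drift (midpoint approximation)
    elapsed = times[-1] - times[0]
    return mean_g - drift_rate * elapsed / 2

g = reduce_gravity([2975.41, 2975.44, 2975.40, 2975.43],
                   calibration_factor=1.00024,
                   tidal_corrections=[0.05, 0.06, 0.04, 0.05],
                   drift_rate=0.002,
                   times=[0.0, 1.0, 2.0, 3.0])
```

Gravity differences to the base station and per-station standard deviations would then be computed from these reduced occupation values.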

  15. Study of activity based costing implementation for palm oil production using value-added and non-value-added activity consideration in PT XYZ palm oil mill

    NASA Astrophysics Data System (ADS)

    Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.

    2018-02-01

    Cost allocation in manufacturing industry, particularly in palm oil mills, is still widely based on estimation, which leads to cost distortion. In addition, the processing times set by the company do not match the actual processing times at the work stations. Hence, the purpose of this study is to eliminate non-value-added activities so that processing time can be shortened and production cost reduced. The activity-based costing method is used in this research to calculate production cost with value-added and non-value-added activities taken into consideration. The results show processing-time reductions of 35.75% at the Weighing Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of manufacturing for crude palm oil is IDR 5,236.81/kg calculated by the traditional method, IDR 4,583.37/kg by activity-based costing before activity improvement, and IDR 4,581.71/kg after activity improvement. For palm kernel, the corresponding values are IDR 2,159.50/kg (traditional method), IDR 4,584.63/kg (activity-based costing before activity improvement), and IDR 4,582.97/kg (after activity improvement).
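
As a rough illustration of the costing idea (all activities, rates, and volumes below are invented, not the study's data), activity-based costing with and without non-value-added activities might look like:

```python
# Activity-based costing sketch: each activity's cost = driver units x rate.
# Eliminating non-value-added (NVA) activities lowers the unit cost.

activities = [
    # (activity, cost_driver_units, rate_per_unit, value_added)
    ("weighing",    120, 15000.0, True),
    ("sorting",      80, 22000.0, True),
    ("re-handling",  40, 18000.0, False),   # non-value-added activity
]

def unit_cost(acts, output_kg, include_nva=True):
    total = sum(units * rate for _, units, rate, va in acts if include_nva or va)
    return total / output_kg

before = unit_cost(activities, 1000.0)                      # with NVA activities
after = unit_cost(activities, 1000.0, include_nva=False)    # NVA eliminated
```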

  16. Evaluation of steam sterilization processes: comparing calculations using temperature data and biointegrator reduction data and calculation of theoretical temperature difference.

    PubMed

    Lundahl, Gunnel

    2007-01-01

    When calculating the physical F121.1°C value with the equation F121.1°C = t × 10^((T − 121.1)/z), the temperature (T), in combination with the z-value, influences the F121.1°C value exponentially. Because the z-value for spores of Geobacillus stearothermophilus often varies between 6 and 9, the biological F-value (FBio) will not always correspond to the F0-value based on temperature records from the sterilization process calculated with a z-value of 10, even if both calibrations are correct. Consequently, an error in the calibration of thermocouples and a difference in z-values influence the F121.1°C values logarithmically. The paper describes how results from measurements with different z-values can be compared. The first part describes the mathematics of a calculation program that makes it easy to compare F0-values based on temperature records with the FBio-value based on analysis of bioindicators such as glycerin-water-suspension sensors. For biological measurements, a suitable bioindicator with a high D121-value can be used (such a bioindicator can be manufactured as described in the article "A Method of Increasing Test Range and Accuracy of Bioindicators - Geobacillus stearothermophilus Spores"). With the mathematics and calculations described in this macro program, it is possible to calculate, for every position, the theoretical temperature difference (ΔTth) needed to explain the difference in results between the thermocouple and the biointegrator. Since the temperature difference is a linear function and constant over the whole process, this value is an indication of the magnitude of an error. A graph and table from these calculations give a picture of the run. The second part deals with product characteristics, the sterilization processes, and loading patterns. Appropriate safety margins have to be chosen in the development phase of a sterilization process to achieve acceptable safety limits.
    Case studies are discussed and experiences are shared.
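
The lethality equation above, integrated over a sampled temperature record, can be sketched as follows (the temperature record is illustrative, not data from the paper):

```python
def f_value(temps_c, dt_minutes, z=10.0, t_ref=121.1):
    """Integrated lethality F = sum over samples of 10**((T - T_ref)/z) * dt,
    for a temperature record sampled every dt_minutes."""
    return sum(10 ** ((t - t_ref) / z) * dt_minutes for t in temps_c)

record = [110, 115, 121.1, 122, 121.1, 115]   # illustrative 1-min samples
f0 = f_value(record, 1.0, z=10.0)             # physical F0 (z = 10)
f_bio = f_value(record, 1.0, z=7.0)           # biological F with z = 7
```

Running both z-values over the same record shows directly how the choice of z shifts the accumulated F-value, which is the comparison the paper's program is built around.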

  17. Acid-base properties of the N3 ruthenium(II) solar cell sensitizer: a combined experimental and computational analysis.

    PubMed

    Pizzoli, Giuliano; Lobello, Maria Grazia; Carlotti, Benedetta; Elisei, Fausto; Nazeeruddin, Mohammad K; Vitillaro, Giuseppe; De Angelis, Filippo

    2012-10-14

    We report a combined spectrophotometric and computational investigation of the acid-base equilibria of the N3 solar cell sensitizer [Ru(dcbpyH2)2(NCS)2] (dcbpyH2 = 4,4'-dicarboxy-2,2'-bipyridine) in aqueous/ethanol solutions. The absorption spectra of N3 recorded at various pH values were analyzed by singular value decomposition, followed by global fitting procedures, allowing us to identify four separate acid-base equilibria and their corresponding ground-state pKa values. DFT/TDDFT calculations were performed for the N3 dye in solution, investigating the possible relevant species obtained by sequential deprotonation of the four dye carboxylic groups. TDDFT excited-state calculations provided UV-vis absorption spectra which agree nicely with the experimental spectral shapes at various pH values. The calculated pKa values are also in good agreement with experimental data, to within <1 pKa unit. Based on the calculated energy differences, a tentative assignment of the N3 deprotonation pathway is reported.

  18. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission data from tests conducted on these vehicle configuration(s) at high altitude to calculate the fuel... values from the tests performed using alcohol or natural gas test fuel. (b) For each model type, as..., highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed...

  19. 40 CFR 600.208-08 - Calculation of FTP-based and HFET-based fuel economy values for a model type.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy values for a model type. 600.208-08 Section 600.208-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations fo...

  20. 40 CFR 600.206-08 - Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of FTP-based and HFET-based fuel economy values for vehicle configurations. 600.206-08 Section 600.206-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel...

  1. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (i) Calculate the 5-cycle city and highway fuel economy values from the tests performed using gasoline or diesel test fuel. (ii)(A) Calculate the 5-cycle city and highway fuel economy values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise...

  2. Performing a local barrier operation

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-03-04

    Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.
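
The counter-based barrier described can be sketched with ordinary Python threads. This is an illustrative analogue, not the patented compute-node implementation; a lock stands in for the hardware atomic increment:

```python
import threading

class CounterBarrier:
    """Barrier built on a single counter: each task derives the base and
    target values from the present counter value, joins by incrementing,
    then spins until the counter reaches the target."""
    def __init__(self, n_tasks):
        self.n = n_tasks
        self.counter = 0
        self.lock = threading.Lock()

    def join(self):
        with self.lock:
            present = self.counter
            # base value: counter's value before any task joined this round
            base = (present // self.n) * self.n
            # target value: counter's value once all n tasks have joined
            target = base + self.n
            self.counter += 1            # atomically join the barrier
        while True:                      # spin until everyone has joined
            with self.lock:
                if self.counter >= target:
                    return

results = []
barrier = CounterBarrier(4)

def worker(i):
    barrier.join()          # no task passes until all four have joined
    results.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Deriving base and target from the present counter value, rather than resetting the counter, is what lets the same counter be reused across successive barrier rounds.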

  3. Performing a local barrier operation

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-03-04

    Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value of the counter, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value.

  4. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has a poorly self-adapting threshold and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a filter based on Euclidean distance are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to regions of the gradient amplitude to obtain threshold values, the average of all calculated thresholds is taken, half of the average is used as the high threshold, and half of the high threshold as the low threshold. Experimental results show that the new method can effectively suppress noise, keep edge information, and improve edge detection accuracy.
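
The threshold rule described (Otsu thresholds on regions of the gradient amplitude, then high = average/2 and low = high/2) might be sketched as below; the plain-Python Otsu and the toy "gradient regions" are assumptions for illustration:

```python
def otsu_threshold(values, bins=256):
    """Plain-Python Otsu threshold over a list of gradient magnitudes:
    pick the histogram split maximizing between-class variance."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / (hi - lo) * (bins - 1)), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for i, h in enumerate(hist):
        w_b += h                      # background weight
        if w_b == 0:
            continue
        w_f = total - w_b             # foreground weight
        if w_f == 0:
            break
        sum_b += i * h
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, i
    return lo + best_t / (bins - 1) * (hi - lo)

# Otsu per gradient region, then high = mean(thresholds)/2, low = high/2.
blocks = [[0.1] * 50 + [0.9] * 50, [0.2] * 50 + [0.8] * 50]  # toy regions
thresholds = [otsu_threshold(b) for b in blocks]
high = sum(thresholds) / len(thresholds) / 2
low = high / 2
```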

  5. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... exhaust emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the city, highway, and combined fuel economy and carbon-related exhaust emission values from the tests performed using alcohol or natural gas test fuel. (b) If only one equivalent petroleum-based fuel economy...

  6. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600.208-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR...

  7. Iodine intake by adult residents of a farming area in Iwate Prefecture, Japan, and the accuracy of estimated iodine intake calculated using the Standard Tables of Food Composition in Japan.

    PubMed

    Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako

    2016-11-01

    Iodine intake by adults in farming districts in Northeastern Japan was evaluated by two methods: (1) calculation based on government-approved food composition tables and (2) instrumental measurement. The correlation between these two values and a regression model for the calibration of calculated values are presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. In cases where the value for iodine content was not available in the FCT, it was assumed to be zero for that food item (calculated values). Iodine content was also measured by ICP-MS (measured values). Calculated and measured values gave geometric means (GM) of 336 and 279 μg/day, respectively. There was no statistically significant (p > 0.05) difference between calculated and measured values. The correlation coefficient was 0.646 (p < 0.05). With this high correlation coefficient, a simple regression line can be applied to estimate the measured value from the calculated value. A survey of the literature suggests that the values in this study were similar to values reported to date for Japan, and higher than those for other countries in Asia. Iodine intake of Japanese adults was 336 μg/day (GM, calculated) and 279 μg/day (GM, measured). The two values correlated so well, with a correlation coefficient of 0.646, that a regression model (Y = 130.8 + 1.9479X, where X and Y are the measured and calculated values, respectively) could be used to calibrate calculated values.
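
The reported regression (Y = 130.8 + 1.9479X, with X measured and Y calculated, in μg/day) can be used in both directions. Inverting it for calibration, as below, is my reading of the abstract, not code from the study:

```python
# Regression from the abstract: calculated Y = 130.8 + 1.9479 * measured X.

def predicted_calculated(measured_ug):
    """Forward direction: table-based estimate expected for a measured intake."""
    return 130.8 + 1.9479 * measured_ug

def calibrated_measured(calculated_ug):
    """Inverse direction: calibrate a table-based value to an estimated
    measured intake by inverting the regression line."""
    return (calculated_ug - 130.8) / 1.9479

y = predicted_calculated(100.0)
x = calibrated_measured(y)       # inverting recovers the input
```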

  8. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.

  9. Adding glycaemic index and glycaemic load functionality to DietPLUS, a Malaysian food composition database and diet intake calculator.

    PubMed

    Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah

    2012-01-01

    This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the source of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, in relation to the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) food items had local Malaysian values or were directly matched to the international GI database, and another 180 (33%) of the foods were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values were comparable to those reported in other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids calculation of dietary GI and GL for Malaysian diets based on food records.
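
The GI/GL arithmetic such a calculator implements can be sketched from the standard definitions (dietary GI as a carbohydrate-weighted mean of food GIs; GL = GI × available carbohydrate/100). DietPLUS's actual spreadsheet layout is not shown in the abstract:

```python
# Standard GI/GL arithmetic; the food items below are made-up examples.

def meal_gi_gl(items):
    """items: list of (gi, available_carbohydrate_g) per food consumed."""
    total_carb = sum(carb for _, carb in items)
    # dietary GI: carbohydrate-weighted mean of the foods' GI values
    gi = sum(g * carb for g, carb in items) / total_carb
    # glycaemic load: GI x available carbohydrate / 100, summed over foods
    gl = sum(g * carb / 100 for g, carb in items)
    return gi, gl

gi, gl = meal_gi_gl([(55, 50), (70, 30)])   # e.g. a rice dish plus bread
```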

  10. Calculations of Hubbard U from first-principles

    NASA Astrophysics Data System (ADS)

    Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.

    2006-09-01

    The Hubbard U of the 3d transition metal series as well as SrVO3, YTiO3, Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but in some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparisons with the constrained local density approximation (LDA) method and found some discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.

  11. [Study on spectrum analysis of X-ray based on rotational mass effect in special relativity].

    PubMed

    Yu, Zhi-Qiang; Xie, Quan; Xiao, Qing-Quan

    2010-04-01

    Based on special relativity, the formation mechanism of characteristic X-ray has been studied, and the influence of rotational mass effect on X-ray spectrum has been given. A calculation formula of the X-ray wavelength based upon special relativity was derived. Error analysis was carried out systematically for the calculation values of characteristic wavelength, and the rules of relative error were obtained. It is shown that the values of the calculation are very close to the experimental values, and the effect of rotational mass effect on the characteristic wavelength becomes more evident as the atomic number increases. The result of the study has some reference meaning for the spectrum analysis of characteristic X-ray in application.

  12. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

    To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone and XiO Multigrid Superposition, and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and that accelerator was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP values for the lungs were calculated from two different model parameter sets. Calculations and evaluations were performed both with and without density corrections. Statistically significant differences are found between the calculated doses to heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or between 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%.
    Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values from calculations without density corrections. It is therefore important, when working with NTCP planning, to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
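
For reference, the LKB model mentioned above reduces to a standard-normal CDF of a scaled EUD difference. The parameter values below are illustrative, not those used in the study:

```python
import math

def lkb_ntcp(eud_gy, td50_gy, m):
    """LKB normal-tissue complication probability: the standard-normal CDF
    of t = (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative lung-pneumonitis style parameters (not the study's values):
p = lkb_ntcp(eud_gy=20.0, td50_gy=30.8, m=0.37)
```

Because the CDF is steep near TD50, even the few-percent dose differences between algorithms reported above can translate into much larger relative changes in NTCP.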

  13. Detection and quantification system for monitoring instruments

    DOEpatents

    Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA

    2008-08-12

    A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
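
The detection steps described might be sketched as follows. The mean-based baseline and the sample data are assumptions; the patent may estimate the baseline and variation differently:

```python
from statistics import mean, stdev

def alarm_thresholds(recent, k=3.0):
    """Alarm thresholds = baseline +/- k * sample deviation, computed
    from a window of recent signal results."""
    baseline = mean(recent)            # expected baseline value
    allowable = k * stdev(recent)      # allowable deviation = threshold factor * sample deviation
    return baseline - allowable, baseline + allowable

recent = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.1, 9.7]   # recent signal results
lo, hi = alarm_thresholds(recent)

def is_event(signal):
    """A real event is declared when the signal exceeds either threshold."""
    return signal < lo or signal > hi
```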

  14. Temperature measurement in a gas turbine engine combustor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul

    A method and system for determining a temperature of a working gas passing through a passage to a turbine section of a gas turbine engine. The method includes identifying an acoustic frequency at a first location in the engine upstream from the turbine section, and using the acoustic frequency for determining a first temperature value at the first location that is directly proportional to the acoustic frequency and a calculated constant value. A second temperature of the working gas is determined at a second location in the engine and, using the second temperature, a back calculation is performed to determine a temperature value for the working gas at the first location. The first temperature value is compared to the back calculated temperature value to change the calculated constant value to a recalculated constant value. Subsequent first temperature values at the first location may be determined based on the recalculated constant value.
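
Following the abstract's description (a first temperature directly proportional to the acoustic frequency and a constant, then the constant updated from a back-calculated temperature), a minimal sketch with illustrative values:

```python
# Illustrative recalibration loop; the frequency, constant, and temperatures
# are made-up values, and the linear relationship follows the abstract.

def acoustic_temperature(freq_hz, k):
    """First temperature value: directly proportional to frequency."""
    return k * freq_hz

def recalibrate(freq_hz, back_calculated_t):
    """Recalculated constant that reconciles the acoustic estimate with the
    temperature back-calculated from the second measurement location."""
    return back_calculated_t / freq_hz

k = 0.52
t1 = acoustic_temperature(1850.0, k)            # initial acoustic estimate
k = recalibrate(1850.0, back_calculated_t=941.5)
t1_new = acoustic_temperature(1850.0, k)        # now matches the back calculation
```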

  15. Tree value system: users guide.

    Treesearch

    J.K. Ayer Sachet; D.G. Briggs; R.D. Fight

    1989-01-01

    This paper instructs resource analysts on use of the Tree Value System (TREEVAL). TREEVAL is a microcomputer system of programs for calculating tree or stand values and volumes based on predicted product recovery. Designed for analyzing silvicultural decisions, the system can also be used for appraisals and for evaluating log bucking. The system calculates results...

  16. 31 CFR 351.16 - What do I need to know about the base denomination for redemption value calculations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What do I need to know about the base denomination for redemption value calculations? 351.16 Section 351.16 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...

  17. 40 CFR 600.207-08 - Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of vehicle-specific 5-cycle-based fuel economy values for vehicle configurations. 600.207-08 Section 600.207-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fue...

  18. Comparison of primary zone combustor liner wall temperatures with calculated predictions

    NASA Technical Reports Server (NTRS)

    Norgren, C. T.

    1973-01-01

    Calculated liner temperatures based on a steady-state radiative and convective heat balance at the liner wall were compared with experimental values. Calculated liner temperatures were approximately 8 percent higher than experimental values. A radiometer was used to experimentally determine values of flame temperature and flame emissivity. Film cooling effectiveness was calculated from an empirical turbulent mixing expression assuming a turbulent mixing level of 2 percent. Liner wall temperatures were measured in a rectangular combustor segment 6 by 12 in. and tested at pressures up to 26.7 atm and inlet temperatures up to 922 K.

  19. Analysis of Acoustic Ambient Noise in Monterey Bay, California.

    DTIC Science & Technology

    1982-12-01

    6th line: 1/3-octave band levels, calculated from subroutines "Sub7" and "Sub8" for center frequencies of 125 Hz and 250 Hz for 256 bins; calculates overall band levels for "corrected plots" based on the analyzer scale selected. Subroutines "Sub7" and "Sub8": levels are calculated as positive values to be added to the other values in eq. (3), rather than negative values that would be subtracted. "Sub7" calculates 1/3-octave band levels.

  20. Electrostatic effects in unfolded staphylococcal nuclease

    PubMed Central

    Fitzkee, Nicholas C.; García-Moreno E, Bertrand

    2008-01-01

    Structure-based calculations of pKa values and electrostatic free energies of proteins assume that electrostatic effects in the unfolded state are negligible. In light of experimental evidence showing that this assumption is invalid for many proteins, and with increasing awareness that the unfolded state is more structured and compact than previously thought, a detailed examination of electrostatic effects in unfolded proteins is warranted. Here we address this issue with structure-based calculations of electrostatic interactions in unfolded staphylococcal nuclease. The approach involves the generation of ensembles of structures representing the unfolded state, and calculation of Coulomb energies to Boltzmann weight the unfolded state ensembles. Four different structural models of the unfolded state were tested. Experimental proton binding data measured with a variant of nuclease that is unfolded under native conditions were used to establish the validity of the calculations. These calculations suggest that weak Coulomb interactions are an unavoidable property of unfolded proteins. At neutral pH, the interactions are too weak to organize the unfolded state; however, at extreme pH values, where the protein has a significant net charge, the combined action of a large number of weak repulsive interactions can lead to the expansion of the unfolded state. The calculated pKa values of ionizable groups in the unfolded state are similar but not identical to the values in small peptides in water. These studies suggest that the accuracy of structure-based calculations of electrostatic contributions to stability cannot be improved unless electrostatic effects in the unfolded state are calculated explicitly. PMID:18227429

  1. Calculation of weighted averages approach for the estimation of ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
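The weighted-averages approach can be illustrated with a minimal sketch; the function name, the use of a single condition score per site, and the sample numbers are assumptions for illustration (the study itself weights against six chemical constituents):

```python
def ping_tolerance(abundances, condition_scores):
    """Abundance-weighted average of per-site water-quality scores for one taxon.

    abundances[i]: count of the taxon at site i
    condition_scores[i]: site condition score on the 0-10 tolerance scale
    """
    total = sum(abundances)
    return sum(a * s for a, s in zip(abundances, condition_scores)) / total

# hypothetical taxon observed at three sites with made-up scores
tv = ping_tolerance([10, 5, 1], [8.0, 6.0, 2.0])
```

Sites where the taxon is abundant dominate the average, so a taxon concentrated at degraded sites receives a high tolerance value.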

  2. WE-H-207A-07: Image-Based Versus Atlas-Based Internal Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fallahpoor, M; Abbasi, M; Parach, A

    Purpose: Monte Carlo (MC) simulation is known as the gold standard method for internal dosimetry. It requires the radionuclide distribution from PET or SPECT and the body structure from CT for accurate dose calculation. The manual or semi-automatic segmentation of organs from CT images is a major obstacle. The aim of this study is to compare dosimetry results based on the patient's own CT and on a digital humanoid phantom used as an atlas with pre-specified organs. Methods: SPECT-CT images of a 50 year old woman who underwent bone pain palliation with Samarium-153 EDTMP for osseous metastases from breast cancer were used. The anatomical data and attenuation map were extracted from the SPECT/CT and from three XCAT digital phantoms with different BMIs, one matched (38.8) and two unmatched (35.5 and 36.7) with the patient's BMI of 38.3. Segmentation of the patient's organs in the CT image was performed using itk-SNAP software. The GATE MC simulator was used for dose calculation. Specific absorbed fractions (SAFs) and S-values were calculated for the segmented organs. Results: The differences between SAFs and S-values are high across the different anatomical data, ranging from −13% to 39% for SAF values and −109% to 79% for S-values in different organs. In the spine, the clinically important target organ for Samarium therapy, the differences in the S-values and SAF values between the XCAT phantom and the CT remain high even when the phantom with matching BMI is employed (53.8% relative difference in S-value and 26.8% difference in SAF). However, the whole body dose values were the same between the calculations based on the CT and the XCAT phantoms with different BMIs. Conclusion: The results indicated that atlas-based dosimetry using an XCAT phantom, even with a BMI matched to the patient, leads to considerable errors compared to image-based dosimetry that uses the patient's own CT. Patient-specific dosimetry using the CT image is essential for accurate results.

  3. The pKa Cooperative: A Collaborative Effort to Advance Structure-Based Calculations of pKa values and Electrostatic Effects in Proteins

    PubMed Central

    Nielsen, Jens E.; Gunner, M. R.; Bertrand García-Moreno, E.

    2012-01-01

    The pKa Cooperative http://www.pkacoop.org was organized to advance development of accurate and useful computational methods for structure-based calculation of pKa values and electrostatic energy in proteins. The Cooperative brings together laboratories with expertise and interest in theoretical, computational and experimental studies of protein electrostatics. To improve structure-based energy calculations it is necessary to better understand the physical character and molecular determinants of electrostatic effects. The Cooperative thus intends to foment experimental research into fundamental aspects of proteins that depend on electrostatic interactions. It will maintain a depository for experimental data useful for critical assessment of methods for structure-based electrostatics calculations. To help guide the development of computational methods the Cooperative will organize blind prediction exercises. As a first step, computational laboratories were invited to reproduce an unpublished set of experimental pKa values of acidic and basic residues introduced in the interior of staphylococcal nuclease by site-directed mutagenesis. The pKa values of these groups are unique and challenging to simulate owing to the large magnitude of their shifts relative to normal pKa values in water. Many computational methods were tested in this 1st Blind Prediction Challenge and critical assessment exercise. A workshop was organized in the Telluride Science Research Center to assess objectively the performance of many computational methods tested on this one extensive dataset. This volume of PROTEINS: Structure, Function, and Bioinformatics introduces the pKa Cooperative, presents reports submitted by participants in the blind prediction challenge, and highlights some of the problems in structure-based calculations identified during this exercise. PMID:22002877

  4. Fundamental studies on kinetic isotope effect (KIE) of hydrogen isotope fractionation in natural gas systems

    USGS Publications Warehouse

    Ni, Y.; Ma, Q.; Ellis, G.S.; Dai, J.; Katz, B.; Zhang, S.; Tang, Y.

    2011-01-01

    Based on quantum chemistry calculations for normal octane homolytic cracking, a kinetic hydrogen isotope fractionation model for methane, ethane, and propane formation is proposed. The activation energy differences between D-substituted and non-substituted methane, ethane, and propane are 318.6, 281.7, and 280.2 cal/mol, respectively. In order to determine the effect of the entropy contribution for hydrogen isotopic substitution, a transition state for ethane bond rupture was determined based on density functional theory (DFT) calculations. The kinetic isotope effect (KIE) associated with bond rupture in D- and H-substituted ethane results in a frequency factor ratio of 1.07. Based on the proposed mathematical model of hydrogen isotope fractionation, one can potentially quantify natural gas thermal maturity from measured hydrogen isotope values. Calculated gas maturity values determined by the proposed mathematical model using δD values in ethane from several basins in the world are in close agreement with similar predictions based on the δ13C composition of ethane. However, gas maturity values calculated from field data of methane and propane using both hydrogen and carbon kinetic isotopic models do not agree as closely. It is possible that δD values in methane may be affected by microbial mixing and that propane values might be more susceptible to hydrogen exchange with water or to analytical errors. Although the model used in this study is quite preliminary, the results demonstrate that kinetic isotope fractionation effects in hydrogen may be useful in quantitative models of natural gas generation, and that δD values in ethane might be more suitable for modeling than comparable values in methane and propane. © 2011 Elsevier Ltd.
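The kinetic isotope effect implied by the quoted activation-energy difference and frequency-factor ratio can be evaluated with a simple Arrhenius-style expression; the evaluation temperature (473.15 K, i.e. 200 °C) is an assumption for illustration, not a value from the paper:

```python
import math

R_CAL = 1.987204  # gas constant in cal/(mol*K)

def kie(delta_ea_cal, a_ratio=1.0, temp_k=473.15):
    """k_H/k_D from an Arrhenius-style expression, given the activation-energy
    difference (Ea_D - Ea_H, in cal/mol) and the frequency-factor ratio A_H/A_D."""
    return a_ratio * math.exp(delta_ea_cal / (R_CAL * temp_k))

# ethane values quoted in the abstract: dEa = 281.7 cal/mol, A-ratio = 1.07
k_ratio = kie(281.7, a_ratio=1.07)
```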

  5. 40 CFR Appendix E to Part 63 - Monitoring Procedure for Nonthoroughly Mixed Open Biological Treatment Systems at Kraft Pulp...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... for each data set that is collected during the initial performance test. A single composite value of... Multiple Zone Concentrations Calculations Procedure based on inlet and outlet concentrations (Column A of... composite value of Ks discussed in section III.C of this appendix. This value of Ks is calculated during the...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Özdemir, Semra Bayat; Demiralp, Metin

    The determination of energy states is a highly studied issue in quantum mechanics. Based on expectation value dynamics, energy states can be observed, but the conditions and calculations vary depending on the system constructed. In this work, a symmetric exponential anharmonic oscillator is considered, and a recursive approximation method is developed to find its ground energy state. The use of majorant values facilitates the approximate calculation of expectation values.

  7. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Byung Chan; Park, Seong-Ook

    In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated based on the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
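The comparison of a short averaging window against the full 6 min window can be sketched as follows; the sampling period and field values are hypothetical:

```python
def time_average(samples, sample_period_s, window_s):
    """Mean of the samples falling within the first window_s seconds."""
    n = max(1, int(window_s / sample_period_s))
    return sum(samples[:n]) / n

# hypothetical field-strength samples (V/m), one reading every 10 s
fields = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0] * 6   # 36 samples = 6 min
avg_1min = time_average(fields, 10, 60)
avg_6min = time_average(fields, 10, 360)
drift = abs(avg_1min - avg_6min)   # the difference the paper analyzes
```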

  8. Dose equivalent rate constants and barrier transmission data for nuclear medicine facility dose calculations and shielding design.

    PubMed

    Kusano, Maggie; Caldwell, Curtis B

    2014-07-01

    A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. 
The data provided by this paper should assist in improving the accuracy and tractability of dose and shielding calculations for nuclear medicine facility design.
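The three-parameter Archer equation used for the transmission-curve fits has the standard form B(x) = [(1 + β/α)·e^(αγx) − β/α]^(−1/γ); the parameter values below are placeholders for illustration, not fitted coefficients from the paper:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Three-parameter Archer fit for broad-beam barrier transmission:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# placeholder parameters for illustration only
tf0 = archer_transmission(0.0, alpha=1.5, beta=0.3, gamma=0.8)  # zero thickness
tf1 = archer_transmission(1.0, alpha=1.5, beta=0.3, gamma=0.8)  # 1 cm barrier
```

At zero thickness the expression reduces to 1, and for positive α the transmission factor decreases monotonically with thickness, which is why the fit is convenient for shielding calculations.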

  9. Calculated quantum yield of photosynthesis of phytoplankton in the Marine Light-Mixed Layers (59 deg N, 21 deg W)

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.

    1995-01-01

    The quantum yield of photosynthesis (mol C/mol photons) was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways are presented here for calculating the photons absorbed (AP) by phytoplankton for the purpose of calculating the quantum yield (phi). The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived through remote sensing measurements. We show that the results of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the reported values of other studies. In deep waters, however, the simple nonspectral model may yield quantum yield values much higher than theoretically possible.

  10. Development of a nuclear technique for monitoring water levels in pressurized vessels

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Davis, W. T.; Mall, G. H.

    1983-01-01

    A new technique for monitoring water levels in pressurized stainless steel cylinders was developed. It is based on differences in the attenuation coefficients of water and air for Cs-137 (662 keV) gamma rays. Experimentally observed gamma ray counting rates with and without water in a model reservoir cylinder were compared with corresponding calculated values for two different gamma ray detection threshold energies. Calculated values include the effects of multiple scattering and the attendant gamma ray energy reductions. The agreement between the measured and calculated values is reasonably good. Computer programs for calculating angular and spectral distributions of scattered radiation in various media are included.
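The underlying attenuation comparison is a Beer-Lambert calculation; a minimal sketch, where the empty-cylinder rate and path length are hypothetical and the 662 keV linear attenuation coefficient of water is an approximate literature value:

```python
import math

def count_rate(n0, mu_per_cm, path_cm):
    """Narrow-beam attenuated counting rate, N = N0 * exp(-mu * x)."""
    return n0 * math.exp(-mu_per_cm * path_cm)

MU_WATER_662KEV = 0.0857   # approximate linear attenuation coefficient of water, 1/cm
n_empty = 1000.0           # hypothetical counting rate with the cylinder empty
n_full = count_rate(n_empty, MU_WATER_662KEV, 10.0)  # 10 cm of water in the beam
ratio = n_full / n_empty   # the measurable signature of the water level
```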

  11. Regional potential evapotranspiration in arid climates based on temperature, topography and calculated solar radiation

    NASA Astrophysics Data System (ADS)

    Shevenell, Lisa

    1999-03-01

    Values of evapotranspiration are required for a variety of water planning activities in arid and semi-arid climates, yet data requirements are often large, and it is costly to obtain this information. This work presents a method where a few, readily available data (temperature, elevation) are required to estimate potential evapotranspiration (PET). A method using measured temperature and the calculated ratio of total to vertical radiation (after the work of Behnke and Maxey, 1969) to estimate monthly PET was applied for the months of April-October and compared with pan evaporation measurements. The test area used in this work was in Nevada, which has 124 weather stations that record sufficient amounts of temperature data. The calculated PET values were found to be well correlated (R2 = 0.940-0.983, slopes near 1.0) with mean monthly pan evaporation measurements at eight weather stations. In order to extrapolate these calculated PET values to areas without temperature measurements and to sites at differing elevations, the state was divided into five regions based on latitude, and linear regressions of PET versus elevation were calculated for each of these regions. These extrapolated PET values generally compare well with the pan evaporation measurements (R2 = 0.926-0.988, slopes near 1.0). The estimated values are generally somewhat lower than the pan measurements, in part because the effects of wind are not explicitly considered in the calculations, and near-freezing temperatures result in a calculated PET of zero at higher elevations in the spring months. The calculated PET values for April-October are 84-100% of the measured pan evaporation values. Using digital elevation models in a geographical information system, calculated values were adjusted for slope and aspect, and the data were used to construct a series of maps of monthly PET.
The resultant maps show a realistic distribution of regional variations in PET throughout Nevada which inversely mimics topography. The general methods described here could be used to estimate regional PET in other arid western states (e.g. New Mexico, Arizona, Utah) and arid regions world-wide (e.g. parts of Africa).

  13. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this would be computationally very expensive, as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. By using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ), and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare them with experimental results and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to the values calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
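The Cheng-Prusoff relation mentioned above links IC50 to the inhibition constant for a competitive inhibitor; a minimal sketch with hypothetical Ki, [S], and Km values (this is the classical relation, not the paper's ab initio expression):

```python
def ic50_cheng_prusoff(ki, substrate_conc, km):
    """Cheng-Prusoff relation for a competitive inhibitor:
    IC50 = Ki * (1 + [S]/Km)."""
    return ki * (1.0 + substrate_conc / km)

# hypothetical inhibitor: Ki = 50 nM, assayed at a substrate concentration equal to Km
ic50 = ic50_cheng_prusoff(50.0, substrate_conc=100.0, km=100.0)
```

At [S] = Km the measured IC50 is exactly twice Ki, which is why IC50 values are assay-dependent while Ki is not.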

  13. EuroFIR Guideline on calculation of nutrient content of foods for food business operators.

    PubMed

    Machackova, Marie; Giertlova, Anna; Porubska, Janka; Roe, Mark; Ramos, Carlos; Finglas, Paul

    2018-01-01

    This paper presents a Guideline for calculating nutrient content of foods by calculation methods for food business operators and presents data on compliance between calculated values and analytically determined values. In the EU, calculation methods are legally valid to determine the nutrient values of foods for nutrition labelling (Regulation (EU) No 1169/2011). However, neither a specific calculation method nor rules for use of retention factors are defined. EuroFIR AISBL (European Food Information Resource) has introduced a Recipe Calculation Guideline based on the EuroFIR harmonized procedure for recipe calculation. The aim is to provide food businesses with a step-by-step tool for calculating nutrient content of foods for the purpose of nutrition declaration. The development of this Guideline and use in the Czech Republic is described and future application to other Member States is discussed. Limitations of calculation methods and the importance of high quality food composition data are discussed. Copyright © 2017. Published by Elsevier Ltd.

  14. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    Development of a digital indoor archery simulator based on embedded systems offers a solution to the limited availability of adequate fields or open space, especially in big cities. The device requires simulations to calculate the value scored on the target, based on an approach defined by parabolic motion with the arrow's initial velocity and direction as variables. The simulator is complemented by an initial-velocity measuring device using ultrasonic sensors and a direction-measuring device using a digital camera. The methodology uses research and development of application software with a modeling and simulation approach. The research objective is to create a simulation application that calculates the value scored by the arrows, as a preliminary stage for development of the archery simulator device. Implementing the scoring calculation in an application program produces an archery game simulation that can serve as a reference for development of a digital indoor archery simulator with embedded systems using ultrasonic sensors and web cameras. The developed application compares the simulated calculation against the outer radius of the circle captured by a camera at a distance of three meters.
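The parabolic-motion scoring approach can be sketched by computing where a drag-free arrow crosses the target plane; the launch parameters below are illustrative assumptions, not values from the paper:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def height_at_target(v0, angle_deg, distance_m, release_height=0.0):
    """Vertical position of a drag-free arrow when it reaches the target plane."""
    theta = math.radians(angle_deg)
    t = distance_m / (v0 * math.cos(theta))   # time to cross the horizontal distance
    return release_height + v0 * math.sin(theta) * t - 0.5 * G * t * t

# hypothetical shot: 50 m/s arrow, 1 degree launch angle, target 18 m away
y_hit = height_at_target(50.0, 1.0, 18.0, release_height=1.5)
```

The offset of `y_hit` from the target center, combined with the lateral offset from the camera image, would map to a scoring ring.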

  15. Determination of safety distance limits for a human near a cellular base station antenna, adopting the IEEE standard or ICNIRP guidelines.

    PubMed

    Cooper, Justin; Marx, Bernd; Buhl, Johannes; Hombach, Volker

    2002-09-01

    This paper investigates the minimum distance for a human body in the near field of a cellular telephone base station antenna for which there is compliance with the IEEE or ICNIRP threshold values for radio frequency electromagnetic energy absorption in the human body. First, local maximum specific absorption rates (SARs), measured and averaged over volumes equivalent to 1 and to 10 g tissue within the trunk region of a physical, liquid filled shell phantom facing and irradiated by a typical GSM 900 base station antenna, were compared to corresponding calculated SAR values. The calculation used a homogeneous Visible Human body model in front of a simulated base station antenna of the same type. Both real and simulated base station antennas operated at 935 MHz. Antenna-body distances were between 1 and 65 cm. The agreement between measurements and calculations was excellent. This gave confidence in the subsequent calculated SAR values for the heterogeneous Visible Human model, for which each tissue was assigned the currently accepted values for permittivity and conductivity at 935 MHz. Calculated SAR values within the trunk of the body were found to be about double those for the homogeneous case. When the IEEE standard and the ICNIRP guidelines are both to be complied with, the local SAR averaged over 1 g tissue was found to be the determining parameter. Emitted power values from the antenna that produced the maximum SAR value over 1 g specified in the IEEE standard at the base station are less than those needed to reach the ICNIRP threshold specified for the local SAR averaged over 10 g. For the GSM base station antenna investigated here operating at 935 MHz with 40 W emitted power, the model indicates that the human body should not be closer to the antenna than 18 cm for controlled environment exposure, or about 95 cm for uncontrolled environment exposure. These safe distance limits are for SARs averaged over 1 g tissue. 
The corresponding safety distance limits under the ICNIRP guidelines for SAR taken over 10 g tissue are 5 cm for occupational exposure and about 75 cm for general-public exposure. Copyright 2002 Wiley-Liss, Inc.

  16. WE-AB-BRA-06: 4DCT-Ventilation: A Novel Imaging Modality for Thoracic Surgical Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Y; Jackson, M; Schubert, L

    Purpose: The current standard-of-care imaging used to evaluate lung cancer patients for surgical resection is nuclear-medicine ventilation. Surgeons use nuclear-medicine images along with pulmonary function tests (PFT) to calculate percent predicted postoperative (%PPO) PFT values by estimating the amount of functioning lung that would be lost with surgery. 4DCT-ventilation is an emerging imaging modality developed in radiation oncology that uses 4DCT data to calculate lung ventilation maps. We perform the first retrospective study to assess the use of 4DCT-ventilation for pre-operative surgical evaluation. The purpose of this work was to compare %PPO-PFT values calculated with 4DCT-ventilation and nuclear-medicine imaging. Methods: 16 lung cancer patients retrospectively reviewed had undergone 4DCTs, nuclear-medicine imaging, and had Forced Expiratory Volume in 1 second (FEV1) acquired as part of a standard PFT. For each patient, 4DCT data sets, spatial registration, and a density-change based model were used to compute 4DCT-ventilation maps. Both 4DCT and nuclear-medicine images were used to calculate %PPO-FEV1 using %PPO-FEV1=pre-operative FEV1*(1-fraction of total ventilation of resected lung). Fraction of ventilation resected was calculated assuming lobectomy and pneumonectomy. The %PPO-FEV1 values were compared between the 4DCT-ventilation-based calculations and the nuclear-medicine-based calculations using correlation coefficients and average differences. Results: The correlation between %PPO-FEV1 values calculated with 4DCT-ventilation and nuclear-medicine were 0.81 (p<0.01) and 0.99 (p<0.01) for pneumonectomy and lobectomy respectively. The average difference between the 4DCT-ventilation based and the nuclear-medicine-based %PPO-FEV1 values were small, 4.1±8.5% and 2.9±3.0% for pneumonectomy and lobectomy respectively.
Conclusion: The high correlation results provide a strong rationale for a clinical trial translating 4DCT-ventilation to the surgical domain. Compared to nuclear-medicine, 4DCT-ventilation is cheaper, does not require a radioactive contrast agent, provides a faster imaging procedure, and has improved spatial resolution. 4DCT-ventilation can reduce the cost and imaging time for patients while providing improved spatial accuracy and quantitative results for surgeons. YV discloses grant from State of Colorado.
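The %PPO-FEV1 formula quoted in the abstract is straightforward to encode; the patient numbers below are hypothetical:

```python
def ppo_fev1(preop_fev1, fraction_ventilation_resected):
    """Percent predicted postoperative FEV1, per the formula in the abstract:
    %PPO-FEV1 = pre-op FEV1 * (1 - fraction of total ventilation resected)."""
    return preop_fev1 * (1.0 - fraction_ventilation_resected)

# hypothetical patient: pre-op FEV1 of 80% predicted; the resected lobe
# carries 20% of total ventilation on the ventilation image
ppo = ppo_fev1(80.0, 0.20)
```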

  17. Promising thermoelectric properties of phosphorenes.

    PubMed

    Sevik, Cem; Sevinçli, Hâldun

    2016-09-02

    Electronic, phononic, and thermoelectric transport properties of single layer black- and blue-phosphorene structures are investigated with first-principles based ballistic electron and phonon transport calculations employing hybrid functionals. The maximum values of the room temperature thermoelectric figure of merit, ZT, along the armchair and zigzag directions of black-phosphorene, ∼0.5 and ∼0.25, are found to be smaller than those obtained with first-principles based semiclassical Boltzmann transport theory calculations. On the other hand, the maximum room temperature ZT of blue-phosphorene is predicted to be substantially high, and remarkable values as high as 2.5 are obtained at elevated temperatures. Although these figures are obtained at the ballistic limit, our findings mark the strong possibility of high thermoelectric performance of blue-phosphorene in new generation thermoelectric applications.

  18. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the 5-cycle city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise, the procedure in § 600...

  19. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the 5-cycle city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise, the procedure in § 600...

  20. 40 CFR 600.207-12 - Calculation and use of vehicle-specific 5-cycle-based fuel economy and CO2 emission values for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the 5-cycle city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise, the procedure in § 600...

  1. VHDL-AMS modelling and simulation of a planar electrostatic micromotor

    NASA Astrophysics Data System (ADS)

    Endemaño, A.; Fourniols, J. Y.; Camon, H.; Marchese, A.; Muratet, S.; Bony, F.; Dunnigan, M.; Desmulliez, M. P. Y.; Overton, G.

    2003-09-01

    System level simulation results of a planar electrostatic micromotor, based on analytical models of the static and dynamic torque behaviours, are presented. A planar variable capacitance (VC) electrostatic micromotor designed, fabricated and tested at LAAS (Toulouse) in 1995 is simulated using the high level language VHDL-AMS (VHSIC (very high speed integrated circuits) hardware description language-analog mixed signal). The analytical torque model is obtained by first calculating the overlaps and capacitances between different electrodes based on a conformal mapping transformation. Capacitance values on the order of 10^-16 F and torque values on the order of 10^-11 N m have been calculated, in agreement with previous measurements and simulations of this type of motor. A dynamic model has been developed for the motor by calculating the inertia coefficient and estimating the friction coefficient based on values calculated previously for other similar devices. Starting voltage results obtained from experimental measurement are in good agreement with our proposed simulation model. Simulation results of starting voltage values, step response, switching response and continuous operation of the micromotor, based on the dynamic model of the torque, are also presented. Four VHDL-AMS blocks were created, validated and simulated for the power supply, excitation control, micromotor torque creation and micromotor dynamics. These blocks can be considered as the initial phase towards the creation of intellectual property (IP) blocks for microsystems in general and electrostatic micromotors in particular.

  2. Stock price prediction using geometric Brownian motion

    NASA Astrophysics Data System (ADS)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the price prediction itself, the expected-price formulation is derived and a 95% confidence level is set. In stock price prediction using the geometric Brownian motion model, the algorithm starts from calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and calculating the 95% confidence interval. Based on the research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as evidenced by a forecast MAPE value ≤ 20%.
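
    The pipeline this abstract lists (returns → drift and volatility estimates → simulated forecast → MAPE → 95% confidence band) can be sketched as below; the path count, seed, and window choices are illustrative assumptions, not the paper's settings.

```python
import math
import random
import statistics

def gbm_forecast(prices, horizon, n_paths=2000, seed=42):
    """Forecast a future price with geometric Brownian motion.

    Drift and volatility are estimated from historical daily log returns,
    following the order of steps the abstract describes.
    """
    returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    mu = statistics.mean(returns)        # drift estimate
    sigma = statistics.stdev(returns)    # volatility estimate
    rng = random.Random(seed)
    s0 = prices[-1]
    finals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(horizon):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((mu - 0.5 * sigma ** 2) + sigma * z)
        finals.append(s)
    finals.sort()
    expected = statistics.mean(finals)
    lo = finals[int(0.025 * n_paths)]    # empirical 95% confidence band
    hi = finals[int(0.975 * n_paths)]
    return expected, (lo, hi)

def mape(actual, forecast):
    """Mean absolute percentage error; <= 20% counts as accurate here."""
    return 100 * statistics.mean(abs((a - f) / a)
                                 for a, f in zip(actual, forecast))
```

    A forecast would then be judged acceptable when `mape` on a held-out window stays at or below 20%, the threshold the abstract cites.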

  3. 13C and (15)N chemical shift tensors in adenosine, guanosine dihydrate, 2'-deoxythymidine, and cytidine.

    PubMed

    Stueber, Dirk; Grant, David M

    2002-09-04

    The (13)C and (15)N chemical shift tensor principal values for adenosine, guanosine dihydrate, 2'-deoxythymidine, and cytidine are measured on natural abundance samples. Additionally, the (13)C and (15)N chemical shielding tensor principal values in these four nucleosides are calculated utilizing various theoretical approaches. Embedded ion method (EIM) calculations improve significantly the precision with which the experimental principal values are reproduced over calculations on the corresponding isolated molecules with proton-optimized geometries. The (13)C and (15)N chemical shift tensor orientations are reliably assigned in the molecular frames of the nucleosides based upon chemical shielding tensor calculations employing the EIM. The differences between principal values obtained in EIM calculations and in calculations on isolated molecules with proton positions optimized inside a point charge array are used to estimate the contributions to chemical shielding arising from intermolecular interactions. Moreover, the (13)C and (15)N chemical shift tensor orientations and principal values correlate with the molecular structure and the crystallographic environment for the nucleosides and agree with data obtained previously for related compounds. The effects of variations in certain EIM parameters on the accuracy of the shielding tensor calculations are investigated.

  4. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-function-based method than by conventional methods based on observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
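
    As a minimal sketch of the copula-based conditional probability, the snippet below uses the Clayton copula, one common Archimedean member (the abstract does not name the three members or their fitted parameters, so this is an illustrative assumption). The conditional distribution ∂C/∂u gives the probability of the next period's inflow quantile given the current one, from which a transition probability between inflow classes follows by differencing.

```python
def clayton_conditional(u, v, theta):
    """P(V <= v | U = u) for a Clayton copula with theta > 0.

    This is dC/du, the conditional distribution used to build a
    transition probability between adjacent-period inflow quantiles.
    """
    return u ** (-theta - 1) * (u ** (-theta) + v ** (-theta) - 1) ** (-1 / theta - 1)

def transition_prob(u, v_lo, v_hi, theta):
    """Probability that the next-period inflow quantile falls in
    [v_lo, v_hi], conditioned on the current-period quantile u."""
    return clayton_conditional(u, v_hi, theta) - clayton_conditional(u, v_lo, theta)
```

    Evaluating `transition_prob` over a partition of [0, 1] for each current-state quantile fills one row of the transition probability matrix.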

  5. The Role of Economic Uncertainty on the Block Economic Value - a New Valuation Approach / Rola Czynnika Niepewności Przy Obliczaniu Wskaźnika Rentowności - Nowe Podejście

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Ataee-Pour, M.

    2012-12-01

    The block economic value (EV) is one of the most important parameters in mine evaluation. This parameter affects significant factors such as the mining sequence, the final pit limit and the net present value. Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximizes the pit value under technical and operational constraints. Therefore, it is necessary to calculate the block economic value correctly at the first stage of the mine planning process. An unrealistic block economic value estimate may cause mining project managers to make wrong decisions and thus impose irreparable losses on the project. Effective parameters such as the metal price, operating cost and grade are always assumed certain in conventional methods of EV calculation, although these parameters are obviously uncertain in nature; consequently, the results of conventional methods are usually far from reality. In order to solve this problem, a new technique based on a binomial tree developed in this research is used. This method can calculate the EV and project PV under economic uncertainty. In this paper, the EV and project PV were determined first using the Whittle formula based on certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost uncertainties; finally, the results were compared. It is concluded that applying the metal price and cost uncertainties makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
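
    The abstract does not give the tree's construction, but a standard binomial lattice for the metal price conveys the idea: evolve the price up/down, then average the block value over the terminal nodes. All parameter values below (grade, recovery, cost, tonnage, up-move probability) are hypothetical illustration numbers, not the paper's data, and the single-factor tree stands in for the paper's multivariate one.

```python
import math

def price_tree(p0, sigma, dt, steps):
    """Binomial lattice of metal prices with up factor u = exp(sigma*sqrt(dt))."""
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    return [[p0 * u ** (n - i) * d ** i for i in range(n + 1)]
            for n in range(steps + 1)]

def expected_block_ev(p0, sigma, dt, steps, q, grade, recovery, cost, tonnage):
    """Expected block economic value over the last tree level.

    q is the probability of an up move; EV per tonne is
    price * grade * recovery - cost, scaled by tonnage.
    """
    tree = price_tree(p0, sigma, dt, steps)
    n = steps
    ev = 0.0
    for i, price in enumerate(tree[n]):
        prob = math.comb(n, i) * q ** (n - i) * (1 - q) ** i
        ev += prob * (price * grade * recovery - cost) * tonnage
    return ev
```

    With zero volatility the tree collapses to the deterministic Whittle-style value, which is a convenient sanity check on the lattice.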

  6. Calculation of the acid-base equilibrium constants at the alumina/electrolyte interface from the pH dependence of the adsorption of singly charged ions (Na+, Cl-)

    NASA Astrophysics Data System (ADS)

    Gololobova, E. G.; Gorichev, I. G.; Lainer, Yu. A.; Skvortsova, I. V.

    2011-05-01

    A procedure was proposed for the calculation of the acid-base equilibrium constants at an alumina/electrolyte interface from experimental data on the adsorption of singly charged ions (Na+, Cl-) at various pH values. The calculated constants (pK1^0 = 4.1, pK2^0 = 11.9, pK3^0 = 8.3, and pK4^0 = 7.7) are shown to agree with the values obtained from the experimental pH dependence of the electrokinetic potential and the results of potentiometric titration of Al2O3 suspensions.

  7. 31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for book-entry Series I savings bonds? 359.55 Section 359.55 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES I Book-Entry Series I Savings Bonds § 359.55 How are redemption values calculated for book-entry Series I savings bonds? We base current redemption...

  8. 31 CFR 351.70 - How are redemption values calculated for book-entry Series EE savings bonds?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for book-entry Series EE savings bonds? 351.70 Section 351.70 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.70 How are redemption values calculated for book-entry Series EE savings bonds? We base current redemption...

  9. 31 CFR 351.70 - How are redemption values calculated for book-entry Series EE savings bonds?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for book-entry Series EE savings bonds? 351.70 Section 351.70 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.70 How are redemption values calculated for book-entry Series EE savings bonds? We base current redemption...

  10. Use of petroleum-based correlations and estimation methods for synthetic fuels

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.

    1980-01-01

    Correlations of hydrogen content with aromatics content, heat of combustion, and smoke point are derived for some synthetic fuels prepared from oil and coal syncrudes. Comparing the results of the aromatics content with correlations derived for petroleum fuels shows that the shale-derived fuels fit the petroleum-based correlations, but the coal-derived fuels do not. The correlations derived for heat of combustion and smoke point are comparable to some found for petroleum-based correlations. Calculated values of hydrogen content and of heat of combustion are obtained for the synthetic fuels by use of ASTM estimation methods. Comparisons of the measured and calculated values show biases in the equations that exceed the critical statistics values. Comparison of the measured hydrogen content by the standard ASTM combustion method with that by a nuclear magnetic resonance (NMR) method shows a decided bias. The comparison of the calculated and measured NMR hydrogen contents shows a difference similar to that found with petroleum fuels.

  11. An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines

    NASA Technical Reports Server (NTRS)

    Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng

    2014-01-01

    We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths of both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure broadening coefficients of isotropic Raman lines, is also used for IR lines. With this improved model, which takes line-coupling effects into account, the calculated semi-classical widths are significantly reduced and closer to the measured ones.

  12. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    Method calculates admittance from easily obtainable values. New method for calculating acoustic-liner admittance in rectangular duct with grazing flow based on finite-element discretization of acoustic field and reposing of unknown admittance value as linear eigenvalue problem on admittance value. Problem solved by Gaussian elimination. Unlike existing methods, present method extendable to mean flows with two-dimensional boundary layers as well. In presence of shear, results of method compared well with results of Runge-Kutta integration technique.

  13. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkan, Nureddin

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region, as well as B(E2) and B(M1) values, can be calculated by using the PHINT and/or NP-BOS codes. Correct calculations of the energies must be obtained to produce such calculations, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), one of the models of nuclear structure physics. Here, the big problem is to find the best-fitted parameter values of the model. So, by using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for ^{102-110}Pd and ^{102-110}Ru isotopes were first obtained and then the energies were calculated. In the end, it was seen that the calculated results are in good agreement with the experimental ones. In addition, it was shown that the energy values obtained by using EWofFP-IBM are clearly better than the previous theoretical data.

  14. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... value. 351.405 Section 351.405 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE... constructed value as the basis for normal value where: neither the home market nor a third country market is... a fictitious market are disregarded; no contemporaneous sales of comparable merchandise are...

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doma, S. B., E-mail: sbdoma@alexu.edu.eg; Shaker, M. O.; Farag, A. M.

    The variational Monte Carlo method is applied to investigate the ground state and some excited states of the lithium atom and its ions up to Z = 10 in the presence of an external magnetic field with γ = 0-100 arb. units. The effect of increasing field strength on the ground state energy is studied and precise values for the crossover field strengths are obtained. Our calculations are based on accurate forms of trial wave functions, which were put forward for calculating energies in the absence of a magnetic field. Furthermore, the value of γ at which the ground-state energy of the lithium atom approaches zero was calculated. The obtained results are in good agreement with the most recent values and also with the exact values.

  16. [Gas Concentration Measurement Based on the Integral Value of Absorptance Spectrum].

    PubMed

    Liu, Hui-jun; Tao, Shao-hua; Yang, Bing-chu; Deng, Hong-gui

    2015-12-01

    The absorptance spectrum of a gas is the basis for the qualitative and quantitative analysis of the gas by the Lambert-Beer law. The integral value of the absorptance spectrum is an important parameter describing the characteristics of gas absorption. Based on the measured absorptance spectrum of a gas, we collected the required data from the HITRAN database, chose one of the spectral lines, calculated the integral value of the absorptance spectrum in the frequency domain, and then substituted the integral value into the Lambert-Beer law to obtain the concentration of the detected gas. By calculating the integral value of the absorptance spectrum, we can avoid the more complicated calculation of the spectral line function and the need for a series of standard gases for calibration, so the gas concentration measurement becomes simpler and faster. We studied the trends of the integral values of the absorptance spectra versus temperature. Since a temperature variation causes a corresponding variation in pressure, we studied these trends both for pressure held constant with temperature and for pressure changing with temperature. In both cases, we found that the integral values of the absorptance spectra first increase, then decrease, and finally stabilize with increasing temperature, but the ranges of the specific trend differ between the two cases. In the experiments, we found that the relative errors of the integral values of the absorptance spectrum were much higher than 1% and still increased with temperature when only the change of temperature was considered and the effect of temperature on pressure was completely ignored, whereas the relative errors remained almost constant at about 1% when the pressure was taken to vary with temperature. As the integral value of the absorptance spectrum varies with temperature and the calculation error fluctuates over temperature ranges, when using integral values of the absorptance spectrum in gas measurement we should select a suitable temperature range to obtain a more accurate measurement result.
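
    A minimal numerical sketch of the procedure: integrate the measured absorbance over the line and invert the integrated Lambert-Beer relation (A_int = S·N·L) for the number density. The line strength S would come from HITRAN; the grid, values, and units below are synthetic illustrations, not data from the paper.

```python
def trapz(y, x):
    """Trapezoidal integral of samples y over grid x (no NumPy needed)."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for x0, x1, y0, y1 in zip(x, x[1:], y, y[1:]))

def concentration_from_integral(nu, alpha, line_strength, path_length):
    """Gas concentration from an integrated absorbance line.

    Integrated Lambert-Beer: integral of absorbance over wavenumber
    equals S * N * L, so N = A_int / (S * L).

    nu: wavenumber grid (cm^-1); alpha: absorbance samples;
    line_strength S: cm^-1 / (molecule cm^-2), as tabulated in HITRAN;
    path_length L: cm. Returns number density N (molecule cm^-3).
    """
    a_int = trapz(alpha, nu)
    return a_int / (line_strength * path_length)
```

    Integrating over the whole line is what removes the dependence on the line-shape function that a peak-absorbance method would require.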

  17. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    PubMed

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. 
We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods.
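
    The 1.05 scaling is the paper's result; the per-chromophore molar absorptivities below are the widely used literature values for the unfolded state (an assumption here — the paper measured its own model compounds, and those numbers are not in the abstract).

```python
# Commonly cited model-compound molar absorptivities at 280 nm (M^-1 cm^-1),
# used as stand-ins for the paper's own model-compound measurements.
EPS_TRP, EPS_TYR, EPS_CYSTINE = 5500, 1490, 125

def calculated_extinction(n_trp, n_tyr, n_cystine):
    """Extinction coefficient of the unfolded protein from chromophore counts."""
    return n_trp * EPS_TRP + n_tyr * EPS_TYR + n_cystine * EPS_CYSTINE

def predicted_extinction(n_trp, n_tyr, n_cystine):
    """Native-protein prediction: 1.05 x the unfolded (calculated) value."""
    return 1.05 * calculated_extinction(n_trp, n_tyr, n_cystine)

def concentration_mg_ml(a280, eps, mw, path_cm=1.0):
    """Beer-Lambert: c (mg/mL) = A280 / (eps * L) * MW, eps in M^-1 cm^-1."""
    return a280 / (eps * path_cm) * mw
```

    The prediction replaces the experimentally determined coefficient, so the concentration assay inherits no variability from a separate concentration-determination method.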

  18. A rough set-based measurement model study on high-speed railway safety operation.

    PubMed

    Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun

    2018-01-01

    Aiming to solve the safety problems of high-speed railway operation and management, a new method is constructed on the basis of rough set theory and uncertainty measurement theory. The method carefully considers every factor of high-speed railway operation and realizes measurement indexes of safe operation. After analyzing in detail the factors that influence high-speed railway operation safety, a rough measurement model is constructed to describe the operation process. Based on the above considerations, this paper regroups the safety influence factors of high-speed railway operation into 16 measurement indexes covering staff, vehicle, equipment and environment, and provides a reasonable and effective theoretical method for solving multiple-attribute measurement problems in high-speed railway operation. Analyzing the operation data of 10 pivotal railway lines in China, this paper uses the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate the operation safety value. The calculation results show that the safety-value curve of the proposed method has smaller error and greater stability than that of the value function method, which verifies its feasibility and effectiveness.

  19. 40 CFR 600.206-12 - Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation and use of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values for vehicle configurations. 600.206-12 Section 600.206-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST...

  20. Theoretical Evaluation of Electromagnetic Emissions from GSM900 Mobile Telephony Base Stations in the West Bank and Gaza Strip-Palestine.

    PubMed

    Lahham, Adnan; Alkbash, Jehad Abu; ALMasri, Hussien

    2017-04-20

    Theoretical assessments of power density under far-field conditions were used to evaluate the levels of environmental electromagnetic fields from selected GSM900 macrocell base stations in the West Bank and Gaza Strip. Assessments were based on calculating the power densities using commercially available software (RF-Map from Telstra Research Laboratories, Australia). Calculations were carried out for single base stations with multiantenna systems and also for multiple base stations with multiantenna systems at 1.7 m above ground level. More than 100 power density levels were calculated at different locations around the investigated base stations, including areas accessible to the general public (schools, parks, residential areas, streets and areas around kindergartens). The maximum calculated emission level produced by a single site was 0.413 μW cm^-2, found at Hizma town near Jerusalem. The average maximum power density from all single sites was 0.16 μW cm^-2. The calculated power density levels at 100 locations distributed over the West Bank and Gaza were nearly normally distributed, with a peak value of ~0.01% of the International Commission on Non-Ionizing Radiation Protection's limit recommended for the general public. Comparison between the calculated and experimentally measured maximum power density from a base station showed that the calculations overestimate the actual measured power density by ~27%.
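
    The far-field estimate behind such assessments reduces to the textbook relation S = P·G / (4πd²). The sketch below uses that relation with the ICNIRP general-public reference level at 900 MHz (f/200 W/m² with f in MHz, i.e. 4.5 W/m² ≈ 450 μW/cm²); the actual RF-Map software also models antenna patterns and site geometry, which is not reproduced here.

```python
import math

def power_density_uW_cm2(p_watts, gain_dbi, distance_m):
    """Far-field power density S = P*G / (4*pi*d^2), in microW/cm^2.

    p_watts: power fed to the antenna; gain_dbi: antenna gain in dBi.
    Conversion: 1 W/m^2 = 100 microW/cm^2.
    """
    g = 10 ** (gain_dbi / 10)
    s_w_m2 = p_watts * g / (4 * math.pi * distance_m ** 2)
    return s_w_m2 * 100

def fraction_of_icnirp_limit(s_uw_cm2, limit_uw_cm2=450.0):
    """Fraction of the ICNIRP general-public reference level at 900 MHz."""
    return s_uw_cm2 / limit_uw_cm2
```

    Summing `power_density_uW_cm2` over the antennas visible from a point gives the multiple-base-station estimate reported at each survey location.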

  1. Metrix Matrix: A Cloud-Based System for Tracking Non-Relative Value Unit Value-Added Work Metrics.

    PubMed

    Kovacs, Mark D; Sheafor, Douglas H; Thacker, Paul G; Hardie, Andrew D; Costello, Philip

    2018-03-01

    In the era of value-based medicine, it will become increasingly important for radiologists to provide metrics that demonstrate their value beyond clinical productivity. In this article the authors describe their institution's development of an easy-to-use system for tracking value-added but non-relative value unit (RVU)-based activities. Metrix Matrix is an efficient cloud-based system for tracking value-added work. A password-protected home page contains links to web-based forms created using Google Forms, with collected data populating Google Sheets spreadsheets. Value-added work metrics selected for tracking included interdisciplinary conferences, hospital committee meetings, consulting on nonbilled outside studies, and practice-based quality improvement. Over a period of 4 months, value-added work data were collected for all clinical attending faculty members in a university-based radiology department (n = 39). Time required for data entry was analyzed for 2 faculty members over the same time period. Thirty-nine faculty members (equivalent to 36.4 full-time equivalents) reported a total of 1,223.5 hours of value-added work time (VAWT). A formula was used to calculate "value-added RVUs" (vRVUs) from VAWT. VAWT amounted to 5,793.6 vRVUs or 6.0% of total work performed (vRVUs plus work RVUs [wRVUs]). Were vRVUs considered equivalent to wRVUs for staffing purposes, this would require an additional 2.3 full-time equivalents, on the basis of average wRVU calculations. Mean data entry time was 56.1 seconds per day per faculty member. As health care reimbursement evolves with an emphasis on value-based medicine, it is imperative that radiologists demonstrate the value they add to patient care beyond wRVUs. This free and easy-to-use cloud-based system allows the efficient quantification of value-added work activities.
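
    The abstract does not state the conversion formula, but its reported totals imply roughly 5,793.6 vRVUs / 1,223.5 hours ≈ 4.74 vRVUs per hour. The sketch below treats that rate as a hypothetical parameter rather than the authors' actual formula.

```python
def vrvus(hours, vrvu_per_hour):
    """Convert value-added work time (hours) into 'value-added RVUs'."""
    return hours * vrvu_per_hour

def value_added_share(vrvu, wrvu):
    """Fraction of total work (vRVU + wRVU) that is value-added."""
    return vrvu / (vrvu + wrvu)

# Rate implied by the reported totals (an inference, not the paper's formula):
implied_rate = 5793.6 / 1223.5
```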

  2. Environmental flow allocation and statistics calculator

    USGS Publications Warehouse

    Konrad, Christopher P.

    2011-01-01

    The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files and writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
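
    A minimal sketch of the kind of statistics such a tool reports for a daily series, written in Python rather than the tool's VBA; the particular statistics chosen here (mean, 7-day minimum, flow-duration percentiles) are common hydrologic summaries, not necessarily EFASC's exact output set.

```python
import statistics

def flow_statistics(daily_flows):
    """A few hydrologic statistics for a daily streamflow series:
    mean flow, 7-day minimum, and 10%/90% flow-duration values."""
    n = len(daily_flows)
    # running 7-day means; the minimum is the classic 7Q low-flow statistic
    seven_day_means = [statistics.mean(daily_flows[i:i + 7])
                       for i in range(n - 6)]
    ranked = sorted(daily_flows, reverse=True)

    def exceedance(p):
        # flow exceeded p percent of the time (simple plotting-position rule)
        return ranked[min(n - 1, int(p / 100 * n))]

    return {
        "mean": statistics.mean(daily_flows),
        "min7": min(seven_day_means),
        "Q10": exceedance(10),
        "Q90": exceedance(90),
    }
```

    As the abstract cautions for EFASC itself, the exceedance rule here is one of several conventions and may not match USGS-published statistics exactly.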

  3. Equation of state of detonation products based on statistical mechanical theory

    NASA Astrophysics Data System (ADS)

    Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng

    2015-06-01

    The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and an improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving chemical equilibrium equations. Meanwhile, a chemical equilibrium code is developed based on the theory proposed in this article, and it is then used in three typical calculations as follows: (i) calculation of the detonation parameters of explosives, where the calculated values of the detonation velocity, detonation pressure and detonation temperature are in good agreement with experimental ones; (ii) calculation of the isentropic unloading line of the RDX explosive, whose starting point is the CJ point. Comparing with the results of the JWL EOS, it is found that the calculated value of gamma decreases monotonically using the theory presented in this paper, while a double-peak phenomenon appears using the JWL EOS.

  4. Equation of state of detonation products based on statistical mechanical theory

    NASA Astrophysics Data System (ADS)

    Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng; Iapcm Team

    2013-06-01

    The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving chemical equilibrium equations. A chemical equilibrium code is developed based on the theory proposed in this article and is then used in three typical calculations, as follows: (i) calculation of the detonation parameters of explosives, where the calculated values of the detonation velocity, detonation pressure and detonation temperature are in good agreement with experimental ones; (ii) calculation of the isentropic unloading line of RDX explosive, whose starting point is the CJ point. Comparing with the results of the JWL EOS, it is found that the calculated value of gamma decreases monotonically using the theory presented in this paper, while a double-peak phenomenon appears when using the JWL EOS.

  5. A New Algorithm to Optimize Maximal Information Coefficient

    PubMed Central

    Luo, Feng; Yuan, Zheming

    2016-01-01

    The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses a chi-square test to terminate grid optimization, removing the maximal grid-size restriction of the original ApproxMaxMI algorithm. Computational experiments show that ChiMIC maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, ChiMIC reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC better capture the complexity of functional relationships, and the statistical power of MIC calculated by ChiMIC is higher than that of MIC calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
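ApproxMaxMI's grid search and ChiMIC's full procedure are beyond a short sketch, but the chi-square termination idea can be illustrated: a grid cell keeps being refined only while the counts in the refined cells differ significantly from what the unrefined cell predicts. The uniform-split null hypothesis and the critical value below are illustrative assumptions, not the paper's exact test.

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic over matching cell counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

def keep_refining(cell_counts, critical=3.841):
    """Continue grid refinement only while the refined cells differ
    significantly from a uniform split of the parent cell.

    Illustrative stand-in for ChiMIC's chi-square termination rule;
    3.841 is the 5% critical value of chi-square with 1 d.o.f.
    """
    total = sum(cell_counts)
    expected = [total / len(cell_counts)] * len(cell_counts)
    return chi_square_stat(cell_counts, expected) > critical

refine_a = keep_refining([30, 10])   # strongly uneven split: keep refining
refine_b = keep_refining([21, 19])   # near-uniform split: terminate
```

The appeal of such a statistical stopping rule is that the grid stops growing as soon as further splits are indistinguishable from noise, rather than at a fixed maximal grid size.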

  6. 7 CFR 1437.301 - Value loss.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Value loss. 1437.301 Section 1437.301 Agriculture... Coverage Using Value § 1437.301 Value loss. (a) Special provisions are required to assess losses and.... Assistance for these commodities is calculated based on the loss of value at the time of disaster. The agency...

  7. 40 CFR 600.210-12 - Calculation of fuel economy and CO2 emission values for labeling.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... not qualify for the second method as described in § 600.115 (other than electric vehicles). The second... values for electric vehicles. Determine FTP-based city and HFET-based highway fuel economy label values for electric vehicles as described in § 600.116. Convert W-hour/mile results to miles per kW-hr and...

  8. 40 CFR 600.210-12 - Calculation of fuel economy and CO2 emission values for labeling.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... not qualify for the second method as described in § 600.115 (other than electric vehicles). The second... values for electric vehicles. Determine FTP-based city and HFET-based highway fuel economy label values for electric vehicles as described in § 600.116. Convert W-hour/mile results to miles per kW-hr and...

  9. 40 CFR 600.210-12 - Calculation of fuel economy and CO2 emission values for labeling.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... not qualify for the second method as described in § 600.115 (other than electric vehicles). The second... values for electric vehicles. Determine FTP-based city and HFET-based highway fuel economy label values for electric vehicles as described in § 600.116. Convert W-hour/mile results to miles per kW-hr and...

  10. Development of an efficient procedure for calculating the aerodynamic effects of planform variation

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Geller, E. W.

    1981-01-01

    Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is shown that computing the perturbed values directly cannot be done satisfactorily without a proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
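The second procedure, differencing loadings on the perturbed and base planforms, amounts to a finite-difference gradient in which the expensive base solution is computed once and reused. The toy loading function below is a hypothetical stand-in for the panel-method solve, used only to show the differencing structure.

```python
def loading(span, chord):
    """Toy stand-in for a panel-method loading solution; the real base
    calculation assembles and solves an influence-coefficient matrix."""
    return 0.5 * span * chord

def loading_gradient(span, chord, step=1e-6):
    """Gradient of loading with respect to planform parameters by
    differencing perturbed and base solutions (the report's second
    procedure); the base solution is computed once and reused."""
    base = loading(span, chord)
    d_span = (loading(span + step, chord) - base) / step
    d_chord = (loading(span, chord + step) - base) / step
    return d_span, d_chord

g = loading_gradient(10.0, 2.0)  # analytically (1.0, 5.0) for this toy model
```

The report's time saving comes from reusing the base-planform matrix work across the perturbed solves, which is why each perturbed loading costs roughly a tenth of the base solution.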

  11. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed-shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)^3 (M: number of basis functions).

  12. Technical Note: On the calculation of stopping-power ratio for stoichiometric calibration in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik

    2015-09-15

    Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: first, by assuming the I-value of water was an experimentally based value of 78 eV (the value proposed in the Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a manner consistent with that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni's or ICRU tables for I-values was up to 1.6%. Experimental data for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square error: 0.8% versus 1.6%).
Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a manner consistent with that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide better agreement with experiment than Janni's tables.
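As a rough sketch of conclusion (1), the SPR of a material relative to water can be written from the Bethe formula without correction terms as a ratio of electron density times the ratio of the logarithmic terms. This is an illustrative simplification, not the authors' implementation: real SPR work averages over the proton energy spectrum and uses full tissue compositions, and the sample numbers below are hypothetical.

```python
import math

MEC2_EV = 0.511e6   # electron rest energy (eV)
MPC2_MEV = 938.272  # proton rest energy (MeV)

def bethe_log(beta2, i_value_ev):
    """Logarithmic term of the Bethe formula, no correction terms."""
    return math.log(2.0 * MEC2_EV * beta2 / ((1.0 - beta2) * i_value_ev)) - beta2

def spr(rel_electron_density, i_material_ev, i_water_ev, t_proton_mev=150.0):
    """Stopping-power ratio to water from the uncorrected Bethe formula.

    Illustrative sketch: I-values in eV, proton kinetic energy in MeV,
    electron density relative to water.
    """
    gamma = 1.0 + t_proton_mev / MPC2_MEV
    beta2 = 1.0 - 1.0 / gamma ** 2
    return (rel_electron_density
            * bethe_log(beta2, i_material_ev) / bethe_log(beta2, i_water_ev))
```

The ratio structure makes the study's point visible: the result is far more sensitive to a consistent choice of I-values (numerator versus denominator) than to the small correction terms omitted from each logarithm.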

  13. Determination of the Optimal Fourier Number on the Dynamic Thermal Transmission

    NASA Astrophysics Data System (ADS)

    Bruzgevičius, P.; Burlingis, A.; Norvaišienė, R.

    2016-12-01

    This article presents the results of experimental research on transient heat transfer in a multilayered (heterogeneous) wall. Our non-steady thermal transmission simulation is based on a finite-difference calculation method. The value of the Fourier number indicates the similarity of thermal variation in the conditional layers of an enclosure. Most researchers recommend using a Fourier number of no more than 0.5 when performing calculations on dynamic (transient) heat transfer. The value of the Fourier number is determined in order to acquire reliable calculation results with optimal accuracy. To compare the simulation results with the experimental research, a transient heat transfer calculation spreadsheet was created. Our research has shown that a Fourier number of around 0.5, or even 0.32, is not sufficient (≈17 % of oscillation amplitude) for calculations of transient heat transfer in a multilayered wall. The least distorted calculation results were obtained when the multilayered enclosure was divided into conditional layers with almost equal Fourier number values and when the value of the Fourier number was around 1/6, i.e., approximately 0.17. Statistical deviation analysis using the Statistical Analysis System was applied to assess the accuracy of the spreadsheet calculation, developed on the basis of our established methodology. The mean and median absolute errors, as well as their confidence intervals, have been estimated by the two methods with optimal accuracy (Fo_MDF = 0.177 and Fo_EPS = 0.1633).
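For an explicit finite-difference conduction scheme, the mesh Fourier number is Fo = αΔt/Δx². The sketch below shows how a time step would be chosen to hit the Fo ≈ 1/6 value the study recommends; the thermal diffusivity and layer thickness in the usage line are hypothetical examples, not values from the article.

```python
def fourier_number(alpha, dt, dx):
    """Mesh Fourier number Fo = alpha*dt/dx**2 of an explicit
    finite-difference conduction scheme (alpha: thermal diffusivity)."""
    return alpha * dt / dx ** 2

def time_step_for(alpha, dx, fo_target=1.0 / 6.0):
    """Time step that yields a chosen Fourier number; the study found
    Fo of about 1/6 gave the least distorted results."""
    return fo_target * dx ** 2 / alpha

# hypothetical layer: thermal diffusivity 1e-6 m^2/s, 10 mm conditional layers
dt = time_step_for(1e-6, 0.01)
```

Dividing the enclosure into conditional layers of nearly equal Fo, as the authors recommend, amounts to adjusting Δx per material so that this expression gives the same value in every layer.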

  14. Internal dosimetry through GATE simulations of preclinical radiotherapy using a melanin-targeting ligand

    NASA Astrophysics Data System (ADS)

    Perrot, Y.; Degoul, F.; Auzeloux, P.; Bonnet, M.; Cachin, F.; Chezal, J. M.; Donnarieix, D.; Labarre, P.; Moins, N.; Papon, J.; Rbah-Vidal, L.; Vidal, A.; Miot-Noirault, E.; Maigne, L.

    2014-05-01

    The GATE Monte Carlo simulation platform based on the Geant4 toolkit is under constant improvement for dosimetric calculations. In this study, we explore its use for the dosimetry of the preclinical targeted radiotherapy of melanoma using a new specific melanin-targeting radiotracer labeled with iodine 131. Calculated absorbed fractions and S values for spheres and murine models (digital and CT-scan-based mouse phantoms) are compared between GATE and EGSnrc Monte Carlo codes considering monoenergetic electrons and the detailed energy spectrum of iodine 131. The behavior of Geant4 standard and low energy models is also tested. Following the different authors’ guidelines concerning the parameterization of electron physics models, this study demonstrates an agreement of 1.2% and 1.5% with EGSnrc, respectively, for the calculation of S values for small spheres and mouse phantoms. S values calculated with GATE are then used to compute the dose distribution in organs of interest using the activity distribution in mouse phantoms. This study gives the dosimetric data required for the translation of the new treatment to the clinic.

  15. A point kernel algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan

    2017-11-01

    Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses, separated by low dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel-based dose calculation algorithm is presented that splits the calculation into photon- and electron-mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are analytically calculated depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.

  16. A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler.

    PubMed

    Butt, Muhammad Arif; Akram, Muhammad

    2016-01-01

    We present a new intuitionistic fuzzy rule-based decision-making system based on intuitionistic fuzzy sets for a process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm inputs the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of every process in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
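The intuitionistic fuzzy inference engine itself is not reproduced in the abstract, so the sketch below substitutes a hypothetical crisp priority function for the fuzzy rules, just to show the scheduling skeleton: compute dp for every ready process, sort the queue in decreasing dp, and dispatch the head.

```python
def dynamic_priority(nice, burst):
    """Hypothetical crisp stand-in for the paper's intuitionistic fuzzy
    inference: a low nice value and a short burst time raise dp."""
    return 1.0 / (1 + nice) + 1.0 / (1 + burst)

def schedule(ready_queue):
    """Sort the ready queue in decreasing order of dp; the head process
    is dispatched to the CPU, as in the proposed scheduler."""
    return sorted(ready_queue,
                  key=lambda p: dynamic_priority(p[1], p[2]),
                  reverse=True)

# (process name, nice value, burst time)
ready = [("p1", 5, 20), ("p2", 0, 10), ("p3", 2, 5)]
order = [name for name, _, _ in schedule(ready)]
```

In the paper the priority would instead come from fuzzified inputs fed through rule firing and defuzzification; only the sort-and-dispatch step is common to both.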

  17. SU-F-BRA-10: Fricke Dosimetry: Determination of the G-Value for Ir-192 Energy Based On the NRC Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salata, C; David, M; Rosado, P

    Purpose: To use the methodology developed by the National Research Council Canada (NRC) for Fricke dosimetry to determine the G-value used at Ir-192 energies. Methods: In this study the Radiology Science Laboratory of Rio de Janeiro State University (LCR) based the G-value determination on the NRC method, using polyethylene bags. Briefly, this method consists of interpolating the G-values calculated for Co-60 and 250 kV x-rays at the average energy of Ir-192 (380 keV). As the Co-60 G-value is well described in the literature and associated with low uncertainties, it was not measured in the present study. The G-values for 150 kV (effective energy of 68 keV), 250 kV (effective energy of 132 keV) and 300 kV (effective energy of 159 keV) were calculated using the air kerma given by a calibrated ion chamber and converting it to the dose absorbed by the Fricke solution using a Monte Carlo calculated conversion factor. Instead of interpolations, as described by the NRC, we plotted the G-value points in a graph and used the line equation to determine the G-value for Ir-192 (380 keV). Results: The measured G-values were 1.436 ± 0.002 µmol/J for 150 kV, 1.472 ± 0.002 µmol/J for 250 kV, and 1.497 ± 0.003 µmol/J for 300 kV. The G-value used for Co-60 (1.25 MeV) was 1.613 µmol/J. The R-square of the fitted regression line among those G-value points was 0.991. Using the line equation, the calculated G-value for 380 keV was 1.542 µmol/J. Conclusion: The result found for the Ir-192 G-value is 3.1% lower than the NRC value, but it agrees with previous literature results using different methodologies to calculate this parameter. We will continue this experiment by measuring the G-value for Co-60 in order to compare with the NRC method and better understand the reasons for the differences found.
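The interpolation step can be sketched with an ordinary least-squares line through the four (effective energy, G-value) points quoted in the abstract. The abstract does not specify the exact fit the authors used beyond "the line equation", so this sketch need not reproduce their 1.542 µmol/J; it only illustrates the procedure.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares straight line y = a + b*x."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

# effective energies (keV) and G-values (umol/J) quoted in the abstract,
# with the literature Co-60 point taken at 1250 keV
energies = [68.0, 132.0, 159.0, 1250.0]
g_values = [1.436, 1.472, 1.497, 1.613]
a, b = linear_fit(energies, g_values)
g_ir192 = a + b * 380.0  # evaluate the line at the Ir-192 mean energy
```

Any reasonable fit must place the Ir-192 value between the 300 kV and Co-60 G-values, which is the physical content of the interpolation.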

  18. An Approach for Calculating Student-Centered Value in Education – A Link between Quality, Efficiency, and the Learning Experience in the Health Professions

    PubMed Central

    Ooi, Caryn; Reeves, Scott; Walsh, Kieran

    2016-01-01

    Health professional education is experiencing a cultural shift towards student-centered education. Although we are now challenging our traditional training methods, our methods for evaluating the impact of the training on the learner remain largely unchanged. What is not typically measured is student-centered value; whether it was ‘worth’ what the learner paid. The primary aim of this study was to apply a method of calculating student-centered value, applied to the context of a change in teaching methods within a health professional program. This study took place over the first semester of the third year of the Bachelor of Physiotherapy at Monash University, Victoria, Australia, in 2014. The entire third year cohort (n = 78) was invited to participate. A survey-based design was used to collect the appropriate data. A blended learning model was implemented; students were subsequently required to attend campus only three days per week, with the remaining two days comprising online learning. This was compared to the previous year’s format, a campus-based face-to-face approach where students attended campus five days per week, with the primary outcome being Value to student. Value to student incorporates user costs associated with transportation and equipment, the amount of time saved, the price paid, and the perceived gross benefit. Of the 78 students invited to participate, 76 completed the post-unit survey (non-participation rate 2.6%). Based on Value to student, the blended learning approach provided a $1,314.93 net benefit to students. Another significant finding was that the perceived gross benefit for the blended learning approach was $4014.84, compared to $3651.72 for the campus-based face-to-face approach, indicating that students would pay more for the blended learning approach. This paper successfully applied a novel method of calculating student-centered value. This is the first step in validating the Value to student outcome. 
Measuring economic value to the student may be used as a way of evaluating effective change in a modern health professional curriculum. This could extend to calculate total value, which would incorporate the economic implications for the educational providers. Further research is required for validation of this outcome. PMID:27632427

  19. An Approach for Calculating Student-Centered Value in Education - A Link between Quality, Efficiency, and the Learning Experience in the Health Professions.

    PubMed

    Nicklen, Peter; Rivers, George; Ooi, Caryn; Ilic, Dragan; Reeves, Scott; Walsh, Kieran; Maloney, Stephen

    2016-01-01

    Health professional education is experiencing a cultural shift towards student-centered education. Although we are now challenging our traditional training methods, our methods for evaluating the impact of the training on the learner remain largely unchanged. What is not typically measured is student-centered value; whether it was 'worth' what the learner paid. The primary aim of this study was to apply a method of calculating student-centered value, applied to the context of a change in teaching methods within a health professional program. This study took place over the first semester of the third year of the Bachelor of Physiotherapy at Monash University, Victoria, Australia, in 2014. The entire third year cohort (n = 78) was invited to participate. A survey-based design was used to collect the appropriate data. A blended learning model was implemented; students were subsequently required to attend campus only three days per week, with the remaining two days comprising online learning. This was compared to the previous year's format, a campus-based face-to-face approach where students attended campus five days per week, with the primary outcome being Value to student. Value to student incorporates user costs associated with transportation and equipment, the amount of time saved, the price paid, and the perceived gross benefit. Of the 78 students invited to participate, 76 completed the post-unit survey (non-participation rate 2.6%). Based on Value to student, the blended learning approach provided a $1,314.93 net benefit to students. Another significant finding was that the perceived gross benefit for the blended learning approach was $4014.84, compared to $3651.72 for the campus-based face-to-face approach, indicating that students would pay more for the blended learning approach. This paper successfully applied a novel method of calculating student-centered value. This is the first step in validating the Value to student outcome. 
Measuring economic value to the student may be used as a way of evaluating effective change in a modern health professional curriculum. This could extend to calculate total value, which would incorporate the economic implications for the educational providers. Further research is required for validation of this outcome.

  20. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation, realized in the Monte Carlo code MCS, is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement among Monte Carlo calculations obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. Also, the buckling values evaluated by full-core MCS calculations differed from the experimental ones, especially in the case of TRX-1, where this difference corresponded to a 0.5 percent increase of the Keff value.

  1. 10 CFR 431.304 - Uniform test method for the measurement of energy consumption of walk-in coolers and walk-in...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... machines. (b) Testing and Calculations. (1) [Reserved] (2) The R value shall be the 1/K factor multiplied by the thickness of the panel. (3) The K factor shall be based on ASTM C518 (incorporated by reference; see § 431.303). (4) For calculating the R value for freezers, the K factor of the foam at 20...
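The R-value rule in paragraph (b)(2) is a one-line calculation, sketched below with hypothetical numbers; the K factor itself must come from ASTM C518 testing as the regulation requires.

```python
def r_value(k_factor, thickness):
    """R value per the rule in paragraph (b)(2): the 1/K factor
    multiplied by the panel thickness (units must match the K factor)."""
    return thickness / k_factor

# hypothetical panel: K factor 0.25, thickness 4 (in units consistent
# with the K factor)
r = r_value(0.25, 4.0)
```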

  2. 10 CFR 431.304 - Uniform test method for the measurement of energy consumption of walk-in coolers and walk-in...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... machines. (b) Testing and Calculations. (1) [Reserved] (2) The R value shall be the 1/K factor multiplied by the thickness of the panel. (3) The K factor shall be based on ASTM C518 (incorporated by reference; see § 431.303). (4) For calculating the R value for freezers, the K factor of the foam at 20...

  3. Virial Coefficients for the Liquid Argon

    NASA Astrophysics Data System (ADS)

    Korth, Micheal; Kim, Saesun

    2014-03-01

    We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that lead to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model from Aziz. From values of the pair correlation function at various densities, we were able to find virial coefficients of liquid argon. The low-order coefficients are in good agreement with theoretical hard-sphere coefficients, but appropriate data for argon to which these results might be compared are difficult to find.

  4. Determination of noise equivalent reflectance for a multispectral scanner: A scanner sensitivity study

    NASA Technical Reports Server (NTRS)

    Gibbons, D. E.; Richard, R. R.

    1979-01-01

    The methods used to calculate the sensitivity parameter noise equivalent reflectance of a remote-sensing scanner are explored, and the results are compared with values measured over calibrated test sites. Data were acquired on four occasions covering a span of 4 years and providing various atmospheric conditions. One of the calculated values was based on assumed atmospheric conditions, whereas two others were based on atmospheric models. Results indicate that the assumed atmospheric conditions provide useful answers adequate for many purposes. A nomograph was developed to indicate sensitivity variations due to geographic location, time of day, and season.

  5. Measurement-based model of a wide-bore CT scanner for Monte Carlo dosimetric calculations with GMCTdospp software.

    PubMed

    Skrzyński, Witold

    2014-11-01

    The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half-value layer and dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for models based on the measured values of HVL. Doses calculated with GMCTdospp differed from the doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root-mean-square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM78 spectra performed equally well. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. Green Infrastructure Tool | EPA Center for Exposure ...

    EPA Pesticide Factsheets

    2016-03-07

    Units option added – SI or US units; the default is US units. Additional options added to FTABLE, such as clear FTABLE. Significant digits for FTABLE calculations changed to 5. Previously a default Cd value was used for calculations (under-drain and riser); now a user-defined value option is given. Conversion options added wherever necessary. Default values of suction head and hydraulic conductivity are changed based on the units selected in the infiltration panel. Default value of Cd for the riser orifice and under-drain textboxes changed to 0.6. Previously a default increment value of 0.1 was used for all the channel panels; now the user can specify the increment.

  7. Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant

    ERIC Educational Resources Information Center

    Beach, Darrell H.

    1969-01-01

    Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC)

  8. Strategy of investment in electricity sources--Market value of a power plant and the electricity market

    NASA Astrophysics Data System (ADS)

    Bartnik, R.; Hnydiuk-Stefan, A.; Buryn, Z.

    2017-11-01

    This paper reports the results of an investment strategy analysis for different electricity sources. A new methodology and theory for calculating the market value of a power plant and the value of the electricity market it supplies are presented. Financial gain forms the most important criterion in an investor's assessment of an investment. An investment strategy has to involve a careful analysis of each considered project so that the right decision and selection are made while the various components of the projects are considered; the latter primarily include the aspects of risk and uncertainty. The profitability of an investment in electricity sources (as well as others) is given by measures of economic effectiveness based on calculations of, e.g., the power plant market value and the value of the electricity supplied by the power plant. The values of such measures decide an investment strategy in energy sources. This paper contains exemplary calculation results for the power plant market value and the value of the electricity market it supplies.

  9. Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts

    PubMed Central

    Wang, Xianlong; Wang, Chengfei; Zhao, Hui

    2012-01-01

    Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, single-crystal X-ray diffraction determined structures were used to build the isolated molecule models for calculating the chemical shifts. The results were compared with those obtained using the calculated geometries at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ in tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The result indicates that the popular theoretical models are adequate in calculating the chemical shifts while an accurate molecular geometry is more critical. PMID:23203134
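The comparison metric used throughout this abstract, the root-mean-square difference between calculated and experimental shifts, is straightforward to compute; the shift values below are hypothetical, not data from the paper.

```python
import math

def rmsd(calculated, experimental):
    """Root-mean-square difference between calculated and experimental
    chemical shifts (ppm)."""
    return math.sqrt(
        sum((c - e) ** 2 for c, e in zip(calculated, experimental))
        / len(calculated)
    )

# hypothetical 27Al shifts (ppm) for three complexes
calc = [3.2, 62.8, 35.5]
expt = [0.0, 63.0, 34.0]
error = rmsd(calc, expt)
```

With X-ray geometries the paper reports this metric at roughly 5 ppm over its benchmark set, versus more than 10 ppm with computed geometries.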

  10. Appraisal of application possibilities of smoothed splines to designation of the average values of terrain curvatures measured after the termination of hard coal exploitation conducted at medium depth

    NASA Astrophysics Data System (ADS)

    Orwat, J.

    2018-01-01

    This paper presents the results of calculations of the average values of terrain curvatures measured after the termination of subsequent exploitation stages in coal bed 338/2, located at medium depth. The curvatures were measured on neighbouring segments of measuring line No. 1, established perpendicularly to the runways of four longwalls, No. 001, 002, 005 and 007. The average courses of the measured curvatures were designated on the basis of the average courses of the measured inclinations, which in turn were calculated from the average values of the measured subsidence. The latter were designated by mean-square approximation using smoothed splines, with reference to the theoretical courses determined by the formulas of S. Knothe and J. Bialek, using standard values of the parameters: the roof-rock subsidence factor a, the exploitation rim Aobr and the angle of the main influences range β. The standard deviations between the average and measured curvatures σC and the variability coefficients of the random scattering of curvatures MC were calculated and compared with values appearing in the literature; on this basis, the possibility of using smoothed splines to designate the average course of the observed curvatures of a mining area was appraised.
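The chain described in the abstract, from measured subsidence to segment inclinations to curvatures, can be sketched with simple finite differences (a sketch only; the paper itself uses smoothed-spline approximation, and the subsidence values and point spacings below are hypothetical):

```python
def inclinations(subsidence, spacing):
    """Inclination of each segment: difference of the subsidence of its
    endpoints divided by the segment length."""
    return [(subsidence[i + 1] - subsidence[i]) / spacing[i]
            for i in range(len(subsidence) - 1)]

def curvatures(incl, spacing):
    """Curvature at each inner point: difference of the inclinations of the
    neighbouring segments divided by the mean of their lengths."""
    return [(incl[i + 1] - incl[i]) / ((spacing[i] + spacing[i + 1]) / 2.0)
            for i in range(len(incl) - 1)]

# Hypothetical measuring-line data: subsidence [mm] at points spaced [m] apart.
w = [0.0, -10.0, -30.0, -45.0, -50.0]
d = [25.0, 25.0, 25.0, 25.0]
T = inclinations(w, d)   # mm/m
K = curvatures(T, d)     # mm/m^2
```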

  11. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered the comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and the target-based attributes can be provided as intervals in some such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding the biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problems are compared with the outcomes of other target-based models in the literature.

  12. Exposure of farm workers to electromagnetic radiation from cellular network radio base stations situated on rural agricultural land.

    PubMed

    Pascuzzi, Simone; Santoro, Francesco

    2015-01-01

    The electromagnetic field (EMF) levels generated by mobile telephone radio base stations (RBS) situated on rural-agricultural lands were assessed in order to evaluate the exposure of farm workers in the surrounding area. The expected EMF at various distances from a mobile telephone RBS was calculated using an ad hoc numerical forecast model. Subsequently, the electric fields around some RBS on agricultural lands were measured, in order to obtain a good approximation of the effective conditions at the investigated sites. The viability of this study was tested according to the Italian Regulations concerning general and occupational public exposure to time-varying EMFs. The calculated E-field values were obtained with the RBS working constantly at full power, but during the in situ measurements the actual power emitted by RBS antennas was lower than the maximum level, and the E-field values actually registered were much lower than the calculated values.

  13. The Impact of Variability of Selected Geological and Mining Parameters on the Value and Risks of Projects in the Hard Coal Mining Industry

    NASA Astrophysics Data System (ADS)

    Kopacz, Michał

    2017-09-01

    The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty about the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach, allowing the possible errors in the calculation of these measures to be determined in numerical terms, has been used. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium in the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points.
The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations result in limiting the range of variation of the geological parameters and economics results (the empirical copula reduces the NPV and IRR in probabilistic approach). However, this is due to the adjustment of the calculation under conditions similar to those prevailing in the deposit.
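The NPV and IRR used above as the measures of value and risk can be computed as follows (a minimal sketch; the cash flows are hypothetical, and the IRR is found by simple bisection rather than the paper's simulation framework):

```python
def npv(rate, cashflows):
    """Net Present Value: cash flows discounted to t = 0 at the given rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal Rate of Return: the rate at which NPV = 0, by bisection
    (assumes exactly one sign change of NPV on [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical mining project: initial outlay, then five years of net inflows.
flows = [-1000.0, 300.0, 300.0, 300.0, 300.0, 300.0]
```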

  14. [Quantitative evaluation of Gd-EOB-DTPA uptake in phantom study for liver MRI].

    PubMed

    Hayashi, Norio; Miyati, Tosiaki; Koda, Wataru; Suzuki, Masayuki; Sanada, Shigeru; Ohno, Naoki; Hamaguchi, Takashi; Matsuura, Yukihiro; Kawahara, Kazuhiro; Yamamoto, Tomoyuki; Matsui, Osamu

    2010-05-20

    Gd-EOB-DTPA is a new liver-specific MRI contrast medium. In the hepatobiliary phase, the contrast medium is trapped in normal liver tissue, so normal liver shows high intensity, tumor/liver contrast becomes high, and diagnostic ability improves. In order to indicate the degree of uptake of the contrast medium, the enhancement ratio (ER) is calculated. The ER is obtained by calculating (signal intensity (SI) after injection - SI before injection) / SI before injection. However, because there is no linearity between contrast medium concentration and SI, the ER is not correctly estimated by this method. We discuss a method of measuring the ER based on SI and T1 values using a phantom. We used a column phantom, with an internal diameter of 3 cm, that was filled with diluted Gd-EOB-DTPA solution. Measurement of the T1 value by the IR method was also performed. The ER measuring method of this technique consists of the following three components: 1) measurement of the ER based on differences in 1/T1 values using the variable flip angle (FA) method, 2) measurement of differences in SI, and 3) measurement of differences in 1/T1 values using the IR method. The ER values calculated by these three methods were compared. In measurements made using the variable FA method and the IR method, linearity was found between contrast medium concentration and ER. On the other hand, linearity was not found between contrast medium concentration and SI. For calculation of the ER using Gd-EOB-DTPA, a more correct ER is obtained by measuring the T1 value using the variable FA method.
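The two ways of expressing the enhancement ratio contrasted in the abstract, from signal intensities and from relaxation rates 1/T1, can be sketched as follows (the numerical values in the usage are hypothetical):

```python
def er_from_si(si_pre, si_post):
    """Enhancement ratio from signal intensities:
    (SI after injection - SI before injection) / SI before injection."""
    return (si_post - si_pre) / si_pre

def er_from_t1(t1_pre, t1_post):
    """Enhancement ratio from relaxation rates, R1 = 1/T1:
    (R1 after injection - R1 before injection) / R1 before injection."""
    r1_pre, r1_post = 1.0 / t1_pre, 1.0 / t1_post
    return (r1_post - r1_pre) / r1_pre
```

For example, a halving of T1 (say from 800 ms to 400 ms) doubles R1 and gives an ER of 1.0, regardless of how nonlinearly the signal intensity responds.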

  15. Online irrigation service for fruit and vegetable crops at farmers' sites

    NASA Astrophysics Data System (ADS)

    Janssen, W.

    2009-09-01

    Online irrigation service for fruit and vegetable crops at farmers' sites, by W. Janssen, German Weather Service, 63067 Offenbach. The Agrowetter irrigation advice is a product which calculates the present soil moisture, as well as the soil moisture expected over the next 5 days, for over 30 different crops. It is based on a water balance model and provides targeted recommendations for irrigation. Irrigation inputs are matched to the soil in order to avoid infiltration and, as a consequence, the undesired movement of nitrate and plant protectants into the groundwater. This interactive online system takes into account the user's individual circumstances, such as crop and soil characteristics and the precipitation and irrigation amounts at the user's site. Each user may run up to 16 different enquiries simultaneously (different crops or different emergence dates) and can calculate the individual soil moistures for his fields with an effort of at most 5 minutes per week. The sources of water are precipitation and irrigation, whereas water losses occur due to evapotranspiration and infiltration of water into the ground. The evapotranspiration is calculated by multiplying a reference evapotranspiration (maximum evapotranspiration over grass) by crop coefficients (kc values) developed by the Geisenheim Research Centre, Vegetable Crops Branch; kc values depend on the crop and the individual plant development stage. The reference evapotranspiration is calculated from a base weather station the user has chosen (out of around 500 weather stations) using the Penman method based on daily values. After choosing a crop and soil type, the user must manually enter the precipitation data measured at the site, the irrigation water inputs and the dates of a few phenological stages.
    Economic aspects can be considered by changing the soil moisture values at which recommendations for irrigation start, from optimal to merely sufficient plant supply. Previous comparative measurements carried out by the Agricultural Administration of Baden-Württemberg for potatoes, onions, vine stocks, and strawberries agreed very well with the calculations.
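The daily water-balance bookkeeping described above can be sketched as follows (a simplified sketch: the kc values, field capacity, and daily inputs are hypothetical, the real service derives ET0 from the Penman method, and surplus water above field capacity is assumed lost to infiltration):

```python
def daily_soil_moisture(sm0, field_capacity, days):
    """Track soil moisture (mm): gains from precipitation and irrigation,
    losses from crop evapotranspiration ET0 * kc; water above field
    capacity is assumed to infiltrate and is discarded."""
    sm = sm0
    history = []
    for d in days:
        etc = d["et0"] * d["kc"]               # crop evapotranspiration
        sm = sm + d["rain"] + d["irrigation"] - etc
        sm = min(sm, field_capacity)           # surplus infiltrates
        sm = max(sm, 0.0)
        history.append(sm)
    return history

# Three hypothetical days for one crop (all values in mm, kc dimensionless).
days = [
    {"rain": 0.0,  "irrigation": 0.0,  "et0": 4.0, "kc": 0.9},
    {"rain": 12.0, "irrigation": 0.0,  "et0": 3.0, "kc": 0.9},
    {"rain": 0.0,  "irrigation": 20.0, "et0": 5.0, "kc": 0.9},
]
```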

  16. Functional design specification: NASA form 1510

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The 1510 worksheet used to calculate approved facility project cost estimates is explained. Topics covered include data base considerations, program structure, relationship of the 1510 form to the 1509 form, and functions which the application must perform: WHATIF, TENENTER, TENTYPE, and data base utilities. A sample NASA form 1510 printout and a 1510 data dictionary are presented in the appendices along with the cost adjustment table, the floppy disk index, and methods for generating the calculated values (TENCALC) and for calculating cost adjustment (CONSTADJ). Storage requirements are given.

  17. PVWatts ® Calculator: India (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    The PVWatts ® Calculator for India was released by the National Renewable Energy Laboratory in 2013. The online tool estimates the electricity production, and the monetary value of that production, of grid-connected roof- or ground-mounted crystalline silicon photovoltaic systems based on a few simple inputs. This fact sheet provides a broad overview of the PVWatts ® Calculator for India.

  18. Calculating the dermal flux of chemicals with OELs based on their molecular structure: An attempt to assign the skin notation.

    PubMed

    Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir

    2010-09-01

    Our objectives included calculating the permeability coefficients and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model, developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available in which formulae for the coefficients of permeability from saturated aqueous solutions (K(p)) have been related to physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. The dermal penetration rate was estimated from the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals. Critical fluxes defined in the literature were recommended as reference values for the skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis in the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of the calculated K(p) values with data obtained earlier from other models showed that the LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone; both lipophilic and polar pathways of permeation exist across the stratum corneum.
It is feasible to predict skin notation on the basis of the LFER and other published models; from among 112 chemicals 94 (84%) should have the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified to have significant absorption and 65 (58%) the potential for dermal toxicity. We found major differences between alternative published analytical models and their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
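The Abraham-type solvation equation underlying the LFER model can be sketched as below. The coefficients here are purely illustrative placeholders, not the fitted values from the paper, and the flux is computed with the standard steady-state relation flux = Kp × saturated aqueous concentration:

```python
def log_kp(E, S, A, B, V, coeffs):
    """Abraham-type LFER: log Kp = c + e*E + s*S + a*A + b*B + v*V, where
    E, S, A, B, V are the five solute descriptors."""
    c, e, s, a, b, v = coeffs
    return c + e * E + s * S + a * A + b * B + v * V

def flux(kp_cm_per_h, c_sat_mg_per_cm3):
    """Steady-state dermal flux (mg/cm^2/h): Kp times the saturated
    aqueous concentration."""
    return kp_cm_per_h * c_sat_mg_per_cm3

# Illustrative (hypothetical) coefficient set and descriptor values:
coeffs = (-2.7, 0.5, -0.6, -0.3, -3.0, 2.0)
lk = log_kp(E=0.8, S=0.9, A=0.0, B=0.4, V=1.0, coeffs=coeffs)
kp = 10 ** lk  # permeability coefficient, cm/h
```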

  19. 39 CFR 3010.21 - Calculation of annual limitation when notices of rate adjustment are 12 or more months apart.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average... values immediately preceding the Recent Average and dividing the sum by 12 (Base Average). Finally, the full year limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1...
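The averaging scheme in the rule can be sketched as follows (`cpi` is a hypothetical monthly CPI-U series with the most recent value last):

```python
def annual_limitation(cpi):
    """Full-year limitation per the rule: the Recent Average (mean of the
    most recent 12 monthly CPI values) divided by the Base Average (mean of
    the 12 values immediately preceding those), minus 1."""
    recent_average = sum(cpi[-12:]) / 12.0
    base_average = sum(cpi[-24:-12]) / 12.0
    return recent_average / base_average - 1.0

# Hypothetical series: 12 months at 100, then 12 months at 102 -> 2 % cap.
cpi = [100.0] * 12 + [102.0] * 12
```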

  20. Low-Lying π* Resonances of Standard and Rare DNA and RNA Bases Studied by the Projected CAP/SAC-CI Method.

    PubMed

    Kanazawa, Yuki; Ehara, Masahiro; Sommerfeld, Thomas

    2016-03-10

    Low-lying π* resonance states of DNA and RNA bases have been investigated by the recently developed projected complex absorbing potential (CAP)/symmetry-adapted cluster-configuration interaction (SAC-CI) method using a smooth Voronoi potential as CAP. In spite of the challenging CAP applications to higher resonance states of molecules of this size, the present calculations reproduce resonance positions observed by electron transmission spectra (ETS) provided the anticipated deviations due to vibronic effects and limited basis sets are taken into account. Moreover, for the standard nucleobases, the calculated positions and widths qualitatively agree with those obtained in previous electron scattering calculations. For guanine, both keto and enol forms were examined, and the calculated values of the keto form agree clearly better with the experimental findings. In addition to these standard bases, three modified forms of cytosine, which serve as epigenetic or biomarkers, were investigated: formylcytosine, methylcytosine, and chlorocytosine. Last, a strong correlation between the computed positions and the observed ETS values is demonstrated, clearly suggesting that the present computational protocol should be useful for predicting the π* resonances of congeners of DNA and RNA bases.

  1. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    PubMed

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity, obtained by DNA-DNA hybridization (DDH), to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, the ANI values between two genome sequences may differ when reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values are significantly high, exceeding 1 % in some cases. To resolve this asymmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented, and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), with the former showing approximately 0.1 % higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
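The core idea, restricting the identity average to orthologous (reciprocal-best-hit) fragment pairs so that the value is symmetric, can be illustrated on toy data. This is only a sketch: real OrthoANI fragments the genomes and obtains identities with BLASTn, whereas here the best-hit tables are hypothetical precomputed dicts:

```python
def ortho_ani(hits_ab, hits_ba):
    """Average nucleotide identity over orthologous fragment pairs only.

    hits_ab maps a fragment of genome A -> (best-hit fragment of B, identity);
    hits_ba maps a fragment of genome B -> (best-hit fragment of A, identity).
    A pair counts as orthologous when it is a reciprocal best hit, and the
    two directional identities are averaged, making the result symmetric."""
    idents = []
    for a, (b, ident_ab) in hits_ab.items():
        back = hits_ba.get(b)
        if back is not None and back[0] == a:
            idents.append((ident_ab + back[1]) / 2.0)
    return sum(idents) / len(idents) if idents else 0.0

# Toy best-hit tables; a3's hit is not reciprocal, so it is ignored.
hits_ab = {"a1": ("b1", 98.0), "a2": ("b2", 96.0), "a3": ("b1", 90.0)}
hits_ba = {"b1": ("a1", 97.0), "b2": ("a2", 95.0)}
```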

  2. Implementation of Online Promethee Method for Poor Family Change Rate Calculation

    NASA Astrophysics Data System (ADS)

    Aji, Dhady Lukito; Suryono; Widodo, Catur Edi

    2018-02-01

    This research implements an online calculation of the poor-family change rate using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). The system is very useful for monitoring poverty in a region, as well as for administrative services related to the poverty rate. It consists of client computers and servers connected via the internet. Data on poor families' residences are obtained from the government. In addition, survey data are entered through the client computer in each administrative village, with 23 input criteria in accordance with those established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weight is used to determine poverty status. The PROMETHEE output can also be used to rank the poverty of the population registered on the server, based on the net flow value. The poverty change rate is calculated from the current poverty rate compared to the previous poverty rate. The results can be viewed online and in real time on the server through numbers and graphs. The test results show that the system can classify poverty status, calculate the poverty change rate, and determine the poverty value and ranking of each resident.
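The change-rate step at the end of the abstract can be sketched as follows (the family counts are hypothetical; the PROMETHEE classification that produces them is not reproduced here):

```python
def poverty_rate(poor_families, total_families):
    """Share of families classified as poor in one period."""
    return poor_families / total_families

def change_rate(previous_rate, current_rate):
    """Relative change of the poverty rate versus the previous period
    (negative means poverty decreased)."""
    return (current_rate - previous_rate) / previous_rate

prev = poverty_rate(120, 1000)   # previous period: 12.0 %
curr = poverty_rate(102, 1000)   # current period: 10.2 %
```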

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy usually is carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented, that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation to point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans.
A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.

  4. AUI&GIV: Recommendation with Asymmetric User Influence and Global Importance Value.

    PubMed

    Zhao, Zhi-Lin; Wang, Chang-Dong; Lai, Jian-Huang

    2016-01-01

    The user-based collaborative filtering (CF) algorithm is one of the most popular approaches for making recommendations. Despite its success, the traditional user-based CF algorithm suffers from one serious problem: it only measures the influence between two users based on their symmetric similarities, calculated from their consumption histories. This means that, for a pair of users, the influences on each other are the same, which may not be true. Intuitively, an expert may have an impact on a novice user, but a novice user may not affect an expert at all. Besides, each user may possess a global importance factor that affects his/her influence on the remaining users. To this end, in this paper, we propose an asymmetric user influence model to measure the directed influence between two users and adopt the PageRank algorithm to calculate the global importance value of each user. The directed influence values and the global importance values are then integrated to deduce the final influence values between two users. Finally, we use the final influence values to improve the performance of the traditional user-based CF algorithm. Extensive experiments have been conducted, the results of which have confirmed that both the asymmetric user influence model and the global importance value play key roles in improving recommendation accuracy; hence, the proposed method significantly outperforms the existing recommendation algorithms, in particular the user-based CF algorithm, on datasets of high rating density.
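The combination described, a directed (asymmetric) influence between users scaled by a PageRank-style global importance, can be sketched as follows. This is only a sketch: the toy influence graph, the directed influence weights, and the multiplicative combination are assumptions, not the paper's exact formulas:

```python
def pagerank(links, d=0.85, iters=100):
    """Plain power-iteration PageRank over a directed link dict
    (every node must appear as a key; no dangling nodes)."""
    n = len(links)
    pr = {node: 1.0 / n for node in links}
    for _ in range(iters):
        new = {node: (1.0 - d) / n for node in links}
        for node, outs in links.items():
            for target in outs:
                new[target] += d * pr[node] / len(outs)
        pr = new
    return pr

def final_influence(directed, importance):
    """Scale each directed influence u -> v by the global importance of u."""
    return {(u, v): w * importance[u] for (u, v), w in directed.items()}

# Toy graph: the novice and a casual user both "follow" the expert.
links = {"expert": ["novice"], "novice": ["expert"], "casual": ["expert"]}
pr = pagerank(links)
directed = {("expert", "novice"): 0.9, ("novice", "expert"): 0.1}
fi = final_influence(directed, pr)
```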

  5. AUI&GIV: Recommendation with Asymmetric User Influence and Global Importance Value

    PubMed Central

    Zhao, Zhi-Lin; Wang, Chang-Dong; Lai, Jian-Huang

    2016-01-01

    The user-based collaborative filtering (CF) algorithm is one of the most popular approaches for making recommendations. Despite its success, the traditional user-based CF algorithm suffers from one serious problem: it only measures the influence between two users based on their symmetric similarities, calculated from their consumption histories. This means that, for a pair of users, the influences on each other are the same, which may not be true. Intuitively, an expert may have an impact on a novice user, but a novice user may not affect an expert at all. Besides, each user may possess a global importance factor that affects his/her influence on the remaining users. To this end, in this paper, we propose an asymmetric user influence model to measure the directed influence between two users and adopt the PageRank algorithm to calculate the global importance value of each user. The directed influence values and the global importance values are then integrated to deduce the final influence values between two users. Finally, we use the final influence values to improve the performance of the traditional user-based CF algorithm. Extensive experiments have been conducted, the results of which have confirmed that both the asymmetric user influence model and the global importance value play key roles in improving recommendation accuracy; hence, the proposed method significantly outperforms the existing recommendation algorithms, in particular the user-based CF algorithm, on datasets of high rating density. PMID:26828803

  6. Method and apparatus for in-situ detection and isolation of aircraft engine faults

    NASA Technical Reports Server (NTRS)

    Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)

    2007-01-01

    A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters, extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals, calculating a magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value, extracting an average, or mean direction and a fault level mapping for each of a plurality of fault types, based on the operating regime, calculating a projection of the measurement vector onto the average direction of each of the plurality of fault types, determining a fault type based on which projection is maximum, and mapping the projection to a continuous-valued fault level using a lookup table.
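The detection-and-isolation steps enumerated in the claim can be sketched as follows (a sketch only; the residual values, per-regime noise levels, fault signature directions, and decision threshold are all hypothetical, and the fault-level lookup table is omitted):

```python
import math

def detect_and_isolate(residuals, noise_std, signatures, threshold):
    """Scale residuals by the regime's noise standard deviations, declare a
    fault if the scaled measurement vector's magnitude exceeds the decision
    threshold, then isolate the fault type whose mean direction gives the
    maximum projection of the measurement vector."""
    z = [r / s for r, s in zip(residuals, noise_std)]
    magnitude = math.sqrt(sum(v * v for v in z))
    if magnitude < threshold:
        return None, magnitude           # no fault declared
    best = max(signatures,
               key=lambda f: sum(a * b for a, b in zip(z, signatures[f])))
    return best, magnitude

# Hypothetical regime data: noise levels and unit mean directions per fault.
noise_std = [0.5, 1.0, 2.0]
signatures = {
    "sensor_bias":     [1.0, 0.0, 0.0],
    "efficiency_loss": [0.0, 0.8, 0.6],
}
```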

  7. A colored petri nets based workload evaluation model and its validation through Multi-Attribute Task Battery-II.

    PubMed

    Wang, Peng; Fang, Weining; Guo, Beiyuan

    2017-04-01

    This paper proposed a colored Petri net based workload evaluation model. A formal interpretation of workload was first introduced, based on the mapping of Petri net components to tasks. A Petri net based description of Multiple Resources theory was given by approaching it from a new angle. A new application of the VACP rating scales, named the V/A-C-P unit, and the definition of colored transitions were proposed to build a model of the task process. The calculation of workload has the following four steps: determine the tokens' initial positions and values; calculate the weights of the directed arcs on the basis of the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out based on the Multi-Attribute Task Battery-II software. Our results show that there is a strong correlation between the model values and NASA-Task Load Index scores (r=0.9513). In addition, this method can also distinguish behavior characteristics between different people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. On the radiated EMI current extraction of dc transmission line based on corona current statistical measurements

    NASA Astrophysics Data System (ADS)

    Yi, Yong; Chen, Zhengying; Wang, Liming

    2018-05-01

    Corona-originated discharge on DC transmission lines is the main cause of the radiated electromagnetic interference (EMI) field in the vicinity of the lines. A joint time-frequency analysis technique was proposed to extract the radiated EMI current (excitation current) of DC corona based on statistical corona current measurements. A reduced-scale experimental platform was set up to measure the statistical distributions of the current waveform parameters of an aluminum conductor, steel-reinforced (ACSR). Based on the measured results, the peak value, root-mean-square value and average value of the 0.5 MHz radiated EMI current with 9 kHz and 200 Hz bandwidths were calculated by the proposed technique and validated against the conventional excitation function method. Radio interference (RI) was calculated based on the radiated EMI current, and a wire-to-plate platform was built to check the validity of the RI computation results. The reasons for the certain deviation between the computations and measurements were analyzed in detail.
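The three waveform statistics extracted from the corona current records (peak, root-mean-square, and average value) can be computed as below. The sample values are hypothetical, and the 9 kHz / 200 Hz measurement-bandwidth filtering applied in the paper is omitted:

```python
import math

def waveform_stats(samples):
    """Peak (maximum absolute value), root-mean-square, and mean of a
    sampled current waveform."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    avg = sum(samples) / len(samples)
    return peak, rms, avg

peak, rms, avg = waveform_stats([0.0, 3.0, -4.0, 1.0])
```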

  9. Microscopic study of spin cut-off factors of nuclear level densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gholami, M.; Kildir, M.; Behkami, A. N.

    Level densities and spin cut-off factors have been investigated within the microscopic approach based on the BCS Hamiltonian. In particular, the spin cut-off parameters have been calculated at neutron binding energies over a large range of nuclear mass using the BCS theory. The spin cut-off parameters σ²(E) have also been obtained from the Gilbert and Cameron expression and from rigid-body calculations. The results were compared with their corresponding macroscopic values. It was found that the values of σ²(E) did not increase smoothly with A as expected based on macroscopic theory. Instead, the values of σ²(E) show structure reflecting the angular momentum of the shell model orbitals near the Fermi energy.

  10. FEV1/FVC and FEV1 for the assessment of chronic airflow obstruction in prevalence studies: do prediction equations need revision?

    PubMed

    Roche, Nicolas; Dalmay, François; Perez, Thierry; Kuntz, Claude; Vergnenègre, Alain; Neukirch, Françoise; Giordanella, Jean-Pierre; Huchon, Gérard

    2008-11-01

    Little is known about the long-term validity of the reference equations used to calculate FEV(1) and FEV(1)/FVC predicted values. This survey assessed the prevalence of chronic airflow obstruction in a population-based sample and how it is influenced by: (i) the definition of airflow obstruction; and (ii) the equations used to calculate predicted values. Subjects aged 45 or more were recruited in health prevention centers, performed spirometry and completed a standardized ECRHS-derived questionnaire. Previously diagnosed cases and risk factors were identified. Prevalence of airflow obstruction was calculated using: (i) the ATS-GOLD definition (FEV(1)/FVC<0.70); and (ii) the ERS definition (FEV(1)/FVC

  11. Spectroscopic investigation, HOMO-LUMO and NLO studies on L-histidinium maleate based on DFT approach

    NASA Astrophysics Data System (ADS)

    Dhanavel, S.; Stephen, A.; Asirvatham, P. Samuel

    2017-05-01

    The molecular structure of the title compound L-histidinium maleate (LHM) was constructed and optimized based on the density functional theory method (DFT-B3LYP) with the 6-31G(d,p) basis set. The fundamental vibrational spectral assignments were analyzed with the aid of the optimized structure of LHM. Electronic properties such as the HOMO-LUMO energies and absorption wavelengths were studied using the time-dependent DFT (TD-DFT) approach, which reveals that energy transfer occurs within the molecule. 13C NMR chemical shift values were calculated using the gauge-independent atomic orbital (GIAO) method, and the obtained values are in good agreement with the reported experimental values. Hardness, ionization potential and electrophilicity index were also calculated. The electric dipole moment (μtot) and hyperpolarizability (βtot) values of the investigated molecule were computed. The calculated value (β) was 3.7 times higher than that of urea, which confirms that the LHM molecule is a potential candidate for NLO applications.

  12. Reporting the national antimicrobial consumption in Danish pigs: influence of assigned daily dosage values and population measurement.

    PubMed

    Dupont, Nana; Fertner, Mette; Kristensen, Charlotte Sonne; Toft, Nils; Stege, Helle

    2016-05-03

    Transparent calculation methods are crucial when investigating trends in antimicrobial consumption over time and between populations. Until 2011, one single standardized method was applied when quantifying the Danish pig antimicrobial consumption with the unit "Animal Daily Dose" (ADD). However, two new methods for assigning values for ADDs have recently emerged, one implemented by DANMAP, responsible for publishing annual reports on antimicrobial consumption, and one by the Danish Veterinary and Food Administration (DVFA), responsible for the Yellow Card initiative. In addition to the new ADD assignment methods, Denmark has also experienced a shift in the production pattern towards a larger export of live pigs. The aims of this paper were (1) to describe previous and current ADD assignment methods used by the major Danish institutions and (2) to illustrate how the ADD assignment method and the choice of population and population measurement affect the calculated national antimicrobial consumption in pigs (2007-2013). The old VetStat ADD-values were based on SPCs, in contrast to the new ADD-values, which were based on active compound, concentration and administration route. The new ADD-values stated by DANMAP and by the DVFA were identical for only 48 % of antimicrobial products approved for use in pigs. From 2007 to 2013, the total number of ADDs per year increased by 9 % when using the new DVFA ADD-values, but decreased by 2 and 7 % when using the new DANMAP ADD-values or the old VetStat ADD-values, respectively. From 2007 to 2013, the production of pigs increased from 26.1 million pigs per year with 18 % exported live to 28.7 million with 34 % exported live. In the same time span, the annual pig antimicrobial consumption increased by 22.2 % when calculated using the new DVFA ADD-values and pigs slaughtered per year as the population measurement (13.0 ADDs/pig/year to 15.9 ADDs/pig/year). However, when based on the old VetStat ADD-values and pigs produced per year (including live export), a 10.9 % decrease was seen (10.6 ADDs/pig/year to 9.4 ADDs/pig/year). The findings of this paper clearly highlight that the calculated national antimicrobial consumption is highly affected by the chosen population measurement and the applied ADD-values.
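    The opposing trends above follow directly from the standard percent-change formula applied to the per-pig figures. A quick verification sketch (not part of the paper; the small residuals versus the published 22.2 % and 10.9 % come from rounding of the quoted ADDs/pig/year values):

```python
# Percent change in annual antimicrobial consumption per pig:
# (end - start) / start * 100, using the rounded ADDs/pig/year figures.
def pct_change(start, end):
    return (end - start) / start * 100.0

print(round(pct_change(13.0, 15.9), 1))  # new DVFA ADD-values: ~22.3 (reported 22.2 %)
print(round(pct_change(10.6, 9.4), 1))   # old VetStat ADD-values: ~-11.3 (reported -10.9 %)
```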

  13. Space shuttle engineering and operations support. Isolation between the S-band quad antenna and the S-band payload antenna. Engineering systems analysis

    NASA Technical Reports Server (NTRS)

    Lindsey, J. F.

    1976-01-01

    The isolation between the upper S-band quad antenna and the S-band payload antenna on the shuttle orbiter is calculated using a combination of plane-surface and curved-surface theories along with worst-case values. A minimum isolation of 60 dB is predicted based on recent antenna pattern data, antenna locations on the orbiter, curvature effects, dielectric covering effects and edge effects of the payload bay. The calculated value of 60 dB is significantly greater than the baseline value of 40 dB. Use of the new value will result in the design of smaller, lighter-weight and less expensive filters for the S-band transponder and the S-band payload interrogator.

  14. Modeling Active Contraction and Relaxation of Left Ventricle Using Different Zero-load Diastole and Systole Geometries for Better Material Parameter Estimation and Stress/Strain Calculations

    PubMed Central

    Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin

    2018-01-01

    Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young’s moduli (YM) were calculated for easy comparison. For the diastole phase, the begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For the systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa, p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). The 2G models overcome the limitations of the 1G models and may provide better material parameter estimation and stress/strain calculations. PMID:29399004
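    The percent differences quoted in this abstract follow the usual (new − reference)/reference convention; a short verification sketch (not part of the paper) using the reported kPa values:

```python
# Verifying the reported percent differences from the stated kPa values:
# percent difference = (value - reference) / reference * 100.
def pct_higher(value, ref):
    return (value - ref) / ref * 100.0

print(round(pct_higher(645.39, 76.97), 1))    # BF vs ED YMf: ~738.5 (reported 738%)
print(round(pct_higher(1025.10, 102.11), 1))  # ES vs BE YMf: ~903.9 (reported 903%)
print(round(pct_higher(1025.10, 645.39), 1))  # ES vs BF YMf: ~58.8  (reported 59%)
```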

  15. Modeling Active Contraction and Relaxation of Left Ventricle Using Different Zero-load Diastole and Systole Geometries for Better Material Parameter Estimation and Stress/Strain Calculations.

    PubMed

    Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin

    2016-01-01

    Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young's moduli (YM) were calculated for easy comparison. For the diastole phase, the begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For the systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa, p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). The 2G models overcome the limitations of the 1G models and may provide better material parameter estimation and stress/strain calculations.

  16. Pcetk: A pDynamo-based Toolkit for Protonation State Calculations in Proteins.

    PubMed

    Feliks, Mikolaj; Field, Martin J

    2015-10-26

    Pcetk (a pDynamo-based continuum electrostatic toolkit) is an open-source, object-oriented toolkit for the calculation of proton binding energetics in proteins. The toolkit is a module of the pDynamo software library, combining the versatility of the Python scripting language and the efficiency of the compiled languages C and Cython. In the toolkit, we have connected pDynamo to the external Poisson-Boltzmann solver, extended-MEAD. Our goal was to provide a modern and extensible environment for the calculation of protonation states, electrostatic energies, titration curves, and other electrostatics-dependent properties of proteins. Pcetk is freely available under the CeCILL license, which is compatible with the GNU General Public License. The toolkit can be found on the Web at the address http://github.com/mfx9/pcetk. The calculation of protonation states in proteins requires knowledge of the pKa values of protonatable groups in aqueous solution. However, for some groups, such as protonatable ligands bound to a protein, the pKa(aq) values are often difficult to obtain from experiment. As a complement to Pcetk, we revisit an earlier computational method for the estimation of pKa(aq) values that has an accuracy of ±0.5 pKa units or better. Finally, we verify the Pcetk module and the method for estimating pKa(aq) values with different model cases.

  17. Effect of Different Gums on Rheological Properties of Slurry

    NASA Astrophysics Data System (ADS)

    Weikey, Yogita; Sinha, S. L.; Dewangan, S. K.

    2018-02-01

    This paper presents the effect of different natural gums on water-bentonite slurry, which is used as the base fluid in water-based drilling fluids. The gums used are Babul gum (Acacia nilotica), Dhawda gum (Anogeissus latifolia), Katira gum (Cochlospermum religiosum) and Semal gum (Bombax ceiba). For the present investigation, samples were prepared by varying the concentration of the gums. Shear stress was plotted against shear rate, and the flow behaviour of the fluids was explained on this basis. The values of k and n were calculated using the power law. R² values were also calculated to support the gum selection.
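    The power-law fit of k and n described above (the Ostwald-de Waele model, τ = k·γ̇ⁿ) is typically done as a linear regression in log-log space. A minimal sketch, using synthetic shear-rate data and hypothetical k and n values rather than the paper's measurements:

```python
# Sketch: fitting power-law (Ostwald-de Waele) parameters k and n to
# shear-stress/shear-rate data:  tau = k * gamma_dot**n
# =>  log(tau) = log(k) + n * log(gamma_dot)   (linear in log-log space)
import numpy as np

gamma_dot = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])  # shear rate, 1/s (illustrative)
k_true, n_true = 0.8, 0.55                                # hypothetical indices
tau = k_true * gamma_dot**n_true                          # shear stress, Pa

# Degree-1 polyfit in log-log space returns [slope, intercept] = [n, log(k)].
n_fit, log_k = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
k_fit = np.exp(log_k)

# R^2 of the log-log fit, used to judge how well a gum follows the power law.
pred = log_k + n_fit * np.log(gamma_dot)
ss_res = np.sum((np.log(tau) - pred) ** 2)
ss_tot = np.sum((np.log(tau) - np.log(tau).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

    With real rheometer data the points scatter around the line, and a low R² signals that the gum's slurry deviates from power-law behaviour.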

  18. Comparison of isothermal and cyclic oxidation behavior of twenty-five commercial sheet alloys at 1150 C

    NASA Technical Reports Server (NTRS)

    Barrett, C. A.; Lowell, C. E.

    1974-01-01

    The cyclic and isothermal oxidation resistance of 25 high-temperature Ni-, Co-, and Fe-base sheet alloys after 100 hours in air at 1150 C was compared. The alloys were evaluated in terms of their oxidation, scaling, and vaporization rates and their tendency for scale spallation. These values were used to develop an oxidation rating parameter based on effective thickness change, as calculated from a mass balance. The calculated thicknesses generally agreed with the measured values, including grain boundary oxidation, to within a factor of 3. Oxidation behavior was related to composition, particularly Cr and Al content.

  19. A modeling approach to account for toxicokinetic interactions in the calculation of biological hazard index for chemical mixtures.

    PubMed

    Haddad, S; Tardif, R; Viau, C; Krishnan, K

    1999-09-05

    The biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation is, at present, advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals: toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant increase in the BHI (i.e. BHI > 1) can be determined by iterative simulation.
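    The conventional hazard-index form that the BHI mirrors can be sketched as a sum of biomarker concentrations over their Biological Exposure Index (BEI) values; the concentrations and BEIs below are illustrative placeholders, not values from this study:

```python
# Sketch of the additivity-based hazard-index form: the sum over mixture
# components of biomarker concentration divided by its Biological Exposure
# Index (BEI). A value above 1 flags overexposure to the mixture.
def bhi(biomarker_conc, bei):
    """Biological hazard index: sum(C_i / BEI_i)."""
    return sum(c / b for c, b in zip(biomarker_conc, bei))

# Hypothetical simulated venous-blood concentrations and BEI values (mg/L).
conc = {"toluene": 0.4, "m-xylene": 0.9, "ethylbenzene": 0.5}
bei = {"toluene": 1.0, "m-xylene": 1.5, "ethylbenzene": 2.0}

index = bhi(conc.values(), bei.values())
print(round(index, 2))  # 0.4/1.0 + 0.9/1.5 + 0.5/2.0 = 1.25
```

    In the interactions-based variant described above, the concentrations fed to this sum come from a PBTK model that accounts for competitive inhibition, rather than from single-chemical assumptions.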

  20. The effect of nanoparticle surfactant polarization on trapping depth of vegetable insulating oil-based nanofluids

    NASA Astrophysics Data System (ADS)

    Li, Jian; Du, Bin; Wang, Feipeng; Yao, Wei; Yao, Shuhan

    2016-02-01

    Nanoparticles can generate charge carrier trapping and reduce the velocity of streamer development in insulating oils ultimately leading to an enhancement of the breakdown voltage of insulating oils. Vegetable insulating oil-based nanofluids with three sizes of monodispersed Fe3O4 nanoparticles were prepared and their trapping depths were measured by thermally stimulated method (TSC). It is found that the nanoparticle surfactant polarization can significantly influence the trapping depth of vegetable insulating oil-based nanofluids. A nanoparticle polarization model considering surfactant polarization was proposed to calculate the trapping depth of the nanofluids at different nanoparticle sizes and surfactant thicknesses. The results show the calculated values of the model are in a fairly good agreement with the experimental values.

  1. Zinc finger protein binding to DNA: an energy perspective using molecular dynamics simulation and free energy calculations on mutants of both zinc finger domains and their specific DNA bases.

    PubMed

    Hamed, Mazen Y; Arya, Gaurav

    2016-05-01

    Energy calculations based on MM-GBSA were employed to study various zinc finger protein (ZF) motifs binding to DNA. Mutants of both the ZF domains and of the DNA bases bound to their specific amino acids were studied. The calculated energies gave evidence for a relationship between binding energy and the affinity of ZF motifs for their sites on DNA. ΔG values were -15.82(12), -3.66(12), and -12.14(11.6) kcal/mol for finger one, finger two, and finger three, respectively. Mutations in the DNA bases reduced the magnitude of the negative binding energies (the maximum value of ΔΔG was 42 kcal/mol for F1 when GCG mutated to GGG, and ΔΔG = 22 kcal/mol for F2); the loss in total binding energy originated in the loss of electrostatic energy upon mutation (r = .98). Mutations of key amino acids in the ZF motif at positions -1, 2, 3, and 6 showed reduced binding energies to DNA; the correlation coefficient between total free energy and electrostatic energy was .99, and with the van der Waals energy it was .93. The results agree with the experimentally determined selectivity, which showed that arginine at position -1 is specific to G, while aspartic acid (D) at position 2 plays a complicated role in binding. There is a correlation between the MD-calculated free energies of binding and those obtained experimentally in other reports for prepared ZF motifs bound to triplet bases; our results may help in the design of ZF motifs based on recognition codes built on binding energies and their contributing components.

  2. An analytical method based on multipole moment expansion to calculate the flux distribution in Gammacell-220

    NASA Astrophysics Data System (ADS)

    Rezaeian, P.; Ataenia, V.; Shafiei, S.

    2017-12-01

    In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux of photons inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined by MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method yields reasonable results for all source distributions, even those without any symmetry, which makes it a powerful tool for source load planning.

  3. [Normative guidelines for allocating human resources in child and adolescent psychiatry using average values under convergence conditions instead of price determination - analysis of the data of university hospitals in Germany concerning the costs of calculating day and minute values according to Psych-PV and PEPP-System].

    PubMed

    Barufka, Steffi; Heller, Michael; Prayon, Valeria; Fegert, Jörg M

    2015-11-01

    Despite substantial opposition in the practical field, the so-called PEPP-System was introduced in child and adolescent psychiatry as a new calculation model, based on an amendment to the Hospital Financing Act (KHG). The 2-year moratorium, combined with the rescheduling of the repeal of the psychiatry personnel regulation (Psych-PV) and a convergence phase, provided the German Federal Ministry of Health with additional time to enter a structured dialogue with professional associations. Especially the perspective concerning the regulatory framework is presently unclear. In light of this debate, this article provides calculations to illustrate the transformation of the previous personnel regulation into the PEPP-System by means of the §21 KHEntgG data stemming from the 22 university hospitals of child and adolescent psychiatry and psychotherapy in Germany. In 2013 there was a total of 7,712 cases and 263,694 calculation days. In order to identify a necessary basic reimbursement value that would guarantee a constant quality of patient care, the authors utilize outcomes, cost structures, calculation days, and minute values for individual professional groups according to both systems (Psych-PV and PEPP), based on data from 2013 and the InEK's analysis of the calculation datasets. The authors propose a normative agreement on the basic reimbursement value between 270 and 285 EUR. This takes into account the concentration phenomenon and the expansion of services that has occurred since the introduction of the Psych-PV system. Such a normative agreement on structural quality could provide a verifiable framework for the allocation of human resources corresponding to the previous regulations of the Psych-PV.

  4. Nonmarket economic user values of the Florida Keys/Key West

    Treesearch

    Vernon R. Leeworthy; J. Michael Bowker

    1997-01-01

    This report provides estimates of the nonmarket economic user values for recreating visitors to the Florida Keys/Key West that participated in natural resource-based activities. Results from estimated travel cost models are presented, including visitor’s responses to prices and estimated per person-trip user values. Annual user values are also calculated and presented...

  5. A genetic analysis of post-weaning feedlot performance and profitability in Bonsmara cattle.

    PubMed

    van der Westhuizen, R R; van der Westhuizen, J; Schoeman, S J

    2009-02-25

    The aim of this study was to identify factors influencing profitability in a feedlot environment and to estimate genetic parameters for and between a feedlot profit function and productive traits measured in growth tests. The heritability estimate of 0.36 for feedlot profitability shows that this trait is genetically inherited and that it can be selected for. The genetic correlations between feedlot profitability and production and efficiency traits varied from negligible to high. The genetic correlation estimate of -0.92 between feed conversion ratio and feedlot profitability is largely due to the part-whole relationship between these two traits. Consequently, a multiple regression equation was developed to estimate a feed intake value for all performance-tested Bonsmara bulls, which were group fed and whose feed intakes were unknown. These predicted feed intake values enabled the calculation of a post-weaning growth or feedlot profitability value for all tested bulls, even where individual feed intakes were unknown. Subsequently, a feedlot profitability value for each bull was calculated in a favorable economic environment, an average economic environment and an unfavorable economic environment. The high Pearson and Spearman correlations between the estimated breeding values based on the average economic environment and those from the other two environments suggested that the average economic environment could be used to calculate estimated breeding values for feedlot profitability. It is therefore not necessary to change the carcass, weaned-calf or feed price on a regular basis to allow for possible re-rankings based on estimated breeding values.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jentschura, Ulrich D.; National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8401; Mohr, Peter J.

    We describe the calculation of hydrogenic (one-loop) Bethe logarithms for all states with principal quantum numbers n ≤ 200. While, in principle, the calculation of the Bethe logarithm is a rather easy computational problem involving only the nonrelativistic (Schroedinger) theory of the hydrogen atom, certain calculational difficulties affect highly excited states, and in particular states for which the principal quantum number is much larger than the orbital angular momentum quantum number. Two evaluation methods are contrasted. One of these is based on the calculation of the principal value of a specific integral over a virtual photon energy. The other method relies directly on the spectral representation of the Schroedinger-Coulomb propagator. Selected numerical results are presented. The full set of values is available at arXiv.org/quant-ph/0504002.

  7. Financial methods for waterflooding injectate design

    DOEpatents

    Heneman, Helmuth J.; Brady, Patrick V.

    2017-08-08

    A method of selecting an injectate for recovering liquid hydrocarbons from a reservoir includes designing a plurality of injectates, calculating a net present value of each injectate, and selecting a candidate injectate based on the net present value. For example, the candidate injectate may be selected to maximize the net present value of a waterflooding operation.

  8. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  9. Molecular structure, spectroscopic studies and first-order molecular hyperpolarizabilities of ferulic acid by density functional study

    NASA Astrophysics Data System (ADS)

    Sebastian, S.; Sundaraganesan, N.; Manoharan, S.

    2009-10-01

    Quantum chemical calculations of the energies, geometrical structure and vibrational wavenumbers of ferulic acid (FA) (4-hydroxy-3-methoxycinnamic acid) were carried out using the density functional (DFT/B3LYP/BLYP) method with 6-31G(d,p) as the basis set. The optimized geometrical parameters obtained by the DFT calculations are in good agreement with single-crystal XRD data. The vibrational spectral data obtained from solid-phase FT-IR and FT-Raman spectra are assigned based on the results of the theoretical calculations. The observed spectra are found to be in good agreement with the calculated values. The electric dipole moment (μ) and the first hyperpolarizability (β) values of the investigated molecule have been computed using ab initio quantum mechanical calculations. The calculation results also show that the FA molecule might have microscopic nonlinear optical (NLO) behavior with non-zero values. A detailed interpretation of the infrared and Raman spectra of FA is also reported. The energies and oscillator strengths calculated by time-dependent density functional theory (TD-DFT) complement the experimental findings. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The theoretical FT-IR and FT-Raman spectra for the title molecule have also been constructed.

  10. Intensity of emission lines of the quiescent solar corona: comparison between calculated and observed values

    NASA Astrophysics Data System (ADS)

    Krissinel, Boris

    2018-03-01

    The paper reports the results of calculations of the center-to-limb intensity of optically thin line emission in the EUV and FUV wavelength ranges. The calculations employ a multicomponent model of the quiescent solar corona. The model includes a collection of loops of various sizes, spicules, and free (inter-loop) matter. Theoretical intensity values are found from the probabilities of encountering parts of loops in the line of sight, weighted by the probability of absence of the other coronal components. The model uses 12 loops with sizes from 3200 to 210000 km and different values of the rarefaction index and of the pressure at the loop base and apex. The temperature at the loop apices is 1 400 000 K. The calculations utilize the CHIANTI database. The comparison between theoretical and observed emission intensity values for coronal and transition-region lines obtained by the SUMER, CDS, and EIS telescopes shows quite satisfactory agreement, particularly at the solar disk center. For the data acquired above the limb, analysis attributes the larger discrepancies to errors in the EIS measurements.

  11. Density functional calculations of the Mössbauer parameters in hexagonal ferrite SrFe12O19

    NASA Astrophysics Data System (ADS)

    Ikeno, Hidekazu

    2018-03-01

    Mössbauer parameters in a magnetoplumbite-type hexagonal ferrite, SrFe12O19, are computed using the all-electron band structure calculation based on the density functional theory. The theoretical isomer shift and quadrupole splitting are consistent with experimentally obtained values. The absolute values of hyperfine splitting parameters are found to be underestimated, but the relative scale can be reproduced. The present results validate the site-dependence of Mössbauer parameters obtained by analyzing experimental spectra of hexagonal ferrites. The results also show the usefulness of theoretical calculations for increasing the reliability of interpretation of the Mössbauer spectra.

  12. Group Additivity Determination for Oxygenates, Oxonium Ions, and Oxygen-Containing Carbenium Ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellon, Lauren D.; Sung, Chun-Yi; Robichaud, David J.

    Bio-oil produced from biomass fast pyrolysis often requires catalytic upgrading over zeolite catalysts to remove oxygen and acidic species. The elementary reactions in the mechanism for this process involve carbenium and oxonium ions. In order to develop a detailed kinetic model for the catalytic upgrading of biomass, rate constants are required for these elementary reactions. The parameters in the Arrhenius equation can be related to thermodynamic properties through structure-reactivity relationships, such as the Evans-Polanyi relationship. This relationship requires the enthalpy of formation of each species, which can be reasonably estimated using group additivity. However, the literature previously lacked group additivity values for oxygenates, oxonium ions, and oxygen-containing carbenium ions. In this work, 71 group additivity values for these types of groups were regressed, 65 of which had not been reported previously and six of which were newly estimated based on regression in the context of the 65 new groups. Heats of formation based on atomization enthalpy calculations for a set of reference molecules, and isodesmic reactions for a small set of larger species for which experimental data were available, were used to demonstrate the accuracy of the Gaussian-4 quantum mechanical method in estimating enthalpies of formation for species involving the moieties of interest. Isodesmic reactions for a total of 195 species were constructed from the reference molecules to calculate enthalpies of formation that were used to regress the group additivity values. The results showed an average deviation of 1.95 kcal/mol between the values calculated from Gaussian-4 and isodesmic reactions and those calculated from the newly regressed group additivity values. Importantly, the new groups enhance the database of group additivity values, especially those involving oxonium ions.
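    Group additivity as used above is, in essence, a sum of per-group enthalpy contributions. A minimal sketch; the group names and numeric values below are hypothetical placeholders, not the regressed values from this work:

```python
# Sketch of a Benson-style group-additivity estimate: the enthalpy of
# formation is the sum of contributions from each structural group.
# Group values below are made-up placeholders, NOT the regressed values.
GROUP_VALUES = {  # kcal/mol (hypothetical)
    "C-(C)(H)3": -10.2,
    "C-(C)2(H)2": -4.9,
    "C-(C)(O)(H)2": -8.1,
    "O-(C)(H)": -37.9,
}

def enthalpy_of_formation(group_counts):
    """Estimate dHf(298 K) as sum(count_i * group_value_i)."""
    return sum(n * GROUP_VALUES[g] for g, n in group_counts.items())

# Example: a simple alcohol decomposed into groups (illustrative counts).
propanol_groups = {"C-(C)(H)3": 1, "C-(C)2(H)2": 1, "C-(C)(O)(H)2": 1, "O-(C)(H)": 1}
print(round(enthalpy_of_formation(propanol_groups), 1))  # -61.1
```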

  13. Comparison of different methods of inter-eye asymmetry of rim area and disc area analysis

    PubMed Central

    Fansi, A A K; Boisjoly, H; Chagnon, M; Harasymowycz, P J

    2011-01-01

    Purpose To describe different methods of analysing the inter-eye rim area (RA) to disc area (DA) asymmetry ratio (RADAAR). Methods This was an observational, descriptive, and cross-sectional study. Both eyes of all participants underwent confocal scanning laser ophthalmoscopy (Heidelberg retina tomograph (HRT 3)), frequency-doubling technology perimetry (FDT), and complete ophthalmological examination. Based on the ophthalmological clinical examination and FDT results of the worse eye, subjects were classified as normal, possible glaucoma, probable glaucoma, or definitive glaucoma. RADAAR values were calculated from stereometric HRT 3 values using different mathematical formulae. RADAAR-1 was calculated as a relative difference of rim and disc areas between the eyes. RADAAR-2 was calculated by subtracting the rim to DA ratio of the smaller disc from the rim to DA ratio of the larger disc. RADAAR-3 was calculated by dividing the previous two values. Statistical analyses included ANOVA and Student's t-tests. Results Data from 334 participants were analysed, 78 of whom were classified as definitive glaucoma. RADAAR-1 values were significantly different between the four diagnostic groups (F=5.82; P<0.001). The 1st and 99th percentile limits of normality for RADAAR-1, RADAAR-2, and RADAAR-3 in the normal group were, respectively, −10.64 and 8.4; −0.32 and 0.22; and 0.58 and 1.32. Conclusions RADAAR-1 seems to best distinguish between the diagnostic groups. Knowledge of the RADAAR distribution in various diagnostic groups may aid in the clinical diagnosis of asymmetric glaucomatous damage. PMID:21921945
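
    A minimal Python sketch of RADAAR-2 and RADAAR-3 as described above; RADAAR-1 is omitted because the abstract does not spell out its normalization. Interpreting "dividing the previous two values" as the larger-disc ratio divided by the smaller-disc ratio is one plausible reading (consistent with the reported normal limits of 0.58 to 1.32 around 1), and the rim/disc areas used here are hypothetical, not study data:

```python
def rim_to_disc_ratio(rim_area, disc_area):
    """Rim area (RA) divided by disc area (DA) for one eye."""
    return rim_area / disc_area

def radaar_2(rim_small, disc_small, rim_large, disc_large):
    """Ratio of the larger disc minus ratio of the smaller disc."""
    return (rim_to_disc_ratio(rim_large, disc_large)
            - rim_to_disc_ratio(rim_small, disc_small))

def radaar_3(rim_small, disc_small, rim_large, disc_large):
    """Ratio of the larger disc divided by ratio of the smaller disc."""
    return (rim_to_disc_ratio(rim_large, disc_large)
            / rim_to_disc_ratio(rim_small, disc_small))

# Hypothetical example: smaller disc 1.8 mm^2 with 1.3 mm^2 rim,
# larger disc 2.0 mm^2 with 1.4 mm^2 rim.
example_r2 = radaar_2(1.3, 1.8, 1.4, 2.0)
example_r3 = radaar_3(1.3, 1.8, 1.4, 2.0)
```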

  14. Estimation of PV energy production based on satellite data

    NASA Astrophysics Data System (ADS)

    Mazurek, G.

    2015-09-01

    Photovoltaic (PV) technology is an attractive source of power for systems without a connection to the power grid. Because of seasonal variations in solar radiation, the design of such a power system requires careful analysis to provide the required reliability. In this paper we present the results of three-year measurements of an experimental PV system located in Poland and based on a polycrystalline silicon module. Irradiation values calculated from ground measurements have been compared with data from solar radiation databases that derive their values from satellite observations. Good agreement between the two data sources was found, especially during summer. When satellite data from the same time period are available, yearly and monthly PV energy production can be calculated with 2% and 5% accuracy, respectively. However, monthly production during winter seems to be overestimated, especially in January. The results of this work may be helpful in forecasting the performance of similar PV systems in Central Europe and allow more precise forecasts of PV system performance than those based only on tables of long-term averaged values.

  15. Solar UV radiation exposure of seamen - Measurements, calibration and model calculations of erythemal irradiance along ship routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feister, Uwe; Meyer, Gabriele; Kirst, Ulrich

    2013-05-10

    Seamen working on vessels that sail along tropical and subtropical routes are at risk of receiving high doses of solar erythemal radiation. Due to small solar zenith angles and low ozone values, the UV index and erythemal dose are much higher than at mid- and high latitudes. UV index values over tropical and subtropical oceans can exceed UVI = 20, which is more than double typical mid-latitude UV index values. The daily erythemal dose can exceed 30 times typical mid-latitude winter values. Measurements of the erythemal exposure of different body parts of seamen have been performed along 4 routes of merchant vessels. The database has been extended by two years of continuous solar irradiance measurements taken on the mast top of RV METEOR. Radiative transfer model calculations for clear sky along the ship routes have been performed that use satellite-based input for ozone and aerosols to provide maximum erythemal irradiance and dose. The whole database is intended to be used to derive the individual erythemal exposure of seamen during work time.

  16. Osmotic potential calculations of inorganic and organic aqueous solutions over wide solute concentration levels and temperatures.

    PubMed

    Cochrane, T T; Cochrane, T A

    2016-01-01

    To demonstrate that the authors' new "aqueous solution vs pure water" equation to calculate osmotic potential may be used to calculate the osmotic potentials of inorganic and organic aqueous solutions over wide ranges of solute concentrations and temperatures. Currently, the osmotic potentials of solutions used for medical purposes are calculated from equations based on the thermodynamics of the gas laws, which are only accurate at low temperatures and solute concentrations. Some solutions used in medicine may need their osmotic potentials calculated more accurately to take into account solute concentrations and temperatures. The authors experimented with their new equation for calculating the osmotic potentials of inorganic and organic aqueous solutions up to and beyond body temperatures by adjusting three of its factors: (a) the volume property of pure water, (b) the number of "free" water molecules per unit volume of solution, "Nf," and (c) the "t" factor expressing the cooperative structural relaxation time of pure water at given temperatures. Adequate information on the volume property of pure water at different temperatures is available in the literature. However, because little information was available on the relative densities of inorganic and organic solutions at the varying temperatures needed to calculate Nf, provisional equations were formulated to approximate these values. Those values, together with tentative t values for different temperatures chosen from values calculated by different workers, were substituted into the authors' equation to demonstrate how osmotic potentials could be estimated at temperatures up to and beyond body temperatures. The provisional equations formulated to calculate Nf, the number of free water molecules per unit volume of inorganic and organic solute solutions, respectively, over wide concentration ranges compared well with calculations of Nf using recorded relative density data at 20 °C. 
They were subsequently used to estimate Nf values at temperatures up to and in excess of body temperatures. Those values, together with t values at temperatures up to and in excess of body temperatures recorded in the literature, were substituted into the authors' equation for the provisional calculation of osmotic potentials. The calculations indicated that solution temperatures and solute concentrations have a marked effect on osmotic potentials. Following work to measure the relative densities of aqueous solutions for the calculation of Nf values and the determination of definitive t values up to and beyond body temperatures, the authors' equation would enable accurate estimation of the osmotic potentials of inorganic and organic aqueous solutions over wide concentration ranges and over the temperature range. The study illustrates that not only solute concentrations but also temperatures have a marked effect on osmotic potentials, an observation of medical and biological significance.

  17. Online plasma calculator

    NASA Astrophysics Data System (ADS)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins. FastCGI also speeds up calculations over PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift toward SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as by graduate students and researchers who need a quick reference calculation.
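
    APOLLO's internal code is not shown in this record, but the kinds of frequencies and lengths it reports are textbook definitions. A sketch of two of them in SI units (with temperature in electron-volts, matching the convention above); the constants are CODATA values and the formulas are the standard definitions, not APOLLO's implementation:

```python
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
M_ELECTRON = 9.1093837015e-31   # electron mass, kg

def electron_plasma_frequency(n_e_m3):
    """omega_pe = sqrt(n_e * e^2 / (eps0 * m_e)), in rad/s."""
    return math.sqrt(n_e_m3 * E_CHARGE**2 / (EPS0 * M_ELECTRON))

def debye_length(n_e_m3, t_e_ev):
    """lambda_D = sqrt(eps0 * kB*Te / (n_e * e^2)); with Te in eV, kB*Te = Te*e.
    Result in metres."""
    return math.sqrt(EPS0 * t_e_ev * E_CHARGE / (n_e_m3 * E_CHARGE**2))

# Example: n_e = 1e18 m^-3, Te = 1 eV.
w_pe = electron_plasma_frequency(1e18)
lam_d = debye_length(1e18, 1.0)
```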

  18. 21 CFR 868.1880 - Pulmonary-function data calculator.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Pulmonary-function data calculator. 868.1880 Section 868.1880 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES...-function values based on actual physical data obtained during pulmonary-function testing. (b...

  19. 21 CFR 868.1880 - Pulmonary-function data calculator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Pulmonary-function data calculator. 868.1880 Section 868.1880 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES...-function values based on actual physical data obtained during pulmonary-function testing. (b...

  20. Contribution of the palate to denture base support: an in vivo study.

    PubMed

    Ando, Takanori; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya

    2014-01-01

    The aim of this study was to examine the contribution of the palate to denture base support. Four subjects with tooth- or implant-supported maxillary overdentures were enrolled. Recordings (strain values converted to load values) were performed using miniature strain gauges and force transducers for the following conditions: metal framework only (A), denture base with full palatal coverage (B), and denture base without palatal coverage (C). The palatal-supporting ratio (PSR) was calculated using the equation PSR = (B - C) / A. The PSR values were less than 10% in all subjects, suggesting that the palate plays a minimal role in denture base support.
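
    The PSR equation above is simple enough to compute directly. A sketch using hypothetical load values (arbitrary units), not the study's measurements:

```python
def palatal_supporting_ratio(a_framework_only, b_full_palate, c_no_palate):
    """PSR = (B - C) / A, where A, B, C are the loads recorded under
    conditions (A), (B), and (C) described in the study."""
    return (b_full_palate - c_no_palate) / a_framework_only

# Hypothetical loads: A = 100, B = 55, C = 48  ->  PSR = 0.07 (7%),
# i.e. below the 10% level reported for all subjects.
psr = palatal_supporting_ratio(100.0, 55.0, 48.0)
```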

  1. Measurements of Band Intensities, Herman-Wallis Parameters, and Self-Broadening Line-Widths of the 30011 - 00001 and 30014 - 00001 Bands of CO2 at 6503 cm(exp -1) and 6076 cm(exp -1)

    NASA Technical Reports Server (NTRS)

    Giver, L. P.; Brown, L. R.; Wattson, R. B.; Spencer, M. N.; Chackerian, C., Jr.; Strawa, Anthony W. (Technical Monitor)

    1995-01-01

    Rotationless band intensities and Herman-Wallis parameters are listed in HITRAN tabulations for several hundred CO2 overtone-combination bands. These parameters are based on laboratory measurements when available, and on DND calculations for the unmeasured bands. The DND calculations for the Fermi-interacting nv(sub 1) + v(sub 3) polyads show the a(sub 2) Herman-Wallis parameter varying smoothly from a negative value for the first member of the polyad to a positive value for the final member. Measurements of the v(sub 1) + v(sub 3) dyad are consistent with the DND calculations for the a(sub 2) parameter, as are our recent measurements of the 4v(sub 1) + v(sub 3) pentad. However, the measurement-based values in the HITRAN tables for the 2v(sub 1) + v(sub 3) triad and the 3v(sub 1) + v(sub 3) tetrad do not support the DND-calculated values for the a(sub 2) parameters. We therefore decided to make new measurements to improve some of these intensity parameters. With the McMath FTS at Kitt Peak National Observatory/National Solar Observatory we recorded several spectra of the 4000 to 8000 cm(exp -1) region of pure CO2 at 0.011 cm(exp -1) resolution using the 6 meter White absorption cell. The signal/noise and absorbance of the first and fourth bands of the 3v(sub 1) + v(sub 3) tetrad of 12C16O2 were ideal on these spectra for measuring line intensities and broadening widths. Our self-broadening results agree with the HITRAN parameterization, while our measurements of the rotationless band intensities are about 15% less than the HITRAN values. We find a negative value of a(sub 2) for the 30011-00001 band and a positive value for the 30014-00001 band, whereas the HITRAN values of a(sub 2) are positive for all four tetrad bands. Our a(sub 1) and a(sub 2) Herman-Wallis parameters are closer to the DND-calculated values than the 1992 HITRAN values for both the 30011-00001 and the 30014-00001 bands.
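
    The a(sub 1) and a(sub 2) parameters enter individual line intensities through the Herman-Wallis factor. A common parameterization is F(m) = (1 + a1*m + a2*m^2)^2, with m = -J'' for P-branch and m = J'' + 1 for R-branch lines; conventions differ between compilations, so treat this form and the sample coefficients below as assumptions rather than the paper's exact values:

```python
def herman_wallis_factor(m, a1, a2):
    """F(m) = (1 + a1*m + a2*m**2)**2, multiplying the rotationless
    line intensity; m = -J'' (P branch) or J'' + 1 (R branch)."""
    return (1.0 + a1 * m + a2 * m * m) ** 2

# By construction the factor is exactly 1 at m = 0, and a nonzero a2
# bends the band envelope symmetrically in m.
f_at_origin = herman_wallis_factor(0, 0.002, -1e-5)
f_r_branch = herman_wallis_factor(10, 0.0, 1e-4)  # (1 + 0.01)^2
```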

  2. Estimates of electronic coupling for excess electron transfer in DNA

    NASA Astrophysics Data System (ADS)

    Voityuk, Alexander A.

    2005-07-01

    Electronic coupling Vda is one of the key parameters that determine the rate of charge transfer through DNA. While there have been several computational studies of Vda for hole transfer, estimates of electronic couplings for excess electron transfer (ET) in DNA remain unavailable. In this paper, an efficient strategy is established for calculating the ET matrix elements between base pairs in a π stack. Two approaches are considered. First, we employ the diabatic-state (DS) method in which donor and acceptor are represented by radical anions of the canonical base pairs adenine-thymine (AT) and guanine-cytosine (GC). In this approach, similar values of Vda are obtained with the standard 6-31G* and extended 6-31++G** basis sets. Second, the electronic couplings are derived from the lowest unoccupied molecular orbitals (LUMOs) of neutral systems by using the generalized Mulliken-Hush or fragment charge methods. Because the radical-anion states of AT and GC are well reproduced by LUMOs of the neutral base pairs calculated without diffuse functions, the estimated values of Vda are in good agreement with the couplings obtained for radical-anion states using the DS method. However, when the calculation of a neutral stack is carried out with diffuse functions, the LUMOs of the system exhibit dipole-bound character and cannot be used for estimating electronic couplings. Our calculations suggest that the ET matrix elements Vda for models containing intrastrand thymine and cytosine bases are substantially larger than the couplings in complexes with interstrand pyrimidine bases. The matrix elements for excess electron transfer are found to be considerably smaller than the corresponding values for hole transfer and to be very responsive to structural changes in a DNA stack.
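
    For the two-state generalized Mulliken-Hush (GMH) scheme mentioned above, the coupling follows from the adiabatic energy gap, the transition dipole, and the dipole-moment difference. A sketch of the standard two-state GMH expression; the input values are illustrative, not results from this paper:

```python
import math

def gmh_coupling(delta_e, mu_trans, delta_mu):
    """Two-state GMH coupling:
    V = |mu12| * dE / sqrt(dmu^2 + 4*mu12^2),
    where dE is the adiabatic energy gap, mu12 the transition dipole,
    and dmu the difference of adiabatic dipole moments (consistent units)."""
    return abs(mu_trans) * delta_e / math.sqrt(delta_mu**2 + 4.0 * mu_trans**2)

# Illustrative numbers: dE = 0.4 eV, mu12 = 1.0 a.u., dmu = 0 (symmetric case),
# where V reduces to dE/2.
v_symmetric = gmh_coupling(0.4, 1.0, 0.0)
```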

  3. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR PROBABILISTIC APPROACH FOR CALCULATING INGESTION EXPOSURE FROM DAY 4 COMPOSITE MEASUREMENTS, THE DIRECT METHOD OF EXPOSURE ESTIMATION (IIT-A-15.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures undertaken to calculate the ingestion exposure using composite food chemical residue values from the day of direct measurements. The calculation is based on the probabilistic approach. This SOP uses data that have been proper...

  4. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    NASA Astrophysics Data System (ADS)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. 
All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used projection-based MAR Algorithm and the Dual-Energy Method. Calculated Doses (Mean, Minimum, and Maximum Doses) to the planning treatment volume (PTV) were compared and homogeneity index (HI) calculated. Results: (1) Without the GSI-based MAR application, a percent error between mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error was decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between with and without using the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for minimum dose difference, and 1.4-1.7% difference on the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for dual-energy method and 0.063-0.141 with projection-based MAR algorithm were also calculated. Conclusion: (1) Percent error without using the GSI-based MAR algorithm may deviate as high as 5.7%. This error invalidates the goal of Radiation Therapy to provide a more precise treatment. Thus, GSI-based MAR algorithm was desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of different techniques but deviation was evident on the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null values. 
In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than with or without GE MAR Algorithm.
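
    Several definitions of the homogeneity index exist, and the thesis abstract does not state which one it uses; one common choice, shown here as an assumption, is HI = (Dmax - Dmin) / Dprescribed, where smaller values indicate a more homogeneous dose to the PTV:

```python
def homogeneity_index(d_max, d_min, d_prescribed):
    """One common homogeneity index: HI = (Dmax - Dmin) / Dprescribed.
    All doses in the same units (e.g. Gy); 0 means perfectly homogeneous."""
    return (d_max - d_min) / d_prescribed

# Hypothetical plan: Dmax = 52 Gy, Dmin = 48 Gy, prescription = 50 Gy.
hi = homogeneity_index(52.0, 48.0, 50.0)
```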

  5. Surface Tension of Liquid Alkali, Alkaline, and Main Group Metals: Theoretical Treatment and Relationship Investigations

    NASA Astrophysics Data System (ADS)

    Aqra, Fathi; Ayyad, Ahmed

    2011-09-01

    An improved theoretical method for calculating the surface tension of liquid metals is proposed. A recently derived equation based on statistical thermodynamics, which allows an accurate estimate of surface tension to be made for a large number of elements, is used as a means of calculating reliable values for the surface tension of pure liquid alkali, alkaline earth, and main group metals at the melting point. In order to increase the validity of the model, the surface tension of liquid lithium was calculated in the temperature range 454 K to 1300 K (181 °C to 1027 °C), where the calculated surface tension values follow a straight-line behavior given by γ = 441 - 0.15 (T-Tm) (mJ m-2). The calculated surface excess entropy of liquid Li (-dγ/dT) was found to be 0.15 mJ m-2 K-1, which agrees well with the reported experimental value (0.147 mJ m-2 K-1). Moreover, the relations of the calculated surface tension of the alkali metals to atomic radius, heat of fusion, and specific heat capacity are described. The results are in excellent agreement with the existing experimental data.
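
    The linear fit quoted above for lithium is easy to evaluate directly. A sketch, taking Tm = 454 K from the stated temperature range:

```python
def surface_tension_liquid_li(t_kelvin, t_melt=454.0):
    """gamma(T) = 441 - 0.15*(T - Tm) in mJ/m^2,
    valid over the fitted range 454 K <= T <= 1300 K."""
    return 441.0 - 0.15 * (t_kelvin - t_melt)

gamma_at_melt = surface_tension_liquid_li(454.0)   # 441 mJ/m^2
gamma_at_1300 = surface_tension_liquid_li(1300.0)  # 441 - 0.15*846
```

The slope of this line is the surface excess entropy -dγ/dT = 0.15 mJ m-2 K-1 reported in the abstract.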

  6. Collisional Shift and Broadening of Iodine Spectral Lines in Air Near 543 nm

    NASA Technical Reports Server (NTRS)

    Fletcher, D. G.; McDaniel, J. C.

    1995-01-01

    The collisional processes that influence the absorption of monochromatic light by iodine in air have been investigated. Measurements were made in both a static cell and an underexpanded jet flow over the range of properties encountered in typical compressible-flow aerodynamic applications. Experimentally measured values of the collisional shift and broadening coefficients were 0.058 +/- 0.004 and 0.53 +/- 0.010 GHz K(exp 0.7)/torr, respectively. The measured shift value showed reasonable agreement with theoretical calculations based on Lindholm-Foley collisional theory for a simple dispersive potential. The measured collisional broadening showed less favorable agreement with the calculated value.

  7. The Thermochemical Stability of Ionic Noble Gas Compounds.

    ERIC Educational Resources Information Center

    Purser, Gordon H.

    1988-01-01

    Presents calculations that suggest stoichiometric, ionic, and noble gas-metal compounds may be stable. Bases calculations on estimated values of electron affinity, anionic radius for the noble gases and for the Born exponents of resulting crystals. Suggests the desirability of experiments designed to prepare compounds containing anionic,…

  8. Ab initio calculations of the lattice dynamics of silver halides

    NASA Astrophysics Data System (ADS)

    Gordienko, A. B.; Kravchenko, N. G.; Sedelnikov, A. N.

    2010-12-01

    Based on ab initio pseudopotential calculations, the results of investigations of the lattice dynamics of silver halides AgHal (Hal = Cl, Br, I) are presented. Equilibrium lattice parameters, phonon spectra, frequency densities and effective atomic-charge values are obtained for all types of crystals under study.

  9. pKa Predictions for Proteins, RNAs and DNAs with the Gaussian Dielectric Function Using DelPhiPKa

    PubMed Central

    Wang, Lin; Li, Lin; Alexov, Emil

    2015-01-01

    We developed a Poisson-Boltzmann based approach to calculate the pKa values of protein ionizable residues (Glu, Asp, His, Lys and Arg), nucleotides of RNA and single-stranded DNA. Two novel features were utilized: the dielectric properties of the macromolecules and water phase were modeled via the smooth Gaussian-based dielectric function in DelPhi, and the corresponding electrostatic energies were calculated without defining the molecular surface. We tested the algorithm by calculating pKa values for more than 300 residues from 32 proteins from the PPD dataset and achieved an overall RMSD of 0.77. In particular, an RMSD of 0.55 was achieved for surface residues, and an RMSD of 1.1 for buried residues. The approach was also found capable of capturing the large pKa shifts of various single point mutations in staphylococcal nuclease (SNase) from the pKa-cooperative dataset, resulting in an overall RMSD of 1.6 for this set of pKa values. Investigations showed that predictions for most buried mutant residues of SNase could be improved by using higher dielectric constant values. Furthermore, an option to generate different hydrogen positions also improves pKa predictions for buried carboxyl residues. Finally, the pKa calculations on two RNAs demonstrated the capability of this approach for other types of biomolecules. PMID:26408449

  10. Health-Based Screening Levels and their Application to Water-Quality Data

    USGS Publications Warehouse

    Toccalino, Patricia L.; Zogorski, John S.; Norman, Julia E.

    2005-01-01

    To supplement existing Federal drinking-water standards and guidelines, thereby providing a basis for a more comprehensive evaluation of contaminant-occurrence data in a human-health context, USGS began a collaborative project in 1998 with USEPA, the New Jersey Department of Environmental Protection, and the Oregon Health & Science University to calculate non-enforceable health-based screening levels. Screening levels were calculated for contaminants that do not have Maximum Contaminant Level values using a consensus approach that entailed (1) standard USEPA Office of Water methodologies (equations) for establishing Lifetime Health Advisory (LHA) and Risk-Specific Dose (RSD) values for the protection of human health, and (2) existing USEPA human-health toxicity information.
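
    The abstract cites standard USEPA Office of Water equations without reproducing them. A sketch of the general form of those two calculations; the default adult body weight (70 kg), drinking-water intake (2 L/day), and relative source contribution (20%) are conventional assumptions, and the toxicity inputs below are illustrative, not values from this project:

```python
def lifetime_health_advisory(rfd_mg_per_kg_day, rsc=0.2,
                             body_weight_kg=70.0, water_intake_l_day=2.0):
    """LHA (mg/L) for a noncarcinogen:
    DWEL = RfD * BW / DWI, then LHA = DWEL * RSC."""
    dwel = rfd_mg_per_kg_day * body_weight_kg / water_intake_l_day
    return dwel * rsc

def risk_specific_concentration(slope_factor_per_mg_kg_day, target_risk=1e-6,
                                body_weight_kg=70.0, water_intake_l_day=2.0):
    """Drinking-water concentration (mg/L) corresponding to a target
    lifetime cancer risk, given an oral cancer slope factor."""
    return target_risk * body_weight_kg / (slope_factor_per_mg_kg_day * water_intake_l_day)

# Illustrative inputs: RfD = 0.004 mg/kg/day; slope factor = 1.0 (mg/kg/day)^-1.
lha = lifetime_health_advisory(0.004)
rsd_conc = risk_specific_concentration(1.0)
```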

  11. FreeSolv: A database of experimental and calculated hydration free energies, with input files

    PubMed Central

    Mobley, David L.; Guthrie, J. Peter

    2014-01-01

    This work provides a curated database of experimental and calculated hydration free energies for small neutral molecules in water, along with molecular structures, input files, references, and annotations. We call this the Free Solvation Database, or FreeSolv. Experimental values were taken from prior literature and will continue to be curated, with updated experimental references and data added as they become available. Calculated values are based on alchemical free energy calculations using molecular dynamics simulations. These used the GAFF small molecule force field in TIP3P water with AM1-BCC charges. Values were calculated with the GROMACS simulation package, with full details given in references cited within the database itself. This database builds in part on a previous, 504-molecule database containing similar information. However, additional curation of both experimental data and calculated values has been done here, and the total number of molecules is now up to 643. Additional information is now included in the database, such as SMILES strings, PubChem compound IDs, accurate reference DOIs, and others. One version of the database is provided in the Supporting Information of this article, but as ongoing updates are envisioned, the database is now versioned and hosted online. In addition to providing the database, this work describes its construction process. The database is available free-of-charge via http://www.escholarship.org/uc/item/6sd403pz. PMID:24928188

  12. The use of elements of the Stewart model (Strong Ion Approach) for the diagnostics of respiratory acidosis on the basis of the calculation of a value of a modified anion gap (AGm) in brachycephalic dogs.

    PubMed

    Sławuta, P; Glińska-Suchocka, K; Cekiera, A

    2015-01-01

    Apart from the HH (Henderson-Hasselbalch) equation, the acid-base balance (ABB) of an organism is also described by the Stewart model, which assumes that the proper insight into the ABB of the organism is given by an analysis of: pCO2, the difference between the concentrations of strong cations and anions in the blood serum (SID), and the total concentration of nonvolatile weak acids (Acid total). The notion of an anion gap (AG), or the apparent lack of ions, is closely related to the acid-base balance described according to the HH equation. Its value mainly consists of negatively charged proteins, phosphates, and sulphates in blood. In human medicine, a modified anion gap is used which, by including the concentration of the protein buffer of blood, is in fact a combination of the apparent lack of ions derived from the classic model and from the Stewart model. In brachycephalic dogs, respiratory acidosis often occurs, caused by an overgrowth of the soft palate that obstructs free airflow and causes an increase in pCO2 (carbonic acid anhydride). The aim of the present paper was to answer the question whether, in the case of systemic respiratory acidosis, changes in the concentrations of buffering ions can also be seen. The study was carried out on 60 adult Boxer dogs in which, on the basis of endoscopic examination, a pronounced overgrowth of the soft palate requiring surgical correction was found. For each dog, the value of the anion gap before and after the palate correction procedure was calculated according to the equation AG = ([Na+ mmol/l] + [K+ mmol/l]) - ([Cl- mmol/l] + [HCO3- mmol/l]), as well as the value of the modified AG, according to the equation AGm = calculated AG + 2.5 x (albumins(r) - albumins(d)). 
The values of AG calculated for the dogs before and after the procedure fell within the limits of the reference values and did not differ significantly, whereas the values of AGm calculated before and after the procedure differed from each other significantly. 1) On the basis of the AGm values obtained, it should be stated that despite the respiratory acidosis found in the examined dogs, changes in ion concentrations can also be seen, which, according to the Stewart theory, compensate for metabolic ABB disorders. 2) Although all the values used for the calculation of AGm were within the limits of reference values, the AGm values in dogs before and after the soft palate correction procedure differed from each other significantly, which demonstrates the high sensitivity and usefulness of the AGm calculation as a diagnostic method.
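
    The two equations from the abstract translate directly into code. The electrolyte and albumin values below are a hypothetical panel, not data from the study (albumins(r) and albumins(d) denote the reference and determined albumin values, as used in the paper's equation):

```python
def anion_gap(na, k, cl, hco3):
    """AG = (Na+ + K+) - (Cl- + HCO3-), all concentrations in mmol/l."""
    return (na + k) - (cl + hco3)

def modified_anion_gap(ag, albumin_r, albumin_d):
    """AGm = calculated AG + 2.5 * (albumins(r) - albumins(d)),
    per the paper's equation."""
    return ag + 2.5 * (albumin_r - albumin_d)

# Hypothetical panel: Na 145, K 4, Cl 110, HCO3 21 mmol/l;
# reference albumin 4.4, determined albumin 3.2.
ag = anion_gap(145.0, 4.0, 110.0, 21.0)     # 18 mmol/l
agm = modified_anion_gap(ag, 4.4, 3.2)      # 18 + 2.5 * 1.2 = 21
```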

  13. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

    We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reaction energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean absolute error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  14. Quantifying N2O reduction to N2 based on N2O isotopocules - validation with independent methods (helium incubation and 15N gas flux method)

    NASA Astrophysics Data System (ADS)

    Lewicka-Szczebak, Dominika; Augustin, Jürgen; Giesemann, Anette; Well, Reinhard

    2017-02-01

    Stable isotopic analyses of soil-emitted N2O (δ15Nbulk, δ18O and δ15Nsp = 15N site preference within the linear N2O molecule) may help to quantify N2O reduction to N2, an important but rarely quantified process in the soil nitrogen cycle. The residual N2O fraction (remaining unreduced N2O, rN2O) can theoretically be calculated from the measured isotopic enrichment of the residual N2O. However, various N2O-producing pathways may also influence the N2O isotopic signatures and hence complicate the application of this isotopic fractionation approach. Here this approach was tested in laboratory soil incubations with two different soil types, applying two reference methods for the quantification of rN2O: helium incubation with direct measurement of the N2 flux, and the 15N gas flux method. This allowed a comparison of the measured rN2O values with those calculated from the isotopic enrichment of residual N2O. The results indicate that the performance of the N2O isotopic fractionation approach depends on the accompanying N2O and N2 source processes, and that the most critical step is the determination of the initial isotopic signature of N2O before reduction (δ0). We show that δ0 can be well determined experimentally if it is stable in time and can then be successfully applied for the determination of rN2O based on δ15Nsp values. Much more problematic are temporal changes of δ0 values, which lead to failure of the approach based on δ15Nsp values alone. For this case, we propose a dual N2O isotopocule mapping approach, in which calculations are based on the relation between δ18O and δ15Nsp values. This allows for the simultaneous estimation of the contributions of the N2O-producing pathways and the rN2O value.
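
    Calculating rN2O from the isotopic enrichment of the residual N2O is usually done with a closed-system Rayleigh model. A sketch of that calculation under the common first-order approximation δ ≈ δ0 + ε·ln(r); the δ0 and ε values below are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def residual_n2o_fraction(delta, delta0, eps):
    """Closed-system Rayleigh approximation:
    delta ≈ delta0 + eps * ln(r)  =>  r = exp((delta - delta0) / eps),
    where delta is the measured residual-N2O signature (e.g. d15Nsp, permil),
    delta0 the initial signature before reduction, and eps the net
    isotope enrichment factor of N2O reduction (permil, typically negative)."""
    return math.exp((delta - delta0) / eps)

# Illustrative numbers: delta0 = 0 permil, eps = -5 permil,
# measured residual d15Nsp = 3.5 permil.
r_n2o = residual_n2o_fraction(3.5, 0.0, -5.0)
```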

  15. A comparison of methods for calculating population exposure estimates of daily weather for health research.

    PubMed

    Hanigan, Ivan; Hall, Gillian; Dear, Keith B G

    2006-09-13

    To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations but the rationale for using one technique rather than another; the significance of the difference in the values obtained; and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Options based on values derived from sites internal to postal areas, or from nearest neighbour sites--that is, using proximity polygons around weather stations intersected with postal areas--tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within 50 kilometres radius of centroids and weighting of data by distance from centroids gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from the population weighted centroids and the population weighted average of sub-unit estimates. To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. 
The most appropriate method conceptually is the use of weather data from sites within 50 kilometres radius of the area weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid.
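A minimal sketch of the preferred option, assuming simple inverse-distance weighting of stations within a 50 km radius of a centroid (the study's population weighting of centroids and averages is more involved than this):

```python
def weighted_station_average(observations, max_km=50.0):
    """observations: (distance_km, value) pairs for one day, where the
    distance is from the area's centroid to each weather station.
    Returns the inverse-distance-weighted mean of stations within
    max_km, or None when no station qualifies (a missing value)."""
    in_range = [(d, v) for d, v in observations if d <= max_km and v is not None]
    if not in_range:
        return None
    # cap very small distances so a station at the centroid
    # does not dominate with a near-infinite weight
    weights = [1.0 / max(d, 0.1) for d, _ in in_range]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, in_range)) / total
```

With this scheme an area whose nearest station is 60 km away simply yields a missing value, illustrating why the 50 km radius options gave more complete estimates than proximity-polygon or internal-station options only when station density was adequate.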

  16. SU-E-CAMPUS-I-05: Internal Dosimetric Calculations for Several Imaging Radiopharmaceuticals in Preclinical Studies and Quantitative Assessment of the Mouse Size Impact On Them. Realistic Monte Carlo Simulations Based On the 4D-MOBY Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostou, T; Papadimitroulas, P; Kagadis, GC

    2014-06-15

Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied to two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed in GATE to determine dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models differed by 0.69% in brain mass, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target organs were calculated for each isotope, with the whole mouse body selected as the source organ. Differences in the S-values were observed in the 6.0–30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. The impact of the mouse model size is rather high (up to 30% for a 17.65% difference in total mass), and thus accurate definition of organ mass is a crucial parameter for self-absorbed S-value calculation. Our goal is to extend the study to accurate estimations in small-animal imaging, since it is known that there is large variability in organ anatomy.

  17. On the accuracy and reproducibility of a novel probabilistic atlas-based generation for calculation of head attenuation maps on integrated PET/MR scanners.

    PubMed

    Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian

    2017-03-01

    To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
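The voxel-wise absolute relative change metric used for both the accuracy and reproducibility assessments can be sketched as below; the function name and the percentage convention are assumptions, not the authors' code.

```python
import numpy as np

def absolute_relative_change(pet_mr, pet_ct, mask=None):
    """Voxel-wise absolute relative change (RC, in %) between a PET
    volume reconstructed with an MR-based mu-map and the CT-based
    reference. An optional boolean mask restricts the comparison to
    voxels inside the head."""
    pet_mr = np.asarray(pet_mr, dtype=float)
    pet_ct = np.asarray(pet_ct, dtype=float)
    rc = np.abs(pet_mr - pet_ct) / pet_ct * 100.0
    if mask is not None:
        rc = rc[mask]
    return float(rc.mean()), float(rc.std())
```

Reported as mean ± standard deviation, this is the form of the 1.76 ± 2.33% accuracy figure; the same metric applied pair-wise across the three visits yields the 0.65 ± 0.95% reproducibility figure.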

  18. Electro-mechanical Properties of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

Anantram, M. P.; Yang, Liu; Han, Jie; Liu, J. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

We present a simple picture to understand the bandgap variation of carbon nanotubes with small tensile and torsional strains, independent of chirality. Using this picture, we are able to predict a simple dependence of d(Bandgap)/d(strain) on the value of (N_x - N_y) mod 3 for semiconducting tubes. We also predict a novel change in sign of d(Bandgap)/d(strain) as a function of tensile strain, arising from a change in the value of q corresponding to the minimum bandgap. These calculations are complemented by calculations of the change in bandgap using energy-minimized structures, and some important differences are discussed. The calculations are based on the π-electron approximation.

  19. Neural computing thermal comfort index PMV for the indoor environment intelligent control system

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Chen, Yifei

    2013-03-01

Providing indoor thermal comfort and saving energy are the two main goals of an indoor environmental control system. An intelligent comfort control system combining intelligent control and minimum-power control strategies for the indoor environment is presented in this paper. To realize comfort control, the predicted mean vote (PMV) is designed as the control goal and, using corrected PMV formulas, is optimized to improve the indoor comfort level by considering six comfort-related variables. In addition, an RBF neural network based on a genetic algorithm is designed to calculate PMV, for better performance and to better handle the nonlinearity of the PMV calculation. Formulas are given for calculating the expected output values from the input samples, and the RBF network model is trained on the input samples and the expected output values. The simulation results show that the design of the intelligent calculation method is valid. Moreover, the method achieves high precision, fast dynamic response and good system performance, and it can be used in practice within the required calculation error.

  20. General Procedure for the Easy Calculation of pH in an Introductory Course of General or Analytical Chemistry

    ERIC Educational Resources Information Center

    Cepriá, Gemma; Salvatella, Luis

    2014-01-01

    All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pK[subscript a] values of the acids involved in the…

  1. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    PubMed

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.

  2. The effects of variations in parameters and algorithm choices on calculated radiomics feature values: initial investigations and comparisons to feature variability across CT image acquisition conditions

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael

    2018-02-01

Translation of radiomics into clinical practice requires confidence in its interpretations. This may be obtained via understanding and overcoming the limitations in current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined a few factors that are potential sources of inconsistency in characterizing lung nodules, such as 1) different choices of parameters and algorithms in feature calculation, 2) two CT image dose levels, and 3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy textural features of lung nodules. CT images of 19 lung nodules from our lung cancer screening program were identified by a CAD tool, which also provided contours. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across different image acquisition parameters to illustrate the reproducibility of features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness less often and caused more variation in entropy feature values and their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. Results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.
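A GLCM entropy feature of the kind studied here can be sketched as follows; the gray-level quantization, offset, and log base are exactly the sort of parameter choices the study identifies as sources of variability, and the specific defaults below are illustrative assumptions.

```python
import numpy as np

def glcm_entropy(image, levels=8, offset=(0, 1)):
    """Shannon entropy of a gray-level co-occurrence matrix (GLCM).
    levels: number of quantized gray levels; offset: (row, col)
    displacement between co-occurring pixels. Both are free parameters
    whose choice changes the resulting feature value."""
    img = np.asarray(image, dtype=float)
    # quantize to `levels` gray levels (epsilon guards a constant image)
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))
```

A constant region gives zero entropy, while alternating texture gives a positive value; changing `levels` or `offset` on the same nodule shifts the number, which is the harmonization problem the abstract describes.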

  3. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
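Under a plain reading of the description above, a mixout unit's forward pass might be sketched like this; the Bernoulli mixing probability and the softmax form of the "exponential probabilities" are assumptions about the paper's formulation, not its exact equations.

```python
import numpy as np

def mixout(candidates, bernoulli_p=0.5, rng=None):
    """Sketch of a mixout unit over k candidate feature values per
    position (candidates has shape (k, ...)). A softmax over the k
    candidates supplies the 'exponential probabilities'; the unit then
    mixes the subspace maximum with the softmax-weighted expectation
    via a Bernoulli draw. bernoulli_p is an assumed hyperparameter."""
    z = np.asarray(candidates, dtype=float)
    e = np.exp(z - z.max(axis=0, keepdims=True))  # numerically stable softmax
    p = e / e.sum(axis=0, keepdims=True)
    expected = (p * z).sum(axis=0)   # expectation under the softmax weights
    maximum = z.max(axis=0)          # classic maxout output
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(maximum.shape) < bernoulli_p
    return np.where(mask, maximum, expected)
```

Setting `bernoulli_p=1.0` recovers plain maxout, while `bernoulli_p=0.0` uses only the expectation, so non-maximal candidate features still contribute to the output.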

  4. Scaling Atomic Partial Charges of Carbonate Solvents for Lithium Ion Solvation and Diffusion

    DOE PAGES

    Chaudhari, Mangesh I.; Nair, Jijeesh R.; Pratt, Lawrence R.; ...

    2016-10-21

Lithium-ion solvation and diffusion properties in ethylene carbonate (EC) and propylene carbonate (PC) were studied by molecular simulation, experiments, and electronic structure calculations. Studies carried out in water provide a reference for interpretation. Classical molecular dynamics simulation results are compared to ab initio molecular dynamics to assess nonpolarizable force field parameters for solvation structure of the carbonate solvents. Quasi-chemical theory (QCT) was adapted to take advantage of fourfold occupancy of the near-neighbor solvation structure observed in simulations and used to calculate solvation free energies. The computed free energy for transfer of Li+ to PC from water, based on electronic structure calculations with cluster-QCT, agrees with the experimental value. The simulation-based direct-QCT results with scaled partial charges agree with the electronic structure-based QCT values. The computed Li+/PF6- transference numbers of 0.35/0.65 (EC) and 0.31/0.69 (PC) agree well with NMR experimental values of 0.31/0.69 (EC) and 0.34/0.66 (PC) and similar values obtained here with impedance spectroscopy. These combined results demonstrate that solvent partial charges can be scaled in systems dominated by strong electrostatic interactions to achieve trends in ion solvation and transport properties that are comparable to ab initio and experimental results. Thus, the results support the use of scaled partial charges in simple, nonpolarizable force fields in future studies of these electrolyte solutions.

  5. The Principal Axis Approach to Value-Added Calculation

    ERIC Educational Resources Information Center

    He, Qingping; Tymms, Peter

    2014-01-01

    The assessment of the achievement of students and the quality of schools has drawn increasing attention from educational researchers, policy makers, and practitioners. Various test-based accountability and feedback systems involving the use of value-added techniques have been developed for evaluating the effectiveness of individual teaching…

  6. Research on the Value Evaluation of Used Pure Electric Car Based on the Replacement Cost Method

    NASA Astrophysics Data System (ADS)

    Tan, zhengping; Cai, yun; Wang, yidong; Mao, pan

    2018-03-01

In this paper, the value evaluation of used pure electric cars is carried out by the replacement cost method, filling a gap in the value evaluation of electric vehicles. Starting from the basic principle of the replacement cost method, combined with the actual cost of pure electric cars, a method is proposed for calculating the cost-to-new rate of a second-hand electric car, in which the AHP method is used to construct a weight matrix of comprehensive adjustment coefficients for the related factors, thereby improving the value evaluation system for second-hand cars.
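Under the replacement cost method as described, a used-car value can be sketched as replacement cost times a cost-to-new (condition) rate times an AHP-weighted adjustment coefficient; the factor names, scores and weights below are purely illustrative, not from the paper.

```python
def appraise_used_ev(replacement_cost, condition_rate, factor_scores, factor_weights):
    """Replacement cost method sketch:
        value = replacement cost x cost-to-new rate x adjustment,
    where the adjustment coefficient aggregates factor scores with
    AHP-derived weights. All numbers used below are illustrative."""
    assert abs(sum(factor_weights) - 1.0) < 1e-9  # AHP weights sum to 1
    adjustment = sum(s * w for s, w in zip(factor_scores, factor_weights))
    return replacement_cost * condition_rate * adjustment

# Hypothetical factors: battery health, mileage, market demand.
value = appraise_used_ev(100000.0, 0.6, [0.9, 1.0, 0.8], [0.5, 0.3, 0.2])
```

In practice the AHP step itself (pairwise comparison matrices and consistency checks) produces the weights; only the final aggregation is shown here.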

  7. [Calculation on ecological security baseline based on the ecosystem services value and the food security].

    PubMed

    He, Ling; Jia, Qi-jian; Li, Chao; Xu, Hao

    2016-01-01

The rapid development of the coastal economy in Hebei Province caused a rapid transition of the coastal land use structure, which has threatened land ecological security. Therefore, calculating the ecosystem service value of land use and exploring the ecological security baseline can provide a basis for regional ecological protection and rehabilitation. Taking Huanghua, a city in the southeast of Hebei Province, as an example, this study explored the joint point, joint path and joint method between ecological security and food security, and then calculated the ecological security baseline of Huanghua City based on the ecosystem service value and the food safety standard. The results showed that ecosystem service value per unit area decreased in this order: wetland, water, garden, cultivated land, meadow, other land, salt pans, saline and alkaline land, constructive land. The contribution rates of the ecological function values, from high to low, were nutrient recycling, water conservation, entertainment and culture, material production, biodiversity maintenance, gas regulation, climate regulation and environmental purification. The security baseline of grain production was 0.21 kg · m⁻², the security baseline of grain output value was 0.41 yuan · m⁻², the baseline of ecosystem service value was 21.58 yuan · m⁻², and the total ecosystem service value in the research area was 4.244 billion yuan. In 2081 the ecological security will reach the bottom line and the human-dominated ecological system will be on the verge of collapse. According to the ecological security status, Huanghua can be divided into 4 zones, i.e., ecological core protection zone, ecological buffer zone, ecological restoration zone and human activity core zone.

  8. Calculation of the surface tension of liquid Ga-based alloys

    NASA Astrophysics Data System (ADS)

    Dogan, Ali; Arslan, Hüseyin

    2018-05-01

As is known, Eyring and his collaborators applied structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi into Ga leads to a significant decrease in the surface tension of the three Ga-based alloy systems, especially for Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviation from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and corresponding literature data indicates good agreement.

  9. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
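A generic Monte Carlo estimate of a posterior predictive p-value (not the paper's Hubble-expansion test problem) can be sketched as:

```python
import numpy as np

def posterior_predictive_pvalue(observed, posterior_draws, simulate, statistic, rng=None):
    """Monte Carlo posterior predictive p-value: for each posterior
    parameter draw, simulate a replicate data set and count how often
    the replicate's discrepancy statistic reaches the observed one."""
    rng = np.random.default_rng(0) if rng is None else rng
    t_obs = statistic(observed)
    exceed = sum(statistic(simulate(theta, rng)) >= t_obs for theta in posterior_draws)
    return exceed / len(posterior_draws)

# Toy check with an assumed normal model (not the paper's setup):
# data equal to the model's mean, posterior mass concentrated at the
# true mean, discrepancy statistic = sample mean; p should be near 0.5.
obs = np.zeros(10)
p = posterior_predictive_pvalue(
    obs,
    posterior_draws=[0.0] * 200,
    simulate=lambda theta, rng: rng.normal(theta, 1.0, size=10),
    statistic=lambda x: float(np.mean(x)),
)
```

An extreme p (near 0 or 1) flags model misfit; the paper's point is that a global goodness-of-fit p-value computed from the posterior densities tracks this quantity closely.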

  10. Osmotic potential calculations of inorganic and organic aqueous solutions over wide solute concentration levels and temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochrane, T. T., E-mail: agteca@hotmail.com; Cochrane, T. A., E-mail: tom.cochrane@canterbury.ac.nz

Purpose: To demonstrate that the authors’ new “aqueous solution vs pure water” equation to calculate osmotic potential may be used to calculate the osmotic potentials of inorganic and organic aqueous solutions over wide ranges of solute concentrations and temperatures. Currently, the osmotic potentials of solutions used for medical purposes are calculated from equations based on the thermodynamics of the gas laws, which are only accurate at low temperature and solute concentration levels. Some solutions used in medicine may need their osmotic potentials calculated more accurately to take into account solute concentrations and temperatures. Methods: The authors experimented with their new equation for calculating the osmotic potentials of inorganic and organic aqueous solutions up to and beyond body temperatures by adjusting three of its factors: (a) the volume property of pure water, (b) the number of “free” water molecules per unit volume of solution, “N{sub f},” and (c) the “t” factor expressing the cooperative structural relaxation time of pure water at given temperatures. Adequate information on the volume property of pure water at different temperatures is available in the literature. However, as little information was available on the relative densities of inorganic and organic solutions at varying temperatures, needed to calculate N{sub f}, provisional equations were formulated to approximate values. Those values, together with tentative t values for different temperatures chosen from values calculated by different workers, were substituted into the authors’ equation to demonstrate how osmotic potentials could be estimated over temperatures up to and beyond bodily temperatures.
Results: The provisional equations formulated to calculate N{sub f}, the number of free water molecules per unit volume of inorganic and organic solute solutions, respectively, over wide concentration ranges compared well with calculations of N{sub f} using recorded relative density data at 20 °C. They were subsequently used to estimate N{sub f} values at temperatures up to and in excess of body temperatures. Those values, together with t values at temperatures up to and in excess of body temperatures recorded in the literature, were substituted in the authors’ equation for the provisional calculation of osmotic potentials. The calculations indicated that solution temperatures and solute concentrations have a marked effect on osmotic potentials. Conclusions: Following work to measure the relative densities of aqueous solutions for the calculation of N{sub f} values and the determination of definitive t values up to and beyond bodily temperatures, the authors’ equation would enable accurate estimation of the osmotic potentials of aqueous solutions of inorganic and organic solutes over wide concentration and temperature ranges. The study illustrates that not only solute concentrations but also temperatures have a marked effect on osmotic potentials, an observation of medical and biological significance.

  11. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematein-eosin staining, the R and B channels reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space reflect the texture features of the image better. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-corrected HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update each pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to gray value changes in pathological images. Lastly, the HLAC templates are used to calculate the features of the updated image. The experimental results show that the improved multispatial mapping features have better classification performance for liver cancer. PMID:27022407
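The average-correction step, and one example of a first-order HLAC mask feature, can be sketched as below; the helper names are hypothetical and the single horizontal mask stands in for the full 3x3 HLAC template set.

```python
import numpy as np

def average_correction(image):
    """Average-correction step described above: replace each pixel
    with the absolute difference between its gray value and the
    image's mean gray value, before applying the HLAC templates."""
    img = np.asarray(image, dtype=float)
    return np.abs(img - img.mean())

def hlac_first_order_horizontal(image):
    """One first-order HLAC mask feature: autocorrelation of each
    pixel with its right-hand neighbour, summed over the image."""
    img = np.asarray(image, dtype=float)
    return float((img[:, :-1] * img[:, 1:]).sum())
```

The full feature vector would apply all HLAC masks to the average-corrected image in each of the four mapped spaces (entropy, LBP, R, B).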

  12. Global Pattern of Potential Evaporation Calculated from the Penman-Monteith Equation Using Satellite and Assimilated Data

    NASA Technical Reports Server (NTRS)

    Choudhury, Bhaskar J.

    1997-01-01

Potential evaporation (E(0)) has been found to be useful in many practical applications and in research for setting a reference level for actual evaporation. All previous estimates of regional or global E(0) are based upon empirical formulae using climatological or meteorological measurements at isolated stations (i.e., point data). However, the Penman-Monteith equation provides a physically based approach for computing E(0), and by comparing 20 different methods of estimating E(0), Jensen et al. (1990) showed that the Penman-Monteith equation provides the most accurate estimate of monthly E(0) from well-watered grass or alfalfa. In the present study, monthly total E(0) for 24 months (January 1987 to December 1988) was calculated from the Penman-Monteith equation, with a prescribed albedo of 0.23 and surface resistance of 70 s/m, which are considered to be representative of actively growing, well-watered grass covering the ground. These calculations were done using spatially representative data derived from satellite observations and data assimilation results. Satellite observations were used to obtain solar radiation, fractional cloud cover, air temperature, and vapor pressure, while four-dimensional data assimilation results were used to calculate the aerodynamic resistance. Meteorological data derived from satellite observations were compared with surface measurements to provide a measure of accuracy. The accuracy of the calculated E(0) values was assessed by comparison with lysimeter observations of evaporation from well-watered grass at 35 widely distributed locations, while recognizing that the period of the present calculations was not concurrent with the lysimeter measurements and that the spatial scales of the measurements and calculations are vastly different. These comparisons suggest that the error in the calculated E(0) values is unlikely to exceed, on average, 20% for any month or location, and is more likely to be about 15%. These uncertainties are difficult to quantify for mountainous areas or locations close to extensive water bodies. The difference between the calculated and observed E(0) is about 5% when all months and locations are considered. Errors are expected to be less than 15% for averages of E(0) over large areas or several months. Further comparisons with lysimeter observations could provide a better appraisal of the calculated values. The global pattern of E(0) is presented, together with zonal average values.
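The Penman-Monteith calculation for a grass reference surface is commonly written in the FAO-56 daily form, which folds the 0.23 albedo and 70 s/m surface resistance into standard coefficients; this sketch shows the equation's structure and is not the study's implementation:

```python
import math

def fao56_reference_et(rn, g, t_mean, u2, es, ea, gamma=0.0665):
    """FAO-56 daily Penman-Monteith reference evapotranspiration
    (mm/day) for a well-watered grass surface (albedo 0.23 and a
    70 s/m surface resistance are folded into the 900 and 0.34
    coefficients). rn, g: net radiation and soil heat flux in
    MJ m-2 day-1; t_mean: mean air temperature in deg C; u2: wind
    speed at 2 m in m/s; es, ea: saturation and actual vapour
    pressure in kPa; gamma: psychrometric constant (sea-level default)."""
    # slope of the saturation vapour pressure curve (kPa/degC)
    delta = (4098.0 * 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
             / (t_mean + 237.3) ** 2)
    numerator = (0.408 * delta * (rn - g)
                 + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea))
    denominator = delta + gamma * (1.0 + 0.34 * u2)
    return numerator / denominator
```

With no available energy and no vapour pressure deficit the estimate is zero, and typical mid-latitude daily inputs give values of a few mm/day, the scale on which the 15-20% errors quoted above should be read.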

  13. Simplex volume analysis for finding endmembers in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Song, Meiping; Chang, Chein-I.

    2015-05-01

Using maximal simplex volume as an optimal criterion for finding endmembers is a common approach and has been widely studied in the literature. Interestingly, very little work has been reported on how simplex volume is calculated. It turns out that the issue of calculating simplex volume is much more complicated and involved than we may think. This paper investigates the issue from two different aspects: geometric structure and eigen-analysis. The geometric approach derives from the simplex structure itself, whose volume can be calculated by multiplying its base by its height. On the other hand, eigen-analysis takes advantage of the Cayley-Menger determinant to calculate the simplex volume. The major issue with this approach arises when the matrix whose determinant is required is rank-deficient. To deal with this problem, two methods are generally considered. One is to perform data dimensionality reduction to make the matrix full rank. The drawback of this method is that the original volume is shrunk, and the volume found for a dimensionality-reduced simplex is not the true original simplex volume. The other is to use singular value decomposition (SVD) to find singular values for calculating the simplex volume. The dilemma of this method is its instability in numerical calculations. This paper explores all three of these methods for simplex volume calculation. Experimental results show that the geometric structure-based method yields the most reliable simplex volume.
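The Cayley-Menger route mentioned above computes an n-simplex volume directly from pairwise squared distances; a minimal sketch:

```python
import math
import numpy as np

def simplex_volume_cayley_menger(vertices):
    """Volume of an n-simplex from its n+1 vertices using the
    Cayley-Menger determinant:
        vol^2 = (-1)^(n+1) / (2^n * (n!)^2) * det(CM)
    where CM is the (n+2)x(n+2) matrix of pairwise squared distances
    bordered by a row and column of ones (0 in the corner)."""
    v = np.asarray(vertices, dtype=float)
    n = len(v) - 1
    d2 = np.square(v[:, None, :] - v[None, :, :]).sum(axis=-1)
    cm = np.ones((n + 2, n + 2))
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2
    vol2 = (-1) ** (n + 1) / (2 ** n * math.factorial(n) ** 2) * np.linalg.det(cm)
    return math.sqrt(max(vol2, 0.0))  # clip tiny negative round-off
```

For a degenerate (rank-deficient) simplex the determinant approaches zero, which is exactly the regime where the numerical instabilities discussed above appear.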

  14. The reliability of vertical jump tests between the Vertec and My Jump phone application.

    PubMed

    Yingling, Vanessa R; Castro, Dimitri A; Duong, Justin T; Malpartida, Fiorella J; Usher, Justin R; O, Jenny

    2018-01-01

The vertical jump is used to estimate sports performance capabilities and physical fitness in children, elderly, non-athletic and injured individuals. Different jump techniques and measurement tools are available to assess vertical jump height and peak power; however, their use is limited by access to laboratory settings, excessive cost and/or time constraints, thus making these tools oftentimes unsuitable for field assessment. A popular field test uses the Vertec and the Sargent vertical jump with countermovement; however, new low-cost, easy-to-use tools are becoming available, including the My Jump iOS mobile application (app). The purpose of this study was to assess the reliability of the My Jump relative to values obtained by the Vertec for the Sargent stand and reach vertical jump (VJ) test. One hundred and thirty-five healthy participants aged 18-39 years (94 males, 41 females) completed three maximal Sargent VJ with countermovement that were simultaneously measured using the Vertec and the My Jump. Jump heights were quantified for each jump and peak power was calculated using the Sayers equation. Four separate ICC estimates and their 95% confidence intervals were used to assess reliability. Two analyses (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, consistency, two-way mixed-effects model, while two others (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, absolute agreement, two-way mixed-effects model. Moderate to excellent reliability relative to the degree of consistency between the Vertec and My Jump values was found for jump height (ICC = 0.813; 95% CI [0.747-0.863]) and calculated peak power (ICC = 0.926; 95% CI [0.897-0.947]). However, poor to good reliability relative to absolute agreement for VJ height (ICC = 0.665; 95% CI [0.050-0.859]) and poor to excellent reliability relative to absolute agreement for peak power (ICC = 0.851; 95% CI [0.272-0.946]) between the Vertec and My Jump values were found; Vertec VJ height, and thus Vertec calculated peak power values, were significantly higher than those calculated from My Jump values (p < 0.0001). The My Jump app may provide a reliable measure of vertical jump height and calculated peak power in multiple field and laboratory settings without the need for costly equipment such as force plates or the Vertec. The reliability relative to the degree of consistency between the Vertec and the My Jump app was moderate to excellent. However, the reliability relative to absolute agreement between Vertec and My Jump values contained significant variation (based on CI values); thus, it is recommended that either the My Jump or the Vertec be used to assess VJ height in repeated-measures within-subjects designs; these measurement tools should not be considered interchangeable within subjects or in group measurement designs.
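The Sayers equation used here to calculate peak power from jump height and body mass is:

```python
def sayers_peak_power(jump_height_cm, body_mass_kg):
    """Sayers equation: estimated peak anaerobic power output (watts)
    from vertical jump height (cm) and body mass (kg):
        PAPw = 60.7 * height_cm + 45.3 * mass_kg - 2055
    """
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

# e.g. a 75 kg participant jumping 50 cm
power_w = sayers_peak_power(50.0, 75.0)
```

Because peak power is a linear function of jump height, any systematic difference between Vertec and My Jump height values propagates directly into the calculated power, which is why both height and power showed the same direction of disagreement.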

  15. The reliability of vertical jump tests between the Vertec and My Jump phone application

    PubMed Central

    Castro, Dimitri A.; Duong, Justin T.; Malpartida, Fiorella J.; Usher, Justin R.; O, Jenny

    2018-01-01

    Background The vertical jump is used to estimate sports performance capabilities and physical fitness in children, elderly, non-athletic and injured individuals. Different jump techniques and measurement tools are available to assess vertical jump height and peak power; however, their use is limited by access to laboratory settings, excessive cost and/or time constraints thus making these tools oftentimes unsuitable for field assessment. A popular field test uses the Vertec and the Sargent vertical jump with countermovement; however, new low cost, easy to use tools are becoming available, including the My Jump iOS mobile application (app). The purpose of this study was to assess the reliability of the My Jump relative to values obtained by the Vertec for the Sargent stand and reach vertical jump (VJ) test. Methods One hundred and thirty-five healthy participants aged 18–39 years (94 males, 41 females) completed three maximal Sargent VJ with countermovement that were simultaneously measured using the Vertec and the My Jump. Jump heights were quantified for each jump and peak power was calculated using the Sayers equation. Four separate ICC estimates and their 95% confidence intervals were used to assess reliability. Two analyses (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, consistency, two-way mixed-effects model, while two others (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, absolute agreement, two-way mixed-effects model. Results Moderate to excellent reliability relative to the degree of consistency between the Vertec and My Jump values was found for jump height (ICC = 0.813; 95% CI [0.747–0.863]) and calculated peak power (ICC = 0.926; 95% CI [0.897–0.947]). 
However, poor to good reliability relative to absolute agreement for VJ height (ICC = 0.665; 95% CI [0.050–0.859]) and poor to excellent reliability relative to absolute agreement for peak power (ICC = 0.851; 95% CI [0.272–0.946]) between the Vertec and My Jump values were found; Vertec VJ height, and thus, Vertec calculated peak power values, were significantly higher than those calculated from My Jump values (p < 0.0001). Discussion The My Jump app may provide a reliable measure of vertical jump height and calculated peak power in multiple field and laboratory settings without the need of costly equipment such as force plates or Vertec. The reliability relative to degree of consistency between the Vertec and My Jump app was moderate to excellent. However, the reliability relative to absolute agreement between Vertec and My Jump values contained significant variation (based on CI values), thus, it is recommended that either the My Jump or the Vertec be used to assess VJ height in repeated measures within subjects’ designs; these measurement tools should not be considered interchangeable within subjects or in group measurement designs. PMID:29692955
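The Sayers equation cited above estimates peak power from countermovement jump height and body mass. A minimal sketch of that calculation; the subject values used in the example are hypothetical, not from the study:

```python
def sayers_peak_power(jump_height_cm: float, body_mass_kg: float) -> float:
    """Sayers equation: peak power (W) from countermovement jump height and body mass."""
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

# Hypothetical example: a 45 cm jump by a 75 kg participant
pp = sayers_peak_power(45.0, 75.0)  # ~4074 W
```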

  16. Multiscale mapping of species diversity under changed land use using imaging spectroscopy.

    PubMed

    Paz-Kagan, Tarin; Caras, Tamir; Herrmann, Ittai; Shachak, Moshe; Karnieli, Arnon

    2017-07-01

    Land use changes are one of the most important factors causing environmental transformations and species diversity alterations. The aim of the current study was to develop a geoinformatics-based framework to quantify alpha and beta diversity indices in two sites in Israel with different land uses, i.e., an agricultural system of fruit orchards, an afforestation system of planted groves, and an unmanaged system of groves. The framework comprises four scaling steps: (1) classification of a tree species distribution (SD) map using imaging spectroscopy (IS) at a pixel size of 1 m; (2) estimation of local species richness by calculating the alpha diversity index for 30-m grid cells; (3) calculation of beta diversity for different land use categories and sub-categories at different sizes; and (4) calculation of the beta diversity difference between the two sites. The SD was classified based on a hyperspectral image with 448 bands within the 380-2500 nm spectral range and a spatial resolution of 1 m. Twenty-three tree species were classified with high overall accuracy values of 82.57% and 86.93% for the two sites. Significantly high values of the alpha index characterize the unmanaged land use, and the lowest values were calculated for the agricultural land use. In addition, high values of alpha indices were found at the borders between the polygons related to the "edge-effect" phenomenon, whereas low alpha indices were found in areas with high invasion species rates. The beta index value, calculated for 58 polygons, was significantly lower in the agricultural land use. The suggested framework of this study succeeded in quantifying land use effects on tree species distribution, evenness, and richness. IS and spatial statistics techniques offer an opportunity to study woody plant species variation with a multiscale approach that is useful for managing land use, especially under increasing environmental changes. © 2017 by the Ecological Society of America.
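The abstract computes an alpha diversity index per 30-m grid cell from 1-m species pixels but does not specify which index; a common choice is Shannon diversity, sketched below on a hypothetical cell (species names and counts are illustrative only):

```python
from collections import Counter
from math import log

def shannon_alpha(species_pixels):
    """Shannon diversity H' for one grid cell, given per-pixel species labels."""
    counts = Counter(species_pixels)
    n = len(species_pixels)
    return -sum((c / n) * log(c / n) for c in counts.values())

# Hypothetical 30 x 30 m cell of 1 m pixels (900 labels)
cell = ["pine"] * 500 + ["oak"] * 300 + ["cypress"] * 100
h = shannon_alpha(cell)
```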

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Li, H; Gordon, J

    Purpose: To investigate radiotherapy outcomes by incorporating 4DCT-based physiological and tumor elasticity functions for lung cancer patients. Methods: 4DCT images were acquired from 28 lung SBRT patients before radiation treatment. Deformable image registration (DIR) was performed from the end-inhale to the end-exhale phase using a B-Spline-based algorithm (Elastix, an open-source software package). The resultant displacement vector fields (DVFs) were used to calculate a relative Jacobian function (RV) for each patient. The computed functions in the lung and tumor regions represent lung ventilation and tumor elasticity properties, respectively. The 28 patients were divided into two groups: 16 with two-year tumor local control (LC) and 12 with local failure (LF). The ventilation- and elasticity-related RV functions were calculated for each of these patients. Results: The LF patients have larger RV values than the LC patients. The mean RV value in the lung region was 1.15 (±0.67) for the LF patients, higher than 1.06 (±0.59) for the LC patients. In the tumor region, the elasticity-related RV values are 1.2 (±0.97) and 0.86 (±0.64) for the LF and LC patients, respectively. Among the 16 LC patients, 3 have mean RV values greater than 1.0 in the tumors. These tumors were located near the diaphragm, where the displacements are relatively large. RV functions calculated in the tumor were better correlated with treatment outcomes than those calculated in the lung. Conclusion: The ventilation- and elasticity-related RV functions in the lung and tumor regions were calculated from 4DCT images, and the resultant values showed differences between the LC and LF patients. Further investigation of the impact of the displacements on the computed RV is warranted. The results suggest that the RV images might be useful for evaluating treatment outcomes for lung cancer patients.
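The abstract derives RV functions from DVFs but does not define the relative Jacobian explicitly; the standard starting point is the Jacobian determinant of the deformation, det(I + grad u), sketched here in 2D with central differences (a simplification of the 3D voxel-wise computation):

```python
def jacobian_det_2d(ux, uy, spacing=1.0):
    """det(I + grad u) at interior grid points of a 2D displacement field.

    ux, uy: nested lists holding the x- and y-displacement on a regular grid."""
    rows, cols = len(ux), len(ux[0])
    det = [[1.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Central differences of each displacement component
            dux_dx = (ux[i][j + 1] - ux[i][j - 1]) / (2 * spacing)
            dux_dy = (ux[i + 1][j] - ux[i - 1][j]) / (2 * spacing)
            duy_dx = (uy[i][j + 1] - uy[i][j - 1]) / (2 * spacing)
            duy_dy = (uy[i + 1][j] - uy[i - 1][j]) / (2 * spacing)
            det[i][j] = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx
    return det

# Uniform 10% expansion along x (u_x = 0.1*x, u_y = 0) gives det = 1.1 everywhere
ux = [[0.1 * j for j in range(5)] for _ in range(5)]
uy = [[0.0] * 5 for _ in range(5)]
d = jacobian_det_2d(ux, uy)
```

Values above 1 indicate local expansion (inhale-to-exhale volume gain), which is how Jacobian maps are commonly read as ventilation surrogates.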

  18. ARS-Media: A spreadsheet tool for calculating media recipes based on ion-specific constraints

    USDA-ARS?s Scientific Manuscript database

    ARS-Media is an ion solution calculator that uses Microsoft Excel to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Thus, the recipes are generated using ...
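As the abstract notes, generating a salt recipe that hits target ion values is a linear problem; in the exactly determined case it reduces to solving a linear system (ion balance rows, salt columns). The ions, salts, and targets below are hypothetical, and a plain Gaussian elimination stands in for ARS-Media's linear programming solver:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Rows: ions (K+, NO3-, Ca2+); columns: salts (KNO3, Ca(NO3)2, KCl)
# Entries: mol of ion released per mol of salt (hypothetical mini-recipe)
A = [[1, 0, 1],   # K+
     [1, 2, 0],   # NO3-
     [0, 1, 0]]   # Ca2+
b = [5, 7, 1]     # target mmol of each ion
recipe = solve_linear(A, b)  # mmol of each salt
```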

  19. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has been one of the most important topics in photogrammetry. It aims at recovering the position and attitude of the camera at the shooting point. In some cases, however, the observed values used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, effectively avoiding the difficulty of determining initial values that arises when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
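The core of the approach above is the RANSAC loop: repeatedly fit a minimal model to a random sample, count inliers, and keep the best consensus set. As a sketch, a simple 2D line model stands in for the paper's DLT resection model (the data and tolerance are hypothetical):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """RANSAC for y = a*x + b: robust to gross errors in the observations.

    The line model is a stand-in for a minimal DLT solution; the loop
    structure (sample, fit, count inliers, keep best) is the same."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 10 observations on y = 2x + 1 plus 2 gross errors
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -25.0)]
(a, b), inliers = ransac_line(pts)
```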

  20. RVU costing applications.

    PubMed

    Berlin, M F; Faber, B P; Berlin, L M; Budzynski, M R

    1997-11-01

    Relative value unit (RVU) cost accounting, which uses the resource-based relative value scale (RBRVS), can be used to determine the cost of producing given services and to set appropriate physician fees. The calculations derived from RVU costing have additional applications, such as analyzing fee schedules, evaluating the profitability of third-party payer reimbursement, calculating a floor capitation rate, and allocating capitation payments within the group. The ability to produce this information can help group practice administrators determine ways to manage the cost of providing services, set more realistic fees, and negotiate more profitable contracts.
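The basic arithmetic behind RVU costing is a cost-per-RVU rate applied to a service's RVU weight. All figures below are hypothetical illustrations, not from the article:

```python
# Hypothetical annual practice figures
total_practice_cost = 1_200_000.0   # operating cost ($/year)
total_rvus_produced = 30_000.0      # work output (RVUs/year)

cost_per_rvu = total_practice_cost / total_rvus_produced   # $/RVU

# Cost to produce a service worth 1.5 RVUs, and a fee with a 20% margin
service_cost = 1.5 * cost_per_rvu
proposed_fee = service_cost * 1.20
```

Comparing `cost_per_rvu` against a payer's reimbursement per RVU is how the fee-schedule and capitation analyses mentioned above are typically carried out.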

  1. Electromagnetic deep-probing (100-1000 kms) of the Earth's interior from artificial satellites: Constraints on the regional emplacement of crustal resources

    NASA Technical Reports Server (NTRS)

    Hermance, J. F. (Principal Investigator)

    1981-01-01

    A spherical harmonic analysis program is being tested which takes magnetic data in universal time from a set of arbitrarily spaced observatories and calculates a value for the instantaneous magnetic field at any point on the globe. The calculation is done as a least mean-squares fit to a set of spherical harmonics up to any desired order. The program accepts as input the orbital position of a satellite and coordinates it with ground-based magnetic data for a given time. The output is a predicted time series for the magnetic field on the Earth's surface at the (r, theta) position directly under the hypothetically orbiting satellite for the duration of the time period of the input data set. By tracking the surface magnetic field beneath the satellite, narrow-band averaged crosspowers between the spatially coordinated satellite and ground-based data sets are computed. These crosspowers are used to calculate field transfer coefficients with minimum noise distortion. The application of this technique to calculating the vector response function W is discussed.

  2. Quantifying Physician Teaching Productivity Using Clinical Relative Value Units

    PubMed Central

    Yeh, Michael M; Cahill, Daniel F

    1999-01-01

    OBJECTIVE To design and test a customizable system for calculating physician teaching productivity based on clinical relative value units (RVUs). SETTING/PARTICIPANTS A 550-bed community teaching hospital with 11 part-time faculty general internists. DESIGN Academic year 1997–98 educational activities were analyzed with an RVU-based system using teaching value multipliers (TVMs). The TVM is the ratio of the value of a unit of time spent teaching to the equivalent time spent in clinical practice. We assigned TVMs to teaching tasks based on their educational value and complexity. The RVUs of a teaching activity would be equal to its TVM multiplied by its duration and by the regional median clinical RVU production rate. MEASUREMENTS The faculty members' total annual RVUs for teaching were calculated and compared with the RVUs they would have earned had they spent the same proportion of time in clinical practice. MAIN RESULTS For the same proportion of time, the faculty physicians would have generated 29,806 RVUs through teaching or 27,137 RVUs through clinical practice (Absolute difference = 2,669 RVUs; Relative excess = 9.8%). CONCLUSIONS We describe an easily customizable method of quantifying physician teaching productivity in terms of clinical RVUs. This system allows equitable recognition of physician efforts in both the educational and clinical arenas. PMID:10571707
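The RVU formula stated in the abstract (teaching RVUs = TVM x duration x regional median clinical RVU production rate) is simple enough to sketch directly. The activity values in the example are hypothetical; the 29,806 vs. 27,137 comparison reproduces the study's reported totals:

```python
def teaching_rvus(tvm, hours, clinical_rvus_per_hour):
    """Teaching RVUs = TVM x duration x regional median clinical RVU rate."""
    return tvm * hours * clinical_rvus_per_hour

# Hypothetical activity: teaching rounds with TVM 1.2, 100 h/year,
# against a regional clinical rate of 3 RVU/h
rvus = teaching_rvus(1.2, 100, 3.0)

# Relative excess of teaching over clinical RVUs, from the study's totals
relative_excess = (29806 - 27137) / 27137   # ~9.8%
```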

  3. THEORETICAL RESEARCH OF THE OPTICAL SPECTRA AND EPR PARAMETERS FOR Cs2NaYCl6:Dy3+ CRYSTAL

    NASA Astrophysics Data System (ADS)

    Dong, Hui-Ning; Dong, Meng-Ran; Li, Jin-Jin; Li, Deng-Feng; Zhang, Yi

    2013-09-01

    The important material Cs2NaYCl6 doped with rare earth ions has received much attention because of its excellent optical and magnetic properties. Based on the superposition model, this paper studies the crystal field energy levels, the electron paramagnetic resonance (EPR) g factors of Dy3+, and the hyperfine structure constants of the 161Dy3+ and 163Dy3+ isotopes in Cs2NaYCl6 crystal by diagonalizing the 42 × 42 energy matrix. The calculations include the contributions of various admixtures and interactions, such as the J-mixing, the mixtures among states with the same J-value, and the covalence. The calculated results are in reasonable agreement with the observed values and are discussed.

  4. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution.

    PubMed

    Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å(2), 295.1 Å(2), 296.8 Å(2), and 300.1 Å(2); all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Microstructure-Sensitive Extreme Value Probabilities for High Cycle Fatigue of Ni-Base Superalloy IN100 (Preprint)

    DTIC Science & Technology

    2009-03-01

    transition fatigue regimes; however, microplasticity (i.e., heterogeneous plasticity at the scale of microstructure) is relevant to understanding fatigue ... and Socie [57] considered the effect of microplasticity ... considers the local stress state as affected by intergranular interactions and microplasticity. For the calculations given below, the volumes over which ...

  6. Using 3d Bim Model for the Value-Based Land Share Calculations

    NASA Astrophysics Data System (ADS)

    Çelik Şimşek, N.; Uzun, B.

    2017-11-01

    According to the Turkish condominium ownership system, 3D physical buildings and their condominium units are registered in the condominium ownership books via 2D survey plans. Currently, the 2D representation of 3D physical objects causes inaccurate and deficient implementations in the determination of land shares. Condominium ownership and easement rights are established with a clear indication of land shares (condominium ownership law, article no. 3), so the land share of each condominium unit has to be determined including the value differences among the condominium units. However, the main problem is that the land share has often been determined on an area basis from the project before construction of the building. The objective of this study is to propose a new approach to value-based land share calculations for condominium units subject to condominium ownership. The current approaches to determining land shares, and their failures, are examined, and the factors that affect the values of the condominium units are determined according to the legal decisions. This study shows that 3D BIM models can provide important approaches for the valuation problems in the determination of land shares.
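The core of a value-based allocation is proportional: each unit's land share is its appraised value over the total building value. A minimal sketch; the unit names, values, and share denominator are all hypothetical, not from the paper:

```python
def value_based_land_shares(unit_values, total_share=24_000):
    """Allocate land shares to condominium units in proportion to unit value.

    unit_values: {unit_id: appraised value}; total_share is the parcel's
    share denominator (a hypothetical choice for this example)."""
    total_value = sum(unit_values.values())
    return {u: round(total_share * v / total_value) for u, v in unit_values.items()}

# Hypothetical units with equal area but different value factors (floor, view, orientation)
shares = value_based_land_shares({"A1": 100_000, "A2": 120_000, "A3": 80_000})
```

The 3D BIM model's role in the proposed approach would be to supply the value-affecting attributes per unit; area-based allocation is the special case where all value factors are equal.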

  7. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold value in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face are acquired and then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through triangulation. Using this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
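The triangulation step above follows the standard rectified-stereo relation Z = f * B / d; the height can then be recovered from the depth and the tilt angles to the head and feet. The geometry below is a simplified sketch with hypothetical camera parameters, not the paper's exact formulation:

```python
from math import tan, radians

def stereo_depth(baseline_m, focal_px, disparity_px):
    """Depth by triangulation for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def person_height(depth_m, tilt_head_deg, tilt_feet_deg):
    """Height from depth and the tilt angles to the head and the feet
    (flat-ground, pinhole simplification)."""
    return depth_m * (tan(radians(tilt_head_deg)) - tan(radians(tilt_feet_deg)))

# Hypothetical rig: 12 cm baseline, 800 px focal length, 24 px disparity
z = stereo_depth(0.12, 800.0, 24.0)      # 4.0 m
h = person_height(z, 12.0, -13.0)        # roughly 1.77 m
```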

  8. Improvement of fire-tube boilers calculation methods by the numerical modeling of combustion processes and heat transfer in the combustion chamber

    NASA Astrophysics Data System (ADS)

    Komarov, I. I.; Rostova, D. M.; Vegera, A. N.

    2017-11-01

    This paper presents the results of a study determining the degree and nature of the influence of burner-unit operating conditions and flare geometry on heat transfer in the combustion chamber of fire-tube boilers. Changes in the outlet gas temperature and in the radiant and convective specific heat flow rates under variation of the flare expansion angle and length were determined using the Ansys CFX software package. The differences between the total heat flow and the bulk gas temperature at the flue-tube outlet calculated using the known thermal calculation methods and those obtained in the mathematical simulation were determined. Based on the results of the study, shortcomings of the existing calculation methods were identified and areas for their improvement were outlined.

  9. Linear canonical transformations of coherent and squeezed states in the Wigner phase space. II - Quantitative analysis

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1989-01-01

    It is possible to calculate expectation values and transition probabilities from the Wigner phase-space distribution function. Based on the canonical transformation properties of the Wigner function, an algorithm is developed for calculating these quantities in quantum optics for coherent and squeezed states. It is shown that the expectation value of a dynamical variable can be written in terms of the vacuum expectation value of the canonically transformed variable. Parallel-axis theorems are established for the photon number and its variance. It is also shown that the transition probability between two squeezed states can be reduced to that of the transition from one squeezed state to vacuum.

  10. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR), first reported by Tossici-Bolt et al., is a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration in the striatum to the non-specific binding concentration in the whole brain other than the striatum. The non-specific binding concentration is calculated based on a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI for the non-specific binding concentration (the reference region) and calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by two methods: the threshold defining the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated by each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
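The SBR definition above (specific striatal binding over non-specific reference binding) can be sketched in its simplest concentration form; the full Tossici-Bolt method is volumetric, so this is a simplification, and the count concentrations are hypothetical:

```python
def specific_binding_ratio(striatal_conc, reference_conc):
    """Simplified SBR: (striatal - reference) / reference count concentration.

    The reference (non-specific) concentration comes from the whole-brain ROI
    eroded 20 mm inside the thresholded outer contour."""
    return (striatal_conc - reference_conc) / reference_conc

# Hypothetical count concentrations (counts per voxel)
sbr = specific_binding_ratio(striatal_conc=30.0, reference_conc=6.0)
```

This makes clear why the threshold matters: a poorly defined outer contour shifts `reference_conc` and moves the SBR directly.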

  11. Adjacent bin stability evaluating for feature description

    NASA Astrophysics Data System (ADS)

    Nie, Dongdong; Ma, Qinyong

    2018-04-01

    A recent study improves descriptor performance by accumulating stability votes over all scale pairs to compose the local descriptor. We argue that the stability of a bin depends on the differences across adjacent scale pairs more than on the differences across all scale pairs, and we compose a new local descriptor based on this hypothesis. First, a series of SIFT descriptors is extracted from multiple scales. Then the difference of each bin across adjacent scales is calculated; the stability value of a bin is calculated from these differences and accumulated to compose the final descriptor. The performance of the proposed method is evaluated on two popular matching datasets and compared with other state-of-the-art works. Experimental results show that the proposed method performs satisfactorily.
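The adjacent-scale idea above can be sketched directly: for each bin, take the absolute differences between consecutive scales only, and turn them into a stability score. The specific voting rule (inverse of the mean adjacent difference) is an assumption for illustration, and the toy descriptors are hypothetical:

```python
def stability_descriptor(descriptors):
    """Per-bin stability from adjacent-scale differences.

    descriptors: list of equal-length histograms, one per scale.
    Stability rule (an assumption): 1 / (1 + mean absolute difference
    across adjacent scale pairs), so unchanged bins score 1.0."""
    n_scales, n_bins = len(descriptors), len(descriptors[0])
    stability = []
    for b in range(n_bins):
        diffs = [abs(descriptors[s + 1][b] - descriptors[s][b])
                 for s in range(n_scales - 1)]
        stability.append(1.0 / (1.0 + sum(diffs) / len(diffs)))
    return stability

# Toy SIFT-like bins over 3 scales: bin 0 is stable, bin 1 drifts
desc = [[0.2, 0.8], [0.2, 0.4], [0.2, 0.0]]
s = stability_descriptor(desc)
```

Note that an all-pairs scheme would also penalize bin 1, but it mixes in distant-scale comparisons; restricting to adjacent pairs is exactly the hypothesis the abstract advances.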

  12. Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de

    2016-02-07

    We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (−6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (−15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while the variant with neutral fragments is numerically more efficient at reasonable accuracy.

  13. An X-ray fluorescence spectrometer and its applications in materials studies

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Han, K. S.

    1977-01-01

    An X-ray fluorescence system based on a Co(57) gamma-ray source has been developed. The system was used to calculate the atomic percentages of iron implanted in titanium targets. Measured intensities of Fe (k-alpha + k-beta) and Ti (k-alpha + k-beta) X-rays from the Fe-Ti targets are in good agreement with the calculated values based on photoelectric cross sections of Ti and Fe for the Co(57) gamma rays.

  14. An ecological compensation standard based on emergy theory for the Xiao Honghe River Basin.

    PubMed

    Guan, Xinjian; Chen, Moyu; Hu, Caihong

    2015-01-01

    The calculation of an ecological compensation standard is an important but difficult aspect of current ecological compensation research. In this paper, the factors affecting the ecological-economic system in the Xiao Honghe River Basin, China, including the flows of energy, materials, and money, were calculated using the emergy analysis method. A consideration of the relationships between the ecological-economic value of water resources and ecological compensation allowed the ecological-economic value to be calculated. On this basis, the amount of water needed for dilution was used to develop a calculation model for the ecological compensation standard of the basin. Using the Xiao Honghe River Basin as an example, the value of water resources and the ecological compensation standard were calculated using this model according to the emission levels of the main pollutant in the basin, chemical oxygen demand. The compensation standards calculated for the research areas in Xipin, Shangcai, Pingyu, and Xincai were 34.91 yuan/m3, 32.97 yuan/m3, 35.99 yuan/m3, and 34.70 yuan/m3, respectively; such research output would help to generate and support new approaches to the long-term ecological protection of the basin and improvement of the ecological compensation system.

  15. Determining Risk of Falls in Community Dwelling Older Adults: A Systematic Review and Meta-analysis Using Posttest Probability.

    PubMed

    Lusardi, Michelle M; Fritz, Stacy; Middleton, Addie; Allison, Leslie; Wingood, Mariana; Phillips, Emma; Criss, Michelle; Verma, Sangita; Osborne, Jackie; Chui, Kevin K

    Falls and their consequences are significant concerns for older adults, caregivers, and health care providers. Identification of fall risk is crucial for appropriate referral to preventive interventions. Falls are multifactorial; no single measure is an accurate diagnostic tool. There is limited information on which history question, self-report measure, or performance-based measure, or combination of measures, best predicts future falls. First, to evaluate the predictive ability of history questions, self-report measures, and performance-based measures for assessing fall risk of community-dwelling older adults by calculating and comparing posttest probability (PoTP) values for individual test/measures. Second, to evaluate usefulness of cumulative PoTP for measures in combination. To be included, a study must have used fall status as an outcome or classification variable, have a sample size of at least 30 ambulatory community-living older adults (≥65 years), and track falls occurrence for a minimum of 6 months. Studies in acute or long-term care settings, as well as those including participants with significant cognitive or neuromuscular conditions related to increased fall risk, were excluded. Searches of Medline/PubMED and Cumulative Index of Nursing and Allied Health (CINAHL) from January 1990 through September 2013 identified 2294 abstracts concerned with fall risk assessment in community-dwelling older adults. Because the number of prospective studies of fall risk assessment was limited, retrospective studies that classified participants (faller/nonfallers) were also included. Ninety-five full-text articles met inclusion criteria; 59 contained necessary data for calculation of PoTP. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) was used to assess each study's methodological quality. Study design and QUADAS score determined the level of evidence. 
Data for calculation of sensitivity (Sn), specificity (Sp), likelihood ratios (LR), and PoTP values were available for 21 of 46 measures used as search terms. An additional 73 history questions, self-report measures, and performance-based measures were used in included articles; PoTP values could be calculated for 35. Evidence tables including PoTP values were constructed for 15 history questions, 15 self-report measures, and 26 performance-based measures. Recommendations for clinical practice were based on consensus. Variations in study quality, procedures, and statistical analyses challenged data extraction, interpretation, and synthesis. There were insufficient data for calculation of PoTP values for 63 of 119 tests. No single test/measure demonstrated strong PoTP values. Five history questions, 2 self-report measures, and 5 performance-based measures may have clinical usefulness in assessing risk of falling on the basis of cumulative PoTP. Berg Balance Scale score (≤50 points), Timed Up and Go times (≥12 seconds), and 5 times sit-to-stand times (≥12 seconds) are currently the most evidence-supported functional measures to determine individual risk of future falls. Shortfalls identified during review will direct researchers to address knowledge gaps.
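Posttest probability is computed from pretest probability and a test's likelihood ratio via the odds form of Bayes' theorem, which is the calculation underlying the PoTP values above. The pretest prevalence and LR in the example are hypothetical:

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """PoTP from pretest probability and a likelihood ratio (odds form of Bayes).

    pretest odds = p / (1 - p); posttest odds = pretest odds * LR;
    PoTP = posttest odds / (1 + posttest odds)."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical: 30% pretest fall probability, positive test with LR+ = 3.0
potp = posttest_probability(0.30, 3.0)   # 0.5625
```

Cumulative PoTP for a battery of measures is obtained by feeding each test's posttest probability in as the next test's pretest probability (assuming conditional independence of the tests).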

  16. Dose Calculation For Accidental Release Of Radioactive Cloud Passing Over Jeddah

    NASA Astrophysics Data System (ADS)

    Alharbi, N. D.; Mayhoub, A. B.

    2011-12-01

    For the evaluation of doses after a reactor accident, in particular the inhalation dose, a thorough knowledge of the concentrations of the various radionuclides in air during the passage of the plume is required. In this paper we present an application of the Gaussian Plume Model (GPM) to calculate the atmospheric dispersion and airborne radionuclide concentration resulting from a radioactive cloud over the city of Jeddah (KSA). The radioactive cloud is assumed to be emitted from a reactor of 10 MW power in a postulated accidental release. Committed effective doses (CEDs) to the public at different distances from the source to the receptor are calculated. The calculations were based on meteorological conditions and data for the Jeddah site: Pasquill atmospheric stability class B and a wind speed of 2.4 m/s at 10 m height in the N direction. The residence times of some radionuclides considered in this study were calculated. The results indicate that the dose values first increase with distance, reach a maximum value, and then gradually decrease. The total dose received by humans is estimated by using the estimated residence-time values of each radioactive pollutant at different distances.
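The Gaussian Plume Model used above has a standard closed form with a ground-reflection term. A minimal sketch; the release rate, stack height, and dispersion parameters are illustrative placeholders, not the study's inputs:

```python
from math import exp, pi

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration with ground reflection.

    Q: source strength (e.g., Bq/s); u: wind speed (m/s); y: crosswind
    offset (m); z: receptor height (m); H: effective release height (m);
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance
    of interest (looked up from stability-class curves, supplied directly here)."""
    lateral = exp(-y**2 / (2 * sigma_y**2))
    vertical = (exp(-(z - H)**2 / (2 * sigma_z**2))
                + exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground reflection
    return Q / (2 * pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical release: 1e9 Bq/s at H = 30 m, u = 2.4 m/s,
# centerline ground-level receptor with illustrative sigmas
c = gaussian_plume(Q=1e9, u=2.4, y=0.0, z=0.0, H=30.0, sigma_y=105.0, sigma_z=62.0)
```

The "increase, peak, then decay with distance" behavior reported in the abstract falls out of this form: sigmas grow with distance, so an elevated plume first reaches the ground and then dilutes.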

  17. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
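A minimal sketch of the sampling-based idea: resample one subject's value per time point to form pseudoprofiles, then summarize the spread of AUC ratios. This is an assumption-laden stand-in (function names are hypothetical), not the paper's 2-phase algorithm.

```python
import random

def trapz_auc(times, conc):
    """Trapezoidal area under the concentration-time curve."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

def bootstrap_auc_ratio(times, tissue, plasma, n_boot=1000, seed=0):
    """Pseudoprofile-style bootstrap of the tissue-to-plasma AUC ratio.

    tissue[i] and plasma[i] are lists of the subjects' concentrations at
    times[i] (one sample per subject, as in the sparse design). Returns
    (median ratio, (2.5th percentile, 97.5th percentile)).
    """
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        t_prof = [rng.choice(obs) for obs in tissue]  # one subject per time point
        p_prof = [rng.choice(obs) for obs in plasma]
        ratios.append(trapz_auc(times, t_prof) / trapz_auc(times, p_prof))
    ratios.sort()
    return (ratios[n_boot // 2],
            (ratios[int(0.025 * n_boot)], ratios[int(0.975 * n_boot)]))
```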

  18. Computational Investigation of the pH Dependence of Loop Flexibility and Catalytic Function in Glycoside Hydrolases*

    PubMed Central

    Bu, Lintao; Crowley, Michael F.; Himmel, Michael E.; Beckham, Gregg T.

    2013-01-01

    Cellulase enzymes cleave glycosidic bonds in cellulose to produce cellobiose via either retaining or inverting hydrolysis mechanisms, which are significantly pH-dependent. Many fungal cellulases function optimally at pH ∼5, and their activities decrease dramatically at higher or lower pH. To understand the molecular-level implications of pH in cellulase structure, we use a hybrid, solvent-based, constant pH molecular dynamics method combined with pH-based replica exchange to determine the pKa values of titratable residues of a glycoside hydrolase (GH) family 6 cellobiohydrolase (Cel6A) and a GH family 7 cellobiohydrolase (Cel7A) from the fungus Hypocrea jecorina. For both enzymes, we demonstrate that a bound substrate significantly affects the pKa values of the acid residues at the catalytic center. The calculated pKa values of catalytic residues confirm their proposed roles from structural studies and are consistent with the experimentally measured apparent pKa values. Additionally, GHs are known to impart a strained pucker conformation in carbohydrate substrates in active sites for catalysis, and results from free energy calculations combined with constant pH molecular dynamics suggest that the correct ring pucker is stable near the optimal pH for both Cel6A and Cel7A. Much longer molecular dynamics simulations of Cel6A and Cel7A with fixed protonation states based on the calculated pKa values suggest that pH affects the flexibility of tunnel loops, which likely affects processivity and substrate complexation. Taken together, this work demonstrates several molecular-level effects of pH on GH enzymes important for cellulose turnover in the biosphere and relevant to biomass conversion processes. PMID:23504310

  19. Calculation of global carbon dioxide emissions: Review of emission factors and a new approach taking fuel quality into consideration

    NASA Astrophysics Data System (ADS)

    Hiete, Michael; Berner, Ulrich; Richter, Otto

    2001-03-01

Anthropogenic carbon dioxide emissions resulting from fossil fuel consumption play a major role in the current debate on climate change. Carbon dioxide emissions are calculated on the basis of a carbon dioxide emission factor (CEF) for each type of fuel. Published CEFs are reviewed in this paper. It was found that for nearly all CEFs, fuel quality is not adequately taken into account. This is especially true in the case of the CEFs for coal. Published CEFs are often based on generalized assumptions and inexact conversions. In particular, conversions from gross calorific value to net calorific value were examined. A new method for determining CEFs as a function of calorific value (for coal, peat, and natural gas) and specific gravity (for crude oil) is presented that permits CEFs to be calculated for specific fuel qualities. A review of proportions of fossil fuels that remain unoxidized owing to incomplete combustion or inclusion in petrochemical products, etc. (stored carbon), shows that these figures need to be updated and checked for their applicability on a global scale, since they are mostly based on U.S. data.

  20. pKa predictions for proteins, RNAs, and DNAs with the Gaussian dielectric function using DelPhi pKa.

    PubMed

    Wang, Lin; Li, Lin; Alexov, Emil

    2015-12-01

We developed a Poisson-Boltzmann based approach to calculate the pKa values of protein ionizable residues (Glu, Asp, His, Lys and Arg), nucleotides of RNA and single-stranded DNA. Two novel features were utilized: the dielectric properties of the macromolecules and water phase were modeled via the smooth Gaussian-based dielectric function in DelPhi, and the corresponding electrostatic energies were calculated without defining the molecular surface. We tested the algorithm by calculating pKa values for more than 300 residues from 32 proteins from the PPD dataset and achieved an overall RMSD of 0.77. In particular, an RMSD of 0.55 was achieved for surface residues, and an RMSD of 1.1 for buried residues. The approach was also found capable of capturing the large pKa shifts of various single point mutations in staphylococcal nuclease (SNase) from the pKa-cooperative dataset, resulting in an overall RMSD of 1.6 for this set of pKa's. Investigations showed that predictions for most of the buried mutant residues of SNase could be improved by using higher dielectric constant values. Furthermore, an option to generate different hydrogen positions also improves pKa predictions for buried carboxyl residues. Finally, the pKa calculations on two RNAs demonstrated the capability of this approach for other types of biomolecules. © 2015 Wiley Periodicals, Inc.

  1. Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Jerden, James L.; Ebert, William L.

The primary purpose of this report is to describe the strategy for coupling three process-level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included, but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations, and then represent the steady-state [H2O2] in terms of an “effective instantaneous or conditional” generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as changes in the nature of the fuel surface occur. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with the RM indicate significant changes in G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate used fuel degradation rates for a wide range of disposal environments to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM).
The RM calculates the concentration of species generated at any specific time and location from the surface of the fuel. Several options being considered for coupling the RM and MPM are described in the report. Different options have advantages and disadvantages based on the extent of coding that would be required and the ease of use of the final product.

  2. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes, and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model shows the largest bias during the austral summer, indicating that it is less accurate in timing the formation and dissipation of clouds during summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Second, the model's sensitivity to changes in the microphysics, cumulus parameterization, and land surface models is evaluated for the calculated GHI values. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps improve the calculation of the densities, size, and lifetime of the cloud droplets, whereas the single-moment schemes predict only the mass, for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.
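A relative bias of the kind quoted above can be computed as the total model-minus-observation difference as a percentage of the observed total. This is one common convention, assumed here since the abstract does not spell out its exact formula:

```python
def relative_bias_percent(model, obs):
    """Total (model - observation) bias as a percentage of the observed
    total GHI; an assumed, commonly used definition."""
    return 100.0 * sum(m - o for m, o in zip(model, obs)) / sum(obs)
```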

  3. 17 CFR 270.30b1-6T - Weekly portfolio report for certain money market funds.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...; (I) The amortized cost value; and (J) In the case of a tax-exempt security, whether there is a demand... the fund's stable net asset value per share or stable price per share pursuant to § 270.2a-7(c)(1...) Market-based NAV means a money market fund's net asset value per share calculated using available market...

  4. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design, such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. The aim was to apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit, corresponding to a 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with a return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial, which might affect the accuracy of our uncertainty estimation.
Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
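The design-selection logic can be sketched as follows: for each candidate design, the expected net benefit (ENB) is the population-scaled expected value of sample information minus the trial's cost, and the design with the largest ENB is preferred. All figures here are hypothetical, not the study's numbers.

```python
def expected_net_benefit(evsi_per_person, population, trial_cost):
    """ENB of a candidate design: expected value of sample information
    scaled to the beneficiary population, minus the trial cost."""
    return evsi_per_person * population - trial_cost

def return_on_investment(evsi_per_person, population, trial_cost):
    """Net benefit per unit of trial cost, in percent."""
    return 100.0 * expected_net_benefit(evsi_per_person, population,
                                        trial_cost) / trial_cost

# compare two hypothetical designs; the larger ENB wins
designs = {
    "220/arm": (1.50, 200_000, 120_000),
    "388/arm": (1.55, 200_000, 200_000),
}
best = max(designs, key=lambda d: expected_net_benefit(*designs[d]))
```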

  5. SU-E-J-32: Calypso(R) and Laser-Based Localization Systems Comparison for Left-Sided Breast Cancer Patients Using Deep Inspiration Breath Hold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, S; Kaurin, D; Sweeney, L

    2014-06-01

Purpose: Our institution uses a manual laser-based system for primary localization and verification during radiation treatment of left-sided breast cancer patients using deep inspiration breath hold (DIBH). This primary system was compared with sternum-placed Calypso(R) beacons (Varian Medical Systems, CA). Only intact breast patients are considered for this analysis. Methods: During computed tomography (CT) simulation, patients have BB and Calypso(R) surface beacons positioned sternally and marked for free-breathing and DIBH CTs. During dosimetry planning, BB longitudinal displacement between the free-breathing and DIBH CTs determines the laser mark (BH mark) location. Calypso(R) beacon locations from the DIBH CT are entered at the Tracking Station. During Linac simulation and treatment, patients inhale until the cross-hair and/or lasers coincide with the BH mark, which can be seen using our high quality cameras (Pelco, CA). Daily Calypso(R) displacement values (difference from the DIBH-CT-based plan) are recorded. The displacement mean and standard deviation were calculated for each patient (77 patients, 1845 sessions). An aggregate mean and standard deviation were calculated, weighted by the number of patient fractions. Some patients were shifted based on MV ports; a second data set was calculated with Calypso(R) values corrected by these shifts. Results: Mean displacement values indicate agreement within 1±3mm, with improvement for the shifted data (Table). Conclusion: Both the unshifted and shifted data sets show the Calypso(R) system coincides with the laser system within 1±3mm, demonstrating that either localization/verification system will result in similar clinical outcomes. Displacement value uncertainty is unilaterally reduced when shifts are taken into account.
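The fraction-weighted aggregation of per-patient statistics described above can be sketched as below. Pooling the per-patient standard deviations this way (law of total variance, ignoring any between-patient covariance) is a simplifying assumption; the abstract does not state its exact pooling formula.

```python
def weighted_aggregate(means, stds, n_fractions):
    """Aggregate per-patient displacement statistics, weighting each
    patient by the number of treatment fractions. Returns the pooled
    (mean, standard deviation) across all fractions."""
    total = sum(n_fractions)
    mean = sum(m * n for m, n in zip(means, n_fractions)) / total
    # within-patient variance plus between-patient spread about the mean
    var = sum(n * (s**2 + (m - mean)**2)
              for m, s, n in zip(means, stds, n_fractions)) / total
    return mean, var ** 0.5
```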

  6. Protein dielectric constants determined from NMR chemical shift perturbations.

    PubMed

    Kukic, Predrag; Farrell, Damien; McIntosh, Lawrence P; García-Moreno E, Bertrand; Jensen, Kristine Steen; Toleikis, Zigmantas; Teilum, Kaare; Nielsen, Jens Erik

    2013-11-13

    Understanding the connection between protein structure and function requires a quantitative understanding of electrostatic effects. Structure-based electrostatic calculations are essential for this purpose, but their use has been limited by a long-standing discussion on which value to use for the dielectric constants (ε(eff) and ε(p)) required in Coulombic and Poisson-Boltzmann models. The currently used values for ε(eff) and ε(p) are essentially empirical parameters calibrated against thermodynamic properties that are indirect measurements of protein electric fields. We determine optimal values for ε(eff) and ε(p) by measuring protein electric fields in solution using direct detection of NMR chemical shift perturbations (CSPs). We measured CSPs in 14 proteins to get a broad and general characterization of electric fields. Coulomb's law reproduces the measured CSPs optimally with a protein dielectric constant (ε(eff)) from 3 to 13, with an optimal value across all proteins of 6.5. However, when the water-protein interface is treated with finite difference Poisson-Boltzmann calculations, the optimal protein dielectric constant (ε(p)) ranged from 2 to 5 with an optimum of 3. It is striking how similar this value is to the dielectric constant of 2-4 measured for protein powders and how different it is from the ε(p) of 6-20 used in models based on the Poisson-Boltzmann equation when calculating thermodynamic parameters. Because the value of ε(p) = 3 is obtained by analysis of NMR chemical shift perturbations instead of thermodynamic parameters such as pK(a) values, it is likely to describe only the electric field and thus represent a more general, intrinsic, and transferable ε(p) common to most folded proteins.

  7. Estimated net acid excretion inversely correlates with urine pH in vegans, lacto-ovo vegetarians, and omnivores.

    PubMed

    Ausman, Lynne M; Oliver, Lauren M; Goldin, Barry R; Woods, Margo N; Gorbach, Sherwood L; Dwyer, Johanna T

    2008-09-01

Diet affects urine pH and acid-base balance. Both excess acid/alkaline ash (EAA) and estimated net acid excretion (NAE) calculations have been used to estimate the effects of diet on urine pH. This study's goal was to determine whether free-living vegans, lacto-ovo vegetarians, and omnivores have increasingly acidic urine, and to assess the ability of EAA and estimated NAE calculations to predict urine pH. This study used a cross-sectional design. This study assessed urine samples of 10 vegan, 16 lacto-ovo vegetarian, and 16 healthy omnivorous women in the Boston metropolitan area. Six 3-day food records from each dietary group were analyzed for EAA content and estimated NAE, and correlations with measured urine pH were calculated. The mean (+/- SD) urine pH was 6.15 +/- 0.40 for vegans, 5.90 +/- 0.36 for lacto-ovo vegetarians, and 5.74 +/- 0.21 for omnivores (analysis of variance, P = .013). Calculated EAA values were not significantly different among the three groups, whereas mean estimated NAE values were significantly different: 17.3 +/- 14.5 mEq/day for vegans, 31.3 +/- 8.5 mEq/day for lacto-ovo vegetarians, and 42.6 +/- 13.2 mEq/day for omnivores (analysis of variance, P = .01). The average deattenuated correlation between urine pH and EAA was 0.333; this value was -0.768 for estimated NAE and urine pH, with a regression equation of pH = 6.33 - 0.014 NAE (P = .02, r = -0.54). Habitual diet and estimated NAE calculations indicate the probable ranking of urine pH by dietary groups, and may be used to determine the likely acid-base status of an individual; EAA calculations were not predictive of urine pH.
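The reported regression can be applied directly; a one-line sketch (the function name is ours):

```python
def predict_urine_ph(nae_meq_per_day):
    """Urine pH predicted from estimated net acid excretion using the
    regression reported in the abstract: pH = 6.33 - 0.014 * NAE."""
    return 6.33 - 0.014 * nae_meq_per_day

# the group-mean NAE values from the abstract reproduce the observed
# ordering: vegans (~17.3) > lacto-ovo (~31.3) > omnivores (~42.6)
```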

  8. Development of a Fragment-Based in Silico Profiler for Michael Addition Thiol Reactivity.

    PubMed

    Ebbrell, David J; Madden, Judith C; Cronin, Mark T D; Schultz, Terry W; Enoch, Steven J

    2016-06-20

The Adverse Outcome Pathway (AOP) paradigm details the existing knowledge that links the initial interaction between a chemical and a biological system, termed the molecular initiating event (MIE), through a series of intermediate events, to an adverse effect. An important example of a well-defined MIE is the formation of a covalent bond between a biological nucleophile and an electrophilic compound. This particular MIE has been associated with various toxicological end points such as acute aquatic toxicity, skin sensitization, and respiratory sensitization. This study investigated the calculated parameters required to predict the rate of chemical bond formation (reactivity) for a dataset of Michael acceptors. Reactivity of these compounds toward glutathione was predicted using a combination of a calculated activation energy value (Eact), obtained from density functional theory (DFT) calculations at the B3LYP/6-31+G(d) level of theory, and solvent-accessible surface area (SAS) values at the α-carbon. To further develop the method, a fragment-based algorithm was developed, enabling the reactivity to be predicted for Michael acceptors without the need to perform the time-consuming DFT calculations. Results showed the developed fragment method was successful in predicting the reactivity of the Michael acceptors, excluding two sets of chemicals: volatile esters with an extended substituent at the β-carbon and chemicals containing a conjugated benzene ring as part of the polarizing group. Additionally, the study demonstrated the ease with which the approach can be extended to other chemical classes by the calculation of additional fragments and their associated Eact and SAS values. The resulting method is likely to be of use in regulatory toxicology tools where an understanding of covalent bond formation as a potential MIE is important within the AOP paradigm.

  9. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
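The core idea of weighting samples by their contribution can be illustrated with a bare-bones self-normalized importance sampling estimator. This is a generic sketch of the technique, not the paper's prefactor-free, time-dependent SC-IVR sampling function; all names are ours.

```python
import random

def importance_sample_mean(f, weight, sample, n=20000, seed=1):
    """Self-normalized importance sampling estimate of the average of f
    under a target whose unnormalized density is `weight`, drawing
    candidate points from the proposal `sample`."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sample(rng)      # draw from the proposal distribution
        w = weight(x)        # reweight toward the target
        num += w * f(x)
        den += w
    return num / den
```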

  10. Thermodynamics of surface defects at the aspirin/water interface

    NASA Astrophysics Data System (ADS)

    Schneider, Julian; Zheng, Chen; Reuter, Karsten

    2014-09-01

We present a simulation scheme to calculate defect formation free energies at a molecular crystal/water interface based on force-field molecular dynamics simulations. To this end, we adopt and modify existing approaches for calculating binding free energies of biological ligand/receptor complexes to make them applicable to common surface defects, such as step edges and kink sites. We obtain statistically accurate and reliable free energy values for the aspirin/water interface, which can be applied to estimate the distribution of defects using well-established thermodynamic relations. As a showcase, we calculate the free energy of dissolving molecules from kink sites at the interface. This free energy can be related to the solubility concentration, and we obtain solubility values in excellent agreement with experimental results.
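The relation between a kink-site dissolution free energy and solubility can be sketched with the standard Boltzmann form c_s ∝ exp(-ΔG/kT); the reference-concentration prefactor and the numbers are placeholders, not the paper's values.

```python
import math

def solubility_from_kink_free_energy(delta_g_ev, temperature_k=300.0):
    """Solubility concentration relative to a reference concentration,
    from the free energy (in eV) of dissolving a molecule from a kink
    site: c_s / c_ref = exp(-dG / kT). The reference prefactor is set
    to 1 here as an illustrative assumption."""
    k_b = 8.617333262e-5  # Boltzmann constant in eV/K
    return math.exp(-delta_g_ev / (k_b * temperature_k))
```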

  11. An accurate density functional theory based estimation of pK(a) values of polar residues combined with experimental data: from amino acids to minimal proteins.

    PubMed

    Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru

    2012-03-28

    We report a scheme for estimating the acid dissociation constant (pK(a)) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pK(a) values of variously sized molecules ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive a semiquantitative pK(a) value of specific chemical groups and discuss the influence of the surroundings on the pK(a) values. As applications, we have derived the pK(a) value of the side chain of an amino acid and almost reproduced the experimental value. By using our computing schemes, we showed the influence of hydrogen bonds on the pK(a) values in the case of tripeptides, which decreases the pK(a) value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pK(a) values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pK(a) values of adjacent serines in the tryptophan cage; the pK(a) value of the OH group of Ser13 exposed to bulk water is 14.69, whereas that of Ser14 not exposed to bulk water is 20.80 because of the internal hydrogen bonds.
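Schemes of this kind rest on the standard thermodynamic relation between the deprotonation free energy and pKa; a minimal sketch, which omits the per-scheme offset parameter that the abstract fits to small reference molecules:

```python
import math

def pka_from_free_energy(delta_g_kcal_per_mol, temperature_k=298.15):
    """pKa from the deprotonation free energy via the standard relation
    pKa = dG / (RT ln 10). The reference-molecule correction term used
    in the paper's scheme is deliberately omitted."""
    R = 1.98720425864083e-3  # gas constant in kcal/(mol K)
    return delta_g_kcal_per_mol / (R * temperature_k * math.log(10))
```

At 298.15 K, RT ln 10 is roughly 1.36 kcal/mol, so each 1.36 kcal/mol of deprotonation free energy shifts the pKa by about one unit.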

  12. Modeling and Ab initio Calculations of Thermal Transport in Si-Based Clathrates and Solar Perovskites

    NASA Astrophysics Data System (ADS)

    He, Yuping

    2015-03-01

We present calculations of the thermal transport coefficients of Si-based clathrates and solar perovskites, as obtained from ab initio calculations and models, with all input parameters derived from first principles. We elucidated the physical mechanisms responsible for the measured low thermal conductivity in Si-based clathrates and predicted their electronic properties and mobilities, which were later confirmed experimentally. We also predicted that by appropriately tuning the carrier concentration, the thermoelectric figure of merit of Sn- and Pb-based perovskites may reach values ranging between 1 and 2, which could possibly be further increased by optimizing the lattice thermal conductivity through engineering perovskite superlattices. Work done in collaboration with Prof. G. Galli, and supported by DOE/BES Grant No. DE-FG0206ER46262.

  13. Elastic-Plastic Fracture Mechanics Analysis of Critical Flaw Size in ARES I-X Flange-to-Skin Welds

    NASA Technical Reports Server (NTRS)

    Chell, G. Graham; Hudak, Stephen J., Jr.

    2008-01-01

NASA's Ares I Upper Stage Simulator (USS) is being fabricated from welded A516 steel. In order to ensure the structural integrity of these welds, it is of interest to calculate the critical initial flaw size (CIFS) to establish rational inspection requirements. The CIFS is in turn dependent on the critical final flaw size (CFS), as well as fatigue flaw growth resulting from transportation, handling and service-induced loading. These calculations were made using linear elastic fracture mechanics (LEFM), which is thought to be conservative because it is based on a lower-bound, so-called elastic, fracture toughness determined from tests that displayed significant plasticity. Nevertheless, there was still concern that the yield-magnitude stresses generated in the flange-to-skin weld by the combination of axial stresses due to axial forces, fit-up stresses, and weld residual stresses could give rise to significant flaw-tip plasticity, which might render the LEFM results non-conservative. The objective of the present study was to employ elastic-plastic fracture mechanics (EPFM) to determine CFS values, and then compare these values to CFS values evaluated using LEFM. CFS values were calculated for twelve cases involving surface and embedded flaws, EPFM analyses with and without plastic shakedown of the stresses, LEFM analyses, and various welding residual stress distributions. For the cases examined, the computed CFS values based on elastic analyses were the smallest in all instances where the failures were predicted to be controlled by the fracture toughness. However, in certain cases, the CFS values predicted by the elastic-plastic analyses were smaller than those predicted by the elastic analyses; in these cases the failure criteria were determined by a breakdown in stress intensity factor validity limits for deep flaws (a greater than 0.90t), rather than by the fracture toughness.
Plastic relaxation of stresses accompanying shakedown always increases the calculated CFS values compared to the CFS values determined without shakedown. Thus, it is conservative to ignore shakedown effects.

  14. Cable Overheating Risk Warning Method Based on Impedance Parameter Estimation in Distribution Network

    NASA Astrophysics Data System (ADS)

    Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao

    2017-05-01

Cable overheating reduces the cable insulation level, speeds up insulation aging, and can even cause short-circuit faults, so overheating risk identification and warning are necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. First, a cable impedance estimation model is established using the least squares method on data from the distribution SCADA system to improve impedance parameter estimation accuracy. Second, the threshold value of cable impedance is calculated from historical data, and the forecast value of cable impedance is calculated from future forecast data in the distribution SCADA system. Third, a library of cable overheating risk warning rules is established; the cable impedance forecast value is calculated and its rate of change analyzed, and the overheating risk of the cable line is then flagged according to the rules library, based on the relationship between impedance variation and line temperature rise. The method is simulated in the paper. The simulation results show that the method can accurately identify the impedance and forecast the temperature rise of cable lines in a distribution network, and that the overheating risk warning can provide a decision basis for operation, maintenance, and repair.
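The two core steps, a least-squares impedance fit from SCADA samples and a threshold-based warning rule, can be sketched as below. The single-parameter model dV = Z·I and the rule are deliberate simplifications of the paper's method; all names are ours.

```python
def estimate_impedance(delta_v, current):
    """Least-squares fit of impedance magnitude Z from paired SCADA
    samples of voltage drop and current, under the simplified model
    dV = Z * I (minimizing sum of (dV - Z*I)^2)."""
    num = sum(v * i for v, i in zip(delta_v, current))
    den = sum(i * i for i in current)
    return num / den

def overheating_risk(z_forecast, z_threshold):
    """Stand-in for the rules library: flag risk when the forecast
    impedance exceeds the historically derived threshold."""
    return z_forecast > z_threshold
```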

  15. Predicting the hydroxymethylation rate of phenols with formaldehyde by molecular orbital calculation.

    Treesearch

    Tohru Mitsunaga; Anthony H. Conner; Charles G. Hill

    2002-01-01

The rates (k) of hydroxymethylation of phenol, resorcinol, phloroglucinol, and several methylphenols in dilute 10% dimethylformamide aqueous alkaline solution were calculated based on the consumption of phenols and formaldehyde. The k values of phloroglucinol and resorcinol relative to that of phenol were about 62000 and 1200 times, respectively. The phenols that have...

  16. Effect of genome sequence on the force-induced unzipping of a DNA molecule.

    PubMed

    Singh, N; Singh, Y

    2006-02-01

We considered a dsDNA polymer in which the distribution of bases is random at the base pair level but ordered over a length of 18 base pairs, and calculated its force-elongation behaviour in the constant extension ensemble. The unzipping force F(y) vs. extension y is found to have a series of maxima and minima. By changing base pairs at selected places in the molecule, we calculated the change in the F(y) curve and found that the change in the value of the force is of the order of a few pN, and that the range of the effect, depending on the temperature, can spread over several base pairs. We have also discussed briefly how to calculate, in the constant force ensemble, a pause or a jump in the extension-time curve from the knowledge of F(y).

  17. First principles and experimental study of the electronic structure and phase stability of bulk thallium bromide

    NASA Astrophysics Data System (ADS)

    Smith, Holland M.; Zhou, Yuzhi; Ciampi, Guido; Kim, Hadong; Cirignano, Leonard J.; Shah, Kanai S.; Haller, E. E.; Chrzan, D. C.

    2013-08-01

    We apply state-of-the-art first-principles calculations to study the polymorphism and electronic structure of three previously reported phases of TlBr. The calculated band structures of the NaCl-structure and orthorhombic phases have different features than that of the commonly observed CsCl-structure phase. We further interpret photoluminescence spectra based on our calculations. Several peaks close to the calculated band gap values of the NaCl-structure and orthorhombic phases are found in unpolished TlBr samples.

  18. Study on the initial value for the exterior orientation of the mobile version

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    The single mobile vision coordinate measurement system uses a single camera body and a notebook computer at the measurement site to obtain three-dimensional coordinates. Obtaining accurate approximate values of the exterior orientation elements is very important for the follow-up calculation in the measurement process. The problem is a typical space resection, and this topic has been widely studied. Single-image space resection methods mainly fall into two groups: methods based on the co-angular constraint, represented by the co-angular-constraint pose estimation algorithm and the cone angle law, and the direct linear transformation (DLT). One common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation, relatively high demands are placed on the distribution and number of control points: the control points must not all lie in the same plane, and at least six non-coplanar control points are needed, which limits its usefulness. The initial value directly influences the convergence and the convergence speed of the calculation. This paper linearizes the nonlinear collinearity equations, including distortion terms, by Taylor series expansion in order to calculate the initial values of the camera exterior orientation. Finally, experiments show that the resulting initial value is better.

  19. 19 CFR 351.403 - Sales used in calculating normal value; transactions between affiliated parties.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Sales used in calculating normal value... ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and Normal Value § 351.403 Sales used in calculating normal value...

  20. Prediction of surface tension of HFD-like fluids using the Fowler’s approximation

    NASA Astrophysics Data System (ADS)

    Goharshadi, Elaheh K.; Abbaspour, Mohsen

    2006-09-01

    The Fowler expression for the calculation of the reduced surface tension has been applied to simple fluids using the Hartree-Fock dispersion (HFD)-like potential (HFD-like fluids) obtained from the inversion of viscosity collision integrals at zero pressure. To obtain the RDF values needed for the calculation of the surface tension, we performed MD simulations at different temperatures and densities, fitted the results with an expression, and compared the resulting RDFs with experiment. Our results are in excellent agreement with experimental values when the vapor density is taken into account, especially at high temperatures. We also calculated the surface tension using an RDF expression based on the Lennard-Jones (LJ) potential, which was in good agreement with the molecular dynamics simulations. In this work, we show that our results based on the HFD-like potential describe the temperature dependence of the surface tension better than the LJ potential does.

  1. Design and Construction of a Microcontroller-Based Ventilator Synchronized with Pulse Oximeter.

    PubMed

    Gölcük, Adem; Işık, Hakan; Güler, İnan

    2016-07-01

    This study introduces a novel device in which a mechanical ventilator and a pulse oximeter work in synchronization. A serial communication technique was used to enable communication between the pulse oximeter and the ventilator. The SpO2 value and the pulse rate read by the pulse oximeter were transmitted to the mechanical ventilator through transmitter (Tx) and receiver (Rx) lines. The fuzzy-logic-based software developed for the mechanical ventilator interprets these values and calculates the percentage of oxygen (FiO2) and the Positive End-Expiratory Pressure (PEEP) to be delivered to the patient. The software was designed to track the changing medical state of the patient and to produce new results (FiO2 and PEEP) for each new state. In this way, the FiO2 and PEEP values delivered from the ventilator to the patient can be calculated without requiring any arterial blood gas analysis. Our experiments and feedback from physicians show that this device achieves more successful results when compared to current practices.

  2. Shading correction for cone-beam CT in radiotherapy: validation of dose calculation accuracy using clinical images

    NASA Astrophysics Data System (ADS)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2017-03-01

    Cone-beam CT (CBCT) images are routinely acquired to verify patient position in radiotherapy (RT), but are typically not calibrated in Hounsfield Units (HU) and feature non-uniformity due to X-ray scatter and detector persistence effects. This prevents direct use of CBCT for re-calculation of RT delivered dose. We previously developed a prior-image based correction method to restore HU values and improve uniformity of CBCT images. Here we validate the accuracy with which corrected CBCT can be used for dosimetric assessment of RT delivery, using CBCT images and RT plans for 45 patients including pelvis, lung and head sites. Dose distributions were calculated based on each patient's original RT plan and using CBCT image values for tissue heterogeneity correction. Clinically relevant dose metrics were calculated (e.g. median and minimum target dose, maximum organ at risk dose). Accuracy of CBCT based dose metrics was determined using an "override ratio" method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the image is assumed to be constant for each patient, allowing comparison to "gold standard" CT. For pelvis and head images the proportion of dose errors >2% was reduced from 40% to 1.3% after applying shading correction. For lung images the proportion of dose errors >3% was reduced from 66% to 2.2%. Application of shading correction to CBCT images greatly improves their utility for dosimetric assessment of RT delivery, allowing high confidence that CBCT dose calculations are accurate within 2-3%.
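    The override-ratio check described above can be illustrated numerically; the dose metrics below are hypothetical placeholders, not values from the study:

```python
# Hypothetical dose metrics (Gy) for one patient; values are for
# illustration only.
d_ct, d_ct_bulk = 59.8, 60.4      # planning CT: heterogeneous vs bulk-density image
d_cbct, d_cbct_bulk = 59.1, 60.0  # corrected CBCT: heterogeneous vs bulk-density image

# "Override ratio": metric divided by its bulk-density counterpart,
# assumed constant per patient, so the CBCT ratio can be compared
# against the gold-standard CT ratio.
ratio_ct = d_ct / d_ct_bulk
ratio_cbct = d_cbct / d_cbct_bulk
error_pct = 100.0 * (ratio_cbct / ratio_ct - 1.0)
print(f"dose metric error: {error_pct:+.2f}%")
```

    An error within the 2-3% bands quoted above would count the CBCT calculation as accurate for that metric.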

  3. Low-energy proton induced M X-ray production cross sections for 70Yb, 81Tl and 82Pb

    NASA Astrophysics Data System (ADS)

    Shehla; Mandal, A.; Kumar, Ajay; Roy Chowdhury, M.; Puri, Sanjiv; Tribedi, L. C.

    2018-07-01

    The cross sections for production of Mk (k = Mξ, Mαβ, Mγ, Mm1) X-rays of 70Yb, 81Tl and 82Pb induced by 50-250 keV protons have been measured in the present work. The experimental cross sections have been compared with earlier reported values and with those calculated using the ionization cross sections based on the ECPSSR (incident ion Energy (E) loss, Coulomb (C) deflection, Perturbed (P) Stationary (S) State (S) and Relativistic (R) correction) model, the X-ray emission rates based on the Dirac-Fock model, and the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model. In addition, the measured proton-induced X-ray production cross sections have also been compared with those calculated using the DHS model based ionization cross sections and those based on the plane wave Born approximation (PWBA). The measured M X-ray production cross sections are, in general, found to be higher than the ECPSSR and DHS model based values and lower than the PWBA model based cross sections.

  4. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-MOX, R2-UO2 and MORGANE/R configurations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Klann, R. T.; Nuclear Engineering Division

    2007-08-03

    An initial series of calculations of the reactivity-worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.

  5. A real-time monitoring and assessment method for calculation of total amounts of indoor air pollutants emitted in subway stations.

    PubMed

    Oh, TaeSeok; Kim, MinJeong; Lim, JungJin; Kang, OnYu; Shetty, K Vidya; SankaraRao, B; Yoo, ChangKyoo; Park, Jae Hyung; Kim, Jeong Tai

    2012-05-01

    Subway systems are considered a main public transportation facility in developed countries. Time spent by people indoors, such as in underground spaces, subway stations, and buildings, has gradually increased in the recent past. In particular, operators or elderly people who stay in indoor environments more than 15 hr per day are usually influenced to a greater extent by indoor air pollutants. Hence, regulations on indoor air pollutants are needed to ensure good health. Therefore, in this study, a new cumulative calculation method for estimating the total amounts of indoor air pollutants emitted inside a subway station is proposed, based on integrating the cumulative amounts of indoor air pollutants. The minimum concentration of each air pollutant that naturally exists in the indoor space is taken as the base concentration and can be found from the collected data. After subtracting the base concentration from each data point of an indoor air pollutant data set, the primary quantity of emitted air pollutant is calculated. After integration is carried out with these values, adding the base concentration to the integrated quantity gives the total amount of indoor air pollutant emitted. Moreover, the values of a new index for cumulative indoor air quality over 1 day are calculated using the cumulative air quality index (CAI). A cumulative comprehensive indoor air quality index (CCIAI) is also proposed to compare cumulative concentrations of indoor air pollutants. The results show that the cumulative assessment approach to indoor air quality (IAQ) is useful for monitoring the total amounts of indoor air pollutants emitted in the case of long-term exposure. The values of CCIAI are influenced most by the concentration of NO2, which is released by the use of air conditioners and the combustion of fuel.
The results obtained in this study confirm that the proposed method can be applied to monitor total amounts of indoor air pollutants emitted inside apartments and hospitals as well.
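    The cumulative calculation described above (subtract a base concentration, then integrate the excess over time) can be sketched as follows; the pollutant readings and sampling interval are hypothetical:

```python
# Hypothetical 1-min NO2 readings (ppm) at a platform sensor.
readings = [0.031, 0.030, 0.034, 0.040, 0.038, 0.033, 0.031]
dt_min = 1.0

base = min(readings)               # base concentration: minimum observed level
excess = [c - base for c in readings]

# Trapezoidal integration of the excess concentration over time gives
# the cumulative emitted quantity (ppm*min).
cumulative = sum(0.5 * (excess[k] + excess[k + 1]) * dt_min
                 for k in range(len(excess) - 1))
print(f"cumulative emitted NO2: {cumulative:.4f} ppm*min")
```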

  6. A Method for Establishing a Depreciated Monetary Value for Print Collections.

    ERIC Educational Resources Information Center

    Marman, Edward

    1995-01-01

    Outlines a method for establishing a depreciated value of a library collection and includes an example of applying the formula for calculating depreciation. The method is based on the useful life of books, other print, and audio visual materials; their original cost; and on sampling subsets or sections of the collection. (JKP)
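    Marman's exact formula is not reproduced in the abstract; a generic straight-line depreciation sketch over a sampled subset of the collection, under assumed useful lives and costs, might look like:

```python
# Hypothetical holdings sample: (original cost, age in years,
# assumed useful life in years) for a few sampled titles.
sample = [(24.00, 3, 10), (48.00, 12, 10), (15.50, 5, 10)]

def depreciated_value(cost, age, life):
    """Straight-line depreciation; fully depreciated items retain zero value."""
    remaining = max(life - age, 0)
    return cost * remaining / life

total = sum(depreciated_value(c, a, l) for c, a, l in sample)
print(f"depreciated sample value: ${total:.2f}")
```

    Scaling the sample total by the ratio of collection size to sample size would then estimate the whole collection's depreciated value.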

  7. Spectroscopy-based thrust sensor for high-speed gaseous flows

    NASA Technical Reports Server (NTRS)

    Hanson, Ronald K. (Inventor)

    1993-01-01

    A system and method for non-intrusively obtaining the thrust value of combustion by-products of a jet engine is disclosed herein. The system includes laser elements for inducing absorption for use in determining the axial velocity and density of the jet flow stream and elements for calculating the thrust value therefrom.

  8. INFLUENCE OF AQUEOUS ALUMINUM AND ORGANIC ACIDS ON MEASUREMENT OF ACID NEUTRALIZING CAPACITY IN SURFACE WATERS

    EPA Science Inventory

    Acid neutralizing capacity (ANC) is used to quantify the acid-base status of surface waters. Acidic waters have been defined as having ANC values less than zero, and acidification is often quantified by decreases in ANC. Measured and calculated values of ANC generally agree, exce...

  9. DFT and AIM study of the protonation of nitrous acid and the pKa of nitrous acidium ion.

    PubMed

    Crugeiras, Juan; Ríos, Ana; Maskill, Howard

    2011-11-10

    The gas phase and aqueous thermochemistry, NMR chemical shifts, and the topology of chemical bonding of nitrous acid (HONO) and nitrous acidium ion (H(2)ONO(+)) have been investigated by ab initio methods using density functional theory. By the same methods, the dissociation of H(2)ONO(+) to give the nitrosonium ion (NO(+)) and water has also been investigated. We have used Becke's hybrid functional (B3LYP), and geometry optimizations were performed with the 6-311++G(d,p) basis set. In addition, highly accurate ab initio composite methods (G3 and CBS-Q) were used. Solvation energies were calculated using the conductor-like polarizable continuum model, CPCM, at the B3LYP/6-311++G(d,p) level of theory, with the UAKS cavity model. The pK(a) value of H(2)ONO(+) was calculated using two different schemes: the direct method and the proton exchange method. The calculated pK(a) values at different levels of theory range from -9.4 to -15.6, showing that H(2)ONO(+) is a strong acid (i.e., HONO is only a weak base). The equilibrium constant, K(R), for protonation of nitrous acid followed by dissociation to give NO(+) and H(2)O has also been calculated using the same methodologies. The pK(R) value calculated by the G3 and CBS-QB3 methods is in best (and satisfactory) agreement with experimental results, which allows us to narrow down the likely value of the pK(a) of H(2)ONO(+) to about -10, a value appreciably more acidic than literature values.
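    The direct method referred to above converts an aqueous deprotonation free energy into a pKa via pKa = ΔG(aq)/(RT ln 10); the ΔG value below is an illustrative assumption, not a number from the paper:

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # temperature, K

# Hypothetical aqueous deprotonation free energy (kJ/mol) for
# H2ONO+ -> HONO + H+; chosen only to land near the pKa ~ -10
# regime discussed in the abstract.
dG_aq = -62.0

pKa = dG_aq / (R * T * math.log(10))
print(f"pKa = {pKa:.1f}")
```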

  10. Characterization of Heat Treated Titanium-Based Implants by Nondestructive Eddy Current and Ultrasonic Tests

    NASA Astrophysics Data System (ADS)

    Mutlu, Ilven; Ekinci, Sinasi; Oktay, Enver

    2014-06-01

    This study presents nondestructive characterization of microstructure and mechanical properties of heat treated Ti, Ti-Cu, and Ti-6Al-4V titanium-based alloys and 17-4 PH stainless steel alloy for biomedical implant applications. Ti, Ti-Cu, and 17-4 PH stainless steel based implants were produced by powder metallurgy. Ti-6Al-4V alloy was investigated as bulk wrought specimens. Effects of sintering temperature, aging, and grain size on mechanical properties were investigated by nondestructive and destructive tests comparatively. Ultrasonic velocity in specimens was measured by using pulse-echo and transmission methods. Electrical conductivity of specimens was determined by eddy current tests. Determination of Young's modulus and strength is important in biomedical implants. Young's modulus of specimens was calculated by using ultrasonic velocities. Calculated Young's modulus values were compared and correlated with experimental values.
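    Calculating Young's modulus from ultrasonic velocities uses the standard isotropic elasticity relations; the density and velocities below are hypothetical values for a sintered Ti specimen, not measurements from the study:

```python
# Hypothetical measured values for a sintered Ti specimen.
rho = 4200.0      # density, kg/m^3
v_l = 5800.0      # longitudinal wave velocity, m/s
v_s = 3100.0      # shear wave velocity, m/s

# Standard isotropic relations: Young's modulus and Poisson's ratio
# from longitudinal and shear ultrasonic velocities.
E = rho * v_s**2 * (3 * v_l**2 - 4 * v_s**2) / (v_l**2 - v_s**2)
nu = (v_l**2 - 2 * v_s**2) / (2 * (v_l**2 - v_s**2))
print(f"E = {E/1e9:.1f} GPa, Poisson ratio = {nu:.3f}")
```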

  11. Earthquake hazard analysis for the different regions in and around Aǧrı

    NASA Astrophysics Data System (ADS)

    Bayrak, Erdem; Yilmaz, Şeyda; Bayrak, Yusuf

    2016-04-01

    We investigated earthquake hazard parameters for the Eastern part of Turkey by determining the a and b parameters of the Gutenberg-Richter magnitude-frequency relationship. For this purpose, the study area was divided into seven source zones based on their tectonic and seismotectonic regimes. The database used in this work was compiled for the instrumental period from different sources and catalogues, such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK). We calculated the a value and the b value, the slope of the Gutenberg-Richter frequency-magnitude relationship, using the maximum likelihood (ML) method. We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of an earthquake of magnitude ≥ M occurring during a time span of t years. We used the Zmap software to calculate these parameters. The lowest b value was calculated in Region 1, covering the Cobandede Fault Zone, and the highest a value was obtained in Region 2, covering the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which is largest (87%) for an earthquake with magnitude greater than or equal to 6.0; the mean return period for such a magnitude is lowest in this region (49 years). The most probable magnitude in the next 100 years was also calculated, with the highest value around the Cobandede Fault Zone. According to these parameters, Region 1, covering the Cobandede Fault Zone, is the most dangerous area in the Eastern part of Turkey.
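    The maximum-likelihood b value (Aki's estimator), the annual a value, and a mean return period can be sketched as follows, using a small hypothetical catalogue and an assumed observation span:

```python
import math

# Hypothetical catalogue magnitudes above completeness Mc = 3.0.
mags = [3.1, 3.3, 3.2, 3.8, 4.1, 3.5, 3.4, 4.6, 3.6, 3.9]
mc = 3.0

# Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b value.
b = math.log10(math.e) / (sum(mags) / len(mags) - mc)

# Annual a value from N events observed over an assumed span of t years.
t_years = 20.0
a = math.log10(len(mags) / t_years) + b * mc

# Mean return period for M >= 6.0 under the fitted relation
# log10(N) = a - b*M.
T = 1.0 / 10 ** (a - b * 6.0)
print(f"b = {b:.2f}, a = {a:.2f}, return period(M6) = {T:.0f} years")
```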

  12. Validation of a program for supercritical power plant calculations

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian

    2011-12-01

    This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated using in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle and heat taken from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values. The method used for relative error calculation is presented in the paper. The resulting relative errors are very slight, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
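    The relative-error comparison can be sketched as below; the paired model outputs are illustrative placeholders, not the study's numbers:

```python
# Hypothetical paired outputs from the two models: net power (W),
# cycle efficiency (-), heat input (W).
gatecycle = [600.0e6, 0.4512, 1120.5e6]
inhouse = [600.2e6, 0.4509, 1121.1e6]

# Relative error of each GateCycle value against the in-house result,
# expressed in percent.
rel_err_pct = [abs(g - r) / r * 100.0 for g, r in zip(gatecycle, inhouse)]
print([f"{e:.4f}%" for e in rel_err_pct])
assert max(rel_err_pct) < 0.1   # consistent with the reported sub-0.1% agreement
```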

  13. A prognostic classifier for patients with colorectal cancer liver metastasis, based on AURKA, PTGS2 and MMP9.

    PubMed

    Goos, Jeroen A C M; Coupé, Veerle M H; van de Wiel, Mark A; Diosdado, Begoña; Delis-Van Diemen, Pien M; Hiemstra, Annemieke C; de Cuba, Erienne M V; Beliën, Jeroen A M; Menke-van der Houven van Oordt, C Willemien; Geldof, Albert A; Meijer, Gerrit A; Hoekstra, Otto S; Fijneman, Remond J A

    2016-01-12

    Prognosis of patients with colorectal cancer liver metastasis (CRCLM) is estimated based on clinicopathological models. Stratifying patients based on tumor biology may have additional value. Tissue micro-arrays (TMAs), containing resected CRCLM and corresponding primary tumors from a multi-institutional cohort of 507 patients, were immunohistochemically stained for 18 candidate biomarkers. Cross-validated hazard rate ratios (HRRs) for overall survival (OS) and the proportions of HRRs with opposite effect (P(HRR < 1) or P(HRR > 1)) were calculated. A classifier was constructed by classification and regression tree (CART) analysis and its prognostic value determined by permutation analysis. Correlations between protein expression in primary tumor-CRCLM pairs were calculated. Based on their putative prognostic value, EGFR (P(HRR < 1) = .02), AURKA (P(HRR < 1) = .02), VEGFA (P(HRR < 1) = .02), PTGS2 (P(HRR < 1) = .01), SLC2A1 (P(HRR > 1) < .01), HIF1α (P(HRR > 1) = .06), KCNQ1 (P(HRR > 1) = .09), CEA (P(HRR > 1) = .05) and MMP9 (P(HRR < 1) = .07) were included in the CART analysis (n = 201). The resulting classifier was based on AURKA, PTGS2 and MMP9 expression and was associated with OS (HRR 2.79, p < .001), also after multivariate analysis (HRR 3.57, p < .001). The prognostic value of the biomarker-based classifier was superior to the clinicopathological model (p = .001). Prognostic value was highest for colon cancer patients (HRR 5.71, p < .001) and patients not treated with systemic therapy (HRR 3.48, p < .01). Classification based on protein expression in primary tumors could be based on AURKA expression only (HRR 2.59, p = .04). In conclusion, a classifier was generated for patients with CRCLM with improved prognostic value compared to the standard clinicopathological prognostic parameters, which may aid selection of patients who may benefit from adjuvant systemic therapy.

  14. Risk assessment of trace elements in the stomach contents of Indo-Pacific Humpback Dolphins and Finless Porpoises in Hong Kong waters.

    PubMed

    Hung, Craig L H; Lau, Ridge K F; Lam, James C W; Jefferson, Thomas A; Hung, Samuel K; Lam, Michael H W; Lam, Paul K S

    2007-01-01

    The potential health risks due to inorganic substances, mainly metals, were evaluated for the two resident marine mammals in Hong Kong, the Indo-Pacific Humpback Dolphin (Sousa chinensis) and the Finless Porpoise (Neophocaena phocaenoides). The stomachs from the carcasses of twelve stranded dolphins and fifteen stranded porpoises were collected and their contents examined. Concentrations of thirteen trace elements (Ag, As, Cd, Co, Cr, Cs, Cu, Hg, Mn, Ni, Se, V and Zn) were determined by inductively coupled plasma mass spectrometry (ICP-MS). An assessment of the risks of adverse effects was undertaken using two toxicity guideline values, namely the Reference Dose (RfD), commonly used in human health risk assessment, and the Toxicity Reference Value (TRV), based on terrestrial mammal data. The levels of trace metals in the stomach contents of dolphins and porpoises were found to be similar. Risk quotients (RQs) calculated for the trace elements showed that risks to the dolphins and porpoises were generally low and within safe limits using the TRV-based values, which are less conservative than the RfD-based values. Using the RfD-based values, the risks associated with arsenic, cadmium, chromium, copper, nickel and mercury were comparatively higher. The highest RQ was associated with arsenic; however, most of the arsenic in marine organisms is expected to be in the non-toxic organic form, and thus the calculated risk is likely an overestimate.
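    A risk-quotient screening of the kind described above divides estimated exposure by a toxicity guideline value; the intakes and guideline values below are illustrative assumptions, not figures from the study:

```python
# Hypothetical daily intake estimates (mg per kg body weight per day)
# derived from stomach-content concentrations, with illustrative
# RfD-style guideline values.
intake = {"As": 0.004, "Cd": 0.0006, "Hg": 0.0002}
guideline = {"As": 0.0003, "Cd": 0.001, "Hg": 0.0001}

# Risk quotient: estimated exposure divided by the toxicity guideline;
# RQ > 1 flags a potential risk.
rq = {el: intake[el] / guideline[el] for el in intake}
for el, q in sorted(rq.items(), key=lambda kv: -kv[1]):
    flag = "potential risk" if q > 1 else "within safe limit"
    print(f"{el}: RQ = {q:.2f} ({flag})")
```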

  15. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  16. Electronic, elastic and optical properties of divalent (R+2X) and trivalent (R+3X) rare earth monochalcogenides

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Chandra, S.; Singh, J. K.

    2017-08-01

    Based on the plasma oscillations theory of solids, simple relations have been proposed for the calculation of the bond length, specific gravity, homopolar energy gap, heteropolar energy gap, average energy gap, crystal ionicity, bulk modulus, electronic polarizability and dielectric constant of rare earth divalent R+2X and trivalent R+3X monochalcogenides. The specific gravity of nine R+2X and twenty R+3X compounds, and the bulk modulus of twenty R+3X monochalcogenides, have been calculated for the first time. The calculated values of all parameters are compared with the available experimental and reported values, and a fairly good agreement is obtained between them. The average percentage deviations of the two parameters for which experimental data are known, bulk modulus and electronic polarizability, have also been calculated and found to be better than those of the earlier correlations.

  17. Passenger car equivalents of becak bermotor at road segment in Medan

    NASA Astrophysics Data System (ADS)

    Surbakti, M. S.; Sembiring, I.

    2018-02-01

    Road traffic systems, travel patterns and other traffic characteristics differ between countries due to differences in geometric patterns, the transport facilities available to commuters, the proportion and types of vehicles, and so on. In Indonesia, the standard pce (Passenger Car Equivalent) values are found in the IHCM (Indonesian Highway Capacity Manual), published in 1997, which states that the pce values for heavy vehicles and motorcycles are 1.3 and 0.5, respectively. Today, in Medan, the third biggest city in Indonesia, there have been many changes in the composition of vehicles as well as in the variety of vehicle types. The becak bermotor (motorized tricycle) is a vehicle that is widely available in the city of Medan: data from the Medan City Transportation Department indicate that more than 20,000 motorized tricycles currently operate in the city. The pce value of these tricycles was calculated based on observations at road segments and intersections in Medan. The calculation results show that the pce value of the motorized tricycle is more than 1. This value will allow calculations of traffic performance to be carried out more accurately.

  18. Net alkalinity and net acidity 2: Practical considerations

    USGS Publications Warehouse

    Kirby, C.S.; Cravotta, C.A.

    2005-01-01

    The pH, alkalinity, and acidity of mine drainage and associated waters can be misinterpreted because of the chemical instability of samples and possible misunderstandings of standard analytical method results. Synthetic and field samples of mine drainage having various initial pH values and concentrations of dissolved metals and alkalinity were titrated by several methods, and the results were compared to alkalinity and acidity calculated based on dissolved solutes. The pH, alkalinity, and acidity were compared between fresh, unoxidized and aged, oxidized samples. Data for Pennsylvania coal mine drainage indicate that the pH of fresh samples was predominantly acidic (pH 2.5-4) or near neutral (pH 6-7); approximately 25% of the samples had pH values between 5 and 6. Following oxidation, no samples had pH values between 5 and 6. The Standard Method Alkalinity titration is constrained to yield values >0. Most calculated and measured alkalinities for samples with positive alkalinities were in close agreement. However, for low-pH samples, the calculated alkalinity can be negative due to negative contributions by dissolved metals that may oxidize and hydrolyze. The Standard Method hot peroxide treatment titration for acidity determination (Hot Acidity) accurately indicates the potential for pH to decrease to acidic values after complete degassing of CO2 and oxidation of Fe and Mn, and it indicates either the excess alkalinity or that required for neutralization of the sample. The Hot Acidity directly measures net acidity (= -net alkalinity). Samples that had near-neutral pH after oxidation had negative Hot Acidity; samples that had pH < 6.3 after oxidation had positive Hot Acidity. Samples with similar pH values before oxidation had dissimilar Hot Acidities due to variations in their alkalinities and dissolved Fe, Mn, and Al concentrations.
Hot Acidity was approximately equal to net acidity calculated based on initial pH and dissolved concentrations of Fe, Mn, and Al minus the initial alkalinity. Acidity calculated from the pH and dissolved metals concentrations, assuming equivalents of 2 per mole of Fe and Mn and 3 per mole of Al, was equivalent to that calculated based on complete aqueous speciation of FeII/FeIII. Despite changes in the pH, alkalinity, and metals concentrations, the Hot Acidities were comparable for fresh and most aged samples. A meaningful "net" acidity can be determined from a measured Hot Acidity or by calculation from the pH, alkalinity, and dissolved metals concentrations. The use of net alkalinity = (Alkalinity(measured) - Hot Acidity(measured)) to design mine drainage treatment can lead to systems with insufficient alkalinity to neutralize metal and H+ acidity and is not recommended. The use of net alkalinity = -Hot Acidity(titration) is recommended for the planning of mine drainage treatment. The use of net alkalinity = (Alkalinity(measured) - Acidity(calculated)) is recommended with some cautions. © 2005 Elsevier Ltd. All rights reserved.
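The calculated net acidity described in the abstract (2 equivalents per mole of Fe and Mn, 3 per mole of Al, minus alkalinity) can be sketched as follows; the water-quality values are hypothetical, and standard atomic masses are assumed:

```python
# Net acidity (mg/L as CaCO3) from pH, dissolved metals (mg/L) and
# alkalinity, using 2 eq/mol for Fe and Mn and 3 eq/mol for Al.
# Sample values below are illustrative, not from the paper.
pH, fe, mn, al, alkalinity = 3.5, 40.0, 5.0, 10.0, 0.0

# 50 g/eq is the CaCO3 equivalent weight; 1000 * 10**-pH converts
# the proton activity to mmol/L.
acidity = 50.0 * (1000.0 * 10 ** -pH
                  + 2.0 * fe / 55.85
                  + 2.0 * mn / 54.94
                  + 3.0 * al / 26.98) - alkalinity
print(f"calculated net acidity: {acidity:.1f} mg/L as CaCO3")
```

A positive result indicates a net-acidic water requiring neutralization; a negative result indicates net-alkaline water.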

  19. New procedure for the determination of Hansen solubility parameters by means of inverse gas chromatography.

    PubMed

    Adamska, K; Bellinghausen, R; Voelkel, A

    2008-06-27

    The Hansen solubility parameter (HSP) seems to be a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimation of HSP values can cause some problems. In this work, different procedures using inverse gas chromatography are presented for calculating the solubility parameter of pharmaceutical excipients. The proposed new procedure, based on the methodology of Lindvig et al., in which experimental Flory-Huggins interaction parameter data are used, can be a reasonable alternative for the estimation of HSP values. The advantage of this method is that the values of the Flory-Huggins interaction parameter chi for all test solutes are used in the calculation, so that diverse interactions between the test solutes and the material are taken into consideration.

  20. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

    In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; the Choquet integral is then introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
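    The bottom-up aggregation step can be illustrated with a discrete Choquet integral. The fuzzy measure below is illustrative only; the paper derives its measure from the maximum Shapley entropy principle, which is not reproduced here.

```python
def choquet(values, mu):
    """Discrete Choquet integral of `values` (dict: attribute -> score)
    with respect to the fuzzy measure `mu` (dict: frozenset -> weight).
    Standard formula: sum over sorted scores of (x_(i) - x_(i-1)) * mu(A_i),
    where A_i is the set of attributes with score >= x_(i)."""
    attrs = sorted(values, key=values.get)  # ascending by score
    total, prev = 0.0, 0.0
    remaining = set(values)
    for a in attrs:
        total += (values[a] - prev) * mu[frozenset(remaining)]
        prev = values[a]
        remaining.remove(a)
    return total

# Illustrative two-attribute example with an additive measure,
# where the Choquet integral reduces to a weighted mean:
scores = {"economy": 0.2, "environment": 0.5}
measure = {frozenset({"economy", "environment"}): 1.0,
           frozenset({"economy"}): 0.5,
           frozenset({"environment"}): 0.5}
```

    With a non-additive measure, interactions between attributes (redundancy or synergy) change the aggregate, which is the point of using the Choquet integral rather than a simple weighted sum.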

  1. Bicarbonate Values for Healthy Residents Living in Cities Above 1500 Meters of Altitude: A Theoretical Model and Systematic Review.

    PubMed

    Ramirez-Sandoval, Juan C; Castilla-Peón, Maria F; Gotés-Palazuelos, José; Vázquez-García, Juan C; Wagner, Michael P; Merelo-Arias, Carlos A; Vega-Vega, Olynka; Rincón-Pedrero, Rodolfo; Correa-Rotter, Ricardo

    2016-06-01

    Ramirez-Sandoval, Juan C., Maria F. Castilla-Peón, José Gotés-Palazuelos, Juan C. Vázquez-García, Michael P. Wagner, Carlos A. Merelo-Arias, Olynka Vega-Vega, Rodolfo Rincón-Pedrero, and Ricardo Correa-Rotter. Bicarbonate values for healthy residents living in cities above 1500 m of altitude: a theoretical model and systematic review. High Alt Med Biol. 17:85-92, 2016.-Plasma bicarbonate (HCO3(-)) concentration is the main value used to assess the metabolic component of the acid-base status. There is limited information regarding plasma HCO3(-) values adjusted for altitude for people living in cities at high altitude, defined as 1500 m (4921 ft) or more above sea level. Our aim was to estimate the plasma HCO3(-) concentration in residents of cities at these altitudes using a theoretical model, and to compare these values with HCO3(-) values found in a systematic review and with venous CO2 values obtained in a sample of 633 healthy individuals living at an altitude of 2240 m (7350 ft). We calculated the PCO2 using linear regression models and calculated plasma HCO3(-) according to the Henderson-Hasselbalch equation. Results show that the HCO3(-) concentration falls as the altitude of the cities increases. For each 1000 m of altitude above sea level, HCO3(-) decreases by 0.55 and 1.5 mEq/L in subjects living at sea level with acute exposure to altitude and in subjects acclimatized to altitude, respectively. Estimated HCO3(-) values from the theoretical model were not different from HCO3(-) values found in publications of the systematic review or from venous total CO2 measurements in our sample. Altitude has to be taken into consideration in the calculation of HCO3(-) concentrations in cities above 1500 m to avoid an overdiagnosis of acid-base disorders in a given individual.
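    The Henderson-Hasselbalch step of the theoretical model can be sketched as follows, assuming the conventional pKa of 6.1 and a CO2 solubility coefficient of 0.0307 mmol/L per mmHg. The study's regression models for PCO2 versus altitude are not reproduced, so PCO2 is taken as an input here.

```python
def bicarbonate(pH, pco2_mmHg):
    """Plasma HCO3- (mEq/L) from the Henderson-Hasselbalch equation:
    pH = 6.1 + log10(HCO3- / (0.0307 * PCO2)), solved for HCO3-."""
    return 0.0307 * pco2_mmHg * 10 ** (pH - 6.1)
```

    At a normal sea-level pH of 7.40 and PCO2 of 40 mmHg this gives roughly 24-25 mEq/L; the lower PCO2 of acclimatized high-altitude residents drives the lower HCO3- values reported above.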

  2. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in calculations of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over all projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.

  3. Polarization-sensitive optical coherence tomography using continuous polarization modulation with arbitrary phase modulation amplitude

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2012-03-01

    We demonstrate theoretically and experimentally that the phase retardance and relative optic-axis orientation of a sample can be calculated without prior knowledge of the actual value of the phase modulation amplitude when using a polarization-sensitive optical coherence tomography system based on continuous polarization modulation (CPM-PS-OCT). We also demonstrate that the sample Jones matrix can be calculated at any values of the phase modulation amplitude in a reasonable range depending on the system effective signal-to-noise ratio. This has fundamental importance for the development of clinical systems by simplifying the polarization modulator drive instrumentation and eliminating its calibration procedure. This was validated on measurements of a three-quarter waveplate and an equine tendon sample by a fiber-based swept-source CPM-PS-OCT system.

  4. Slow light generation in single-mode rectangular core photonic crystal fiber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Sandeep; Saini, Than Singh; Kumar, Ajeet, E-mail: ajeetdph@gmail.com

    2016-05-06

    In this paper, we have designed and analyzed a rectangular core photonic crystal fiber (PCF) in tellurite material. For the designed photonic crystal fiber, we have calculated the values of confinement loss and effective mode area for different values of the air filling fraction (d/Λ). For single-mode operation of the designed photonic crystal fiber, we have taken d/Λ = 0.4 for the further calculation of stimulated Brillouin scattering based time delay. A maximum time delay of 158 ns has been achieved for an input pump power of 39 mW. We feel the detailed theoretical investigations and simulations carried out in the study have a potential impact on the design and development of slow light-based photonic devices.

  5. Liquid-liquid equilibria for the ternary systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.; Kim, H.

    1995-03-01

    Sulfolane is widely used as a solvent for the extraction of aromatic hydrocarbons. Ternary phase equilibrium data are essential for the proper understanding of the solvent extraction process. Liquid-liquid equilibrium data for the systems sulfolane + octane + benzene, sulfolane + octane + toluene and sulfolane + octane + p-xylene were determined at 298.15, 308.15, and 318.15 K. Tie line data were satisfactorily correlated by the Othmer and Tobias method. The experimental data were compared with the values calculated by the UNIQUAC and NRTL models. Good quantitative agreement was obtained with these models. However, the calculated values based on the NRTL model were found to be better than those based on the UNIQUAC model.

  6. QSPR models for various physical properties of carbohydrates based on molecular mechanics and quantum chemical calculations.

    PubMed

    Dyekjaer, Jane Dannow; Jónsdóttir, Svava Osk

    2004-01-22

    Quantitative Structure-Property Relationships (QSPR) have been developed for a series of monosaccharides, including the physical properties of partial molar heat capacity, heat of solution, melting point, heat of fusion, glass-transition temperature, and solid state density. The models were based on molecular descriptors obtained from molecular mechanics and quantum chemical calculations, combined with other types of descriptors. Saccharides exhibit a large degree of conformational flexibility, therefore a methodology for selecting the energetically most favorable conformers has been developed, and was used for the development of the QSPR models. In most cases good correlations were obtained for monosaccharides. For five of the properties predictions were made for disaccharides, and the predicted values for the partial molar heat capacities were in excellent agreement with experimental values.

  7. Establishing the cut off values of androgen markers in the assessment of polycystic ovarian syndrome.

    PubMed

    Nadaraja, R N D; Sthaneshwar, P; Razali, N

    2018-04-01

    Hyperandrogenism remains one of the key features of Polycystic Ovarian Syndrome (PCOS) and can be assessed clinically or determined by biochemical assays. Hirsutism is the most common clinical manifestation of hyperandrogenism. The clinical assessment is subject to wide variability due to poor interobserver agreement and multiple population factors such as ethnic variation, cosmetic procedures and genetic traits. The difficulty in resolving androgen excess biochemically is due to a lack of consensus as to which serum androgen should be measured for the diagnosis of PCOS. The aim of the study was to compare different androgen biomarkers and establish diagnostic cut-off values for the diagnosis of PCOS. A total of 312 patients, classified into PCOS (n = 164) and non-PCOS (n = 148) cohorts, were selected from the Laboratory Information System (LIS) based on serum total testosterone (TT) and sex hormone binding globulin (SHBG) for the period 1st April 2015 to 31st March 2016. PCOS was diagnosed based on the Rotterdam criteria. Clinical hyperandrogenism and ultrasound polycystic ovarian morphology were obtained from the clinical records. Other relevant biochemical results such as serum luteinizing hormone (LH), follicle stimulating hormone (FSH) and albumin were also obtained from the LIS. The free androgen index (FAI), calculated free testosterone (cFT) and calculated bioavailable testosterone (cBT) were derived for these patients. Receiver Operating Characteristic (ROC) curve analysis was performed for serum TT, SHBG, FAI, cFT, cBT and the LH:FSH ratio to determine the best marker for diagnosing PCOS. All the androgen parameters (except SHBG) were significantly higher in PCOS patients than in controls (p < 0.0001). The highest area under the curve (AUC) was found for cBT, followed by cFT and FAI. TT and the LH:FSH ratio recorded lower AUCs, and the lowest AUC was seen for SHBG. cBT at a cut-off value of 0.86 nmol/L had the highest specificity (83%) and a positive likelihood ratio (LR) of 3.79. This is followed by FAI at a cut-off value of 7.1% with a specificity of 82%, and cFT at a cut-off value of 0.8 pmol/L with a specificity of 80%. All three calculated androgen indices (FAI, cFT and cBT) showed good correlation with each other. Furthermore, cFT, FAI and cBT were shown to be more specific, with higher positive likelihood ratios, than the measured androgen markers. Based on our study, calculated testosterone indices such as FAI, cBT and cFT are useful markers to distinguish PCOS from non-PCOS. Owing to its ease of calculation, FAI can be incorporated in the LIS and reported together with TT and SHBG. This will be helpful for clinicians diagnosing hyperandrogenism in PCOS.
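The free androgen index mentioned above is conventionally calculated as 100 × total testosterone / SHBG, both in nmol/L. A minimal sketch, assuming the study uses this standard formula and the 7.1% cut-off reported above (the helper names are illustrative, not the authors' code):

```python
def free_androgen_index(tt_nmol_l, shbg_nmol_l):
    """Free androgen index (%) = 100 * total testosterone / SHBG,
    both in nmol/L (the conventional formula)."""
    return 100.0 * tt_nmol_l / shbg_nmol_l

def fai_positive(tt_nmol_l, shbg_nmol_l, cutoff=7.1):
    """Flag hyperandrogenism using the FAI cut-off reported in the study."""
    return free_androgen_index(tt_nmol_l, shbg_nmol_l) > cutoff
```

This simplicity, requiring only two measurements already reported by the laboratory, is why the authors suggest implementing FAI directly in the LIS.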

  8. Turbo FRMAC 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John; Gallagher, Linda K.; Whitener, Dustin

    The Turbo FRMAC (TF) software automates the calculations described in volumes 1-3 of "The Federal Manual for Assessing Environmental Data During a Radiological Emergency" (2010 version). This software automates the process of assessing radiological data during a Federal Radiological Emergency. The manual upon which the software is based is unclassified and freely available on the Internet. TF takes values generated by field samples or computer dispersion models and assesses the data in a way that is meaningful to a decision maker at a radiological emergency: for example, do radiation values exceed city, state, or federal limits; should the crops be destroyed or can they be utilized; do residents need to be evacuated or sheltered in place, or should another action be taken. The software also uses formulas generated by the EPA, FDA, and other federal agencies to generate field-observable values specific to the radiological event that can be used to determine where regulatory limit values are exceeded. In addition to these calculations, TF calculates values that indicate how long an emergency worker can work in the contaminated area during a radiological emergency, the dose received from drinking contaminated water or milk, the dose from eating contaminated food, and the dose expected downwind or upwind of a given field sample, along with a number of other similar radiological health values.

  9. Photolysis Rate Coefficient Calculations in Support of SOLVE II

    NASA Technical Reports Server (NTRS)

    Swartz, William H.

    2005-01-01

    A quantitative understanding of photolysis rate coefficients (or "j-values") is essential to determining the photochemical reaction rates that define ozone loss and other crucial processes in the atmosphere. j-Values can be calculated with radiative transfer models, derived from actinic flux observations, or inferred from trace gas measurements. The primary objective of the present effort was the accurate calculation of j-values in the Arctic twilight along NASA DC-8 flight tracks during the second SAGE III Ozone Loss and Validation Experiment (SOLVE II), based in Kiruna, Sweden (68 degrees N, 20 degrees E) during January-February 2003. The JHU/APL radiative transfer model was utilized to produce a large suite of j-values for photolysis processes (over 70 reactions) relevant to the upper troposphere and lower stratosphere. The calculations take into account the actual changes in ozone abundance and apparent albedo of clouds and the Earth surface along the aircraft flight tracks as observed by in situ and remote sensing platforms (e.g., EP-TOMS). A secondary objective was to analyze solar irradiance data from NCAR's Direct beam Irradiance Atmospheric Spectrometer (DIAS) on board the NASA DC-8 and to start the development of a flexible, multi-species spectral fitting technique for the independent retrieval of O3, O2·O2, and aerosol optical properties.

  10. Reference values of thirty-one frequently used laboratory markers for 75-year-old males and females

    PubMed Central

    Ryden, Ingvar; Lind, Lars

    2012-01-01

    Background We have previously reported reference values for common clinical chemistry tests in healthy 70-year-old males and females. We have now repeated this study 5 years later to establish reference values also at the age of 75. It is important to have adequate reference values for elderly patients as biological markers may change over time, and adequate reference values are essential for correct clinical decisions. Methods We have investigated 31 frequently used laboratory markers in 75-year-old males (n = 354) and females (n = 373) without diabetes. The 2.5 and 97.5 percentiles for these markers were calculated according to the recommendations of the International Federation of Clinical Chemistry. Results Reference values are reported for 75-year-old males and females for 31 frequently used laboratory markers. Conclusion There were minor differences between reference intervals calculated with and without individuals with cardiovascular diseases. Several of the reference intervals differed from Scandinavian reference intervals based on younger individuals (Nordic Reference Interval Project). PMID:22300333
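    The 2.5th/97.5th percentile reference-interval calculation can be sketched as below. This uses simple linear interpolation between order statistics, which is one common non-parametric variant; the IFCC recommendation followed by the study includes additional requirements (e.g., minimum sample size and outlier handling) not reproduced here.

```python
def reference_interval(values, lower=2.5, upper=97.5):
    """Non-parametric reference interval: the lower and upper percentiles
    of the sorted data, with linear interpolation between order statistics."""
    xs = sorted(values)
    n = len(xs)

    def pct(p):
        rank = (p / 100.0) * (n - 1)   # 0-based fractional rank
        lo = int(rank)
        hi = min(lo + 1, n - 1)
        frac = rank - lo
        return xs[lo] + frac * (xs[hi] - xs[lo])

    return pct(lower), pct(upper)
```

    For each of the 31 markers, the interval would be computed separately for males and females from the respective cohorts.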

  11. Development of a web-based CT dose calculator: WAZA-ARI.

    PubMed

    Ban, N; Takahashi, F; Sato, K; Endo, A; Ono, K; Hasegawa, T; Yoshitake, T; Katsunuma, Y; Kai, M

    2011-09-01

    A web-based computed tomography (CT) dose calculation system (WAZA-ARI) is being developed based on the modern techniques for the radiation transport simulation and for software implementation. Dose coefficients were calculated in a voxel-type Japanese adult male phantom (JM phantom), using the Particle and Heavy Ion Transport code System. In the Monte Carlo simulation, the phantom was irradiated with a 5-mm-thick, fan-shaped photon beam rotating in a plane normal to the body axis. The dose coefficients were integrated into the system, which runs as Java servlets within Apache Tomcat. Output of WAZA-ARI for GE LightSpeed 16 was compared with the dose values calculated similarly using MIRD and ICRP Adult Male phantoms. There are some differences due to the phantom configuration, demonstrating the significance of the dose calculation with appropriate phantoms. While the dose coefficients are currently available only for limited CT scanner models and scanning options, WAZA-ARI will be a useful tool in clinical practice when development is finalised.

  12. Global Mapping of Provisioning Ecosystem Services

    NASA Astrophysics Data System (ADS)

    Bingham, Lisa; Straatsma, Menno; Karssenberg, Derek

    2016-04-01

    Attributing monetary value to ecosystem services has become increasingly relevant as a basis for decision-making. There are a number of problematic aspects to the calculations, including consistency of the economic measure represented (e.g., purchasing price, production price) and determining which ecosystem subservices to include in a valuation. While several authors have proposed methods for calculating ecosystem services, and calculations are presented for global and regional studies, the calculations are mostly broken down into biomes and regions without showing spatially explicit results. The key for governmental decision-making is the ability to make spatially based decisions, because large spatial variation may exist within a biome or region. Our objective was to compute the spatial distribution of global ecosystem services based on 89 subservices. Initially, only the provisioning ecosystem service category is presented. The provisioning ecosystem service category was calculated using 6 ecosystem services (food, water, raw materials, genetic resources, medical resources, and ornaments) divided into 41 subservices. Global data sets were obtained from a variety of governmental and research agencies for the year 2005, the most data-complete recent year available. All data originated in either tabular or grid formats and were disaggregated to 10 km cell length grids. A lookup table with production values by subservice and country was disaggregated over the economic zone (either marine, land, or a combination) based on the spatial existence of the subservice (e.g. forest cover, crop land, non-arable land). Values express the production price in international dollars per hectare. The ecosystem service and ecosystem service category maps may be used to show spatial variation of a service within and between countries as well as to specifically show the values within specific regions (e.g. countries, continents), biomes (e.g.
coastal, forest), or hazardous regions (e.g. landslides, flood plains, war zones). A preliminary example of the provisioning ecosystem service category illustrates the valuation of deltaic regions and a second example illustrates the valuation of the subservice category of food production prices in flood zones. Future work of this research will spatially represent the calculations of the remaining three ecosystem service categories (regulating, habitat, cultural) and investigate the propagation of uncertainty of the input data to ecosystem service maps.

  13. WE-DE-201-11: Sensitivity and Specificity of Verification Methods Based On Total Reference Air Kerma (TRAK) Or On User Provided Dose Points for Graphically Planned Skin HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Devlin, P; Bhagwat, M

    Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning, representative of the prescribed depth, and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as a criterion for a true positive that >10% of plan dwells had a distance to the prescription dose >1 mm different from the prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
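    The sensitivity and specificity figures quoted above follow the standard definitions, which can be sketched as follows (a hypothetical helper, not the authors' code): each plan has a flag from the verification check and a ground-truth label from the dwell-distance criterion.

```python
def sensitivity_specificity(flagged, needs_review):
    """Sensitivity and specificity of a plan-flagging check.
    `flagged` and `needs_review` are parallel lists of booleans:
    whether the check flagged the plan, and whether it truly needed review."""
    tp = sum(f and t for f, t in zip(flagged, needs_review))
    fn = sum((not f) and t for f, t in zip(flagged, needs_review))
    tn = sum((not f) and (not t) for f, t in zip(flagged, needs_review))
    fp = sum(f and (not t) for f, t in zip(flagged, needs_review))
    return tp / (tp + fn), tn / (tn + fp)
```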

  14. A practical approach for calculating the settlement and storage capacity of landfills based on the space and time discretization of the landfilling process.

    PubMed

    Gao, Wu; Xu, Wenjie; Bian, Xuecheng; Chen, Yunmin

    2017-11-01

    The settlement of any position of the municipal solid waste (MSW) body during the landfilling process and after its closure has effects on the integrity of the internal structure and storage capacity of the landfill. This paper proposes a practical approach for calculating the settlement and storage capacity of landfills based on the space and time discretization of the landfilling process. The MSW body in the landfill was divided into independent column units, and the filling process of each column unit was determined by a simplified complete landfilling process. The settlement of a position in the landfill was calculated with the compression of each MSW layer in every column unit. Then, the simultaneous settlement of all the column units was integrated to obtain the settlement of the landfill and storage capacity of all the column units; this allowed to obtain the storage capacity of the landfill based on the layer-wise summation method. When the compression of each MSW layer was calculated, the effects of the fluctuation of the main leachate level and variation in the unit weight of the MSW on the overburdened effective stress were taken into consideration by introducing the main leachate level's proportion and the unit weight and buried depth curve. This approach is especially significant for MSW with a high kitchen waste content and landfills in developing countries. The stress-biodegradation compression model was used to calculate the compression of each MSW layer. A software program, Settlement and Storage Capacity Calculation System for Landfills, was developed by integrating the space and time discretization of the landfilling process and the settlement and storage capacity algorithms. The landfilling process of the phase IV of Shanghai Laogang Landfill was simulated using this software. 
The maximum error between the calculated and measured geometric volume of the landfill is only 2.02%, and the error between the calculated and measured accumulated filling weight is less than 5%. These results show that this approach calculates the settlement and storage capacity satisfactorily and reliably. In addition, the development of the elevation lines in the landfill sections created with the software demonstrates that the optimization of the design of the structures should be based on the settlement of the landfill. Since this practical approach can reasonably calculate the storage capacity of landfills and efficiently provide the development of the settlement at each landfilling stage, it can be used for the optimization of landfilling schemes and structural designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (C_q) and reaction efficiencies (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · C_q, which we call the efficiency-weighted C_q value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted C_q values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
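    The efficiency-weighted C_q and a log-scale expression ratio built from it can be sketched as below. This is a minimal sketch, assuming one target and one reference gene; it follows from the amplification relation N0 = Nq / E^Cq, so that log10(N0) differs between samples by -log10(E) · C_q. The statistical machinery of the paper (blocking, multiple reference genes) is not reproduced.

```python
import math

def weighted_cq(cq, efficiency):
    """Efficiency-weighted Cq: log10(E) * Cq (E = 2 for a perfectly
    efficient, doubling-per-cycle reaction)."""
    return math.log10(efficiency) * cq

def log10_expression_ratio(cq_treat, cq_ctrl, e_target,
                           cq_ref_treat, cq_ref_ctrl, e_ref):
    """log10 fold change of a target gene in treated vs. control samples,
    normalized to one reference gene, entirely in the log scale."""
    target = weighted_cq(cq_ctrl, e_target) - weighted_cq(cq_treat, e_target)
    reference = weighted_cq(cq_ref_ctrl, e_ref) - weighted_cq(cq_ref_treat, e_ref)
    return target - reference
```

    Keeping the quantity in the log scale until the end is the point of the method: statistical tests are applied to the efficiency-weighted C_q values, and only the final estimate is back-transformed to a fold change.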

  16. 21 CFR 868.1890 - Predictive pulmonary-function value calculator.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Predictive pulmonary-function value calculator. 868.1890 Section 868.1890 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... pulmonary-function value calculator. (a) Identification. A predictive pulmonary-function value calculator is...

  17. 21 CFR 868.1890 - Predictive pulmonary-function value calculator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Predictive pulmonary-function value calculator. 868.1890 Section 868.1890 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... pulmonary-function value calculator. (a) Identification. A predictive pulmonary-function value calculator is...

  18. Methods and systems for detecting abnormal digital traffic

    DOEpatents

    Goranson, Craig A [Kennewick, WA; Burnette, John R [Kennewick, WA

    2011-03-22

    Aspects of the present invention encompass methods and systems for detecting abnormal digital traffic by assigning characterizations of network behaviors according to knowledge nodes and calculating a confidence value based on the characterizations from at least one knowledge node and on weighting factors associated with the knowledge nodes. The knowledge nodes include a characterization model based on prior network information. At least one of the knowledge nodes should not be based on fixed thresholds or signatures. The confidence value includes a quantification of the degree of confidence that the network behaviors constitute abnormal network traffic.

  19. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737

  20. SWB-A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge

    USGS Publications Warehouse

    Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.

    2010-01-01

    A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
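    A single daily step of a Thornthwaite-Mather-style balance can be sketched as below. This is a deliberate simplification: the actual SWB code also includes a curve-number runoff component, snowmelt, and tabulated soil-moisture retention curves, none of which are reproduced; the linear drying term is an assumed stand-in for the exponential retention relation.

```python
def swb_daily_step(precip, pet, soil_moisture, capacity):
    """One daily step of a simplified Thornthwaite-Mather soil-water
    balance. All quantities in mm. Returns (new_soil_moisture, recharge):
    surplus above the soil's water capacity becomes recharge."""
    p_minus_pet = precip - pet
    if p_minus_pet >= 0.0:
        sm = soil_moisture + p_minus_pet
        recharge = max(0.0, sm - capacity)  # surplus drains to recharge
        sm = min(sm, capacity)
    else:
        # Drying day: actual ET limited by available moisture
        # (linear approximation of the retention relation).
        sm = max(0.0, soil_moisture + p_minus_pet * (soil_moisture / capacity))
        recharge = 0.0
    return sm, recharge
```

    Run on a daily climate series for each grid cell, the accumulated recharge values can then be summed to monthly or annual totals, as the SWB code does.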

  1. The Hyperfine Structure of the Ground State in the Muonic Helium Atoms

    NASA Astrophysics Data System (ADS)

    Aznabayev, D. T.; Bekbaev, A. K.; Korobov, V. I.

    2018-05-01

    Non-relativistic ionization energies of the muonic helium atoms 3He2+μ-e- and 4He2+μ-e- are calculated for the ground states. The calculations are based on the variational method of exponential expansion. Convergence of the variational energies is studied by increasing the number of basis functions N. This allows us to claim that the obtained energy values have 26 significant digits for the ground states. With the obtained results we calculate the hyperfine splitting of the muonic helium atoms.

  2. An Ab Initio Study of CuCO

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.

    1994-01-01

    Modified coupled-pair functional (MCPF) calculations and coupled cluster singles and doubles calculations, which include a perturbational estimate of the connected triples [CCSD(T)], yield a bent structure for CuCO, thus, supporting the prediction of a nonlinear structure based on density functional (DF) calculations. Our best estimate for the binding energy is 4.9 +/- 1.4 kcal/mol; this is in better agreement with experiment (6.0 +/- 1.2 kcal/mol) than the DF approach which yields a value (19.6 kcal/mol) significantly larger than experiment.

  3. Quantitative evaluation of benign and malignant vertebral fractures with diffusion-weighted MRI: what is the optimum combination of b values for ADC-based lesion differentiation with the single-shot turbo spin-echo sequence?

    PubMed

    Geith, Tobias; Schmidt, Gerwin; Biffar, Andreas; Dietrich, Olaf; Duerr, Hans Roland; Reiser, Maximilian; Baur-Melnyk, Andrea

    2014-09-01

The purpose of our study was to determine the optimum combination of b values for calculating the apparent diffusion coefficient (ADC) with a diffusion-weighted (DW) single-shot turbo spin-echo (TSE) sequence when differentiating acute benign from malignant vertebral body fractures. Twenty-six patients with osteoporotic fractures (mean age, 69 years; range, 31.5-86.2 years) and 20 patients with malignant vertebral fractures (mean age, 63.4 years; range, 24.7-86.4 years) were studied. T1-weighted, STIR, and T2-weighted sequences were acquired at 1.5 T. A DW single-shot TSE sequence with different b values (100, 250, 400, and 600 s/mm²) was applied. On the DW images for each evaluated fracture, an ROI was manually adapted to the area of hyperintense signal on STIR images and hypointense signal on T1-weighted images. For each ROI, nine different combinations of two, three, and four b values were used to calculate the ADC with a least-squares algorithm. The Student t test and Mann-Whitney U test were used to determine significant differences between benign and malignant fractures. ROC analysis and the Youden index were used to determine cutoff values giving the highest sensitivity and specificity for the different ADC values. Positive (PPV) and negative predictive values (NPV) were also determined. All calculated ADCs (except the combination of b = 400 and 600 s/mm²) showed statistically significant differences between benign and malignant vertebral body fractures, with benign fractures having higher ADCs than malignant ones. Higher b values yielded lower ADCs than low b values. The ADCs calculated with b = 100 and 400 s/mm² showed the highest AUC (0.85), and those calculated with b = 100, 250, and 400 s/mm² the second highest AUC (0.829). The Youden index, with equal weight given to sensitivity and specificity, suggests using an ADC calculated with b = 100, 250, and 400 s/mm² (cutoff ADC, < 1.7 × 10⁻³ mm²/s) to best diagnose malignancy (sensitivity, 85%; specificity, 84.6%; PPV, 81.0%; NPV, 88.0%). ADCs calculated from a combination of low to intermediate b values (b = 100, 250, and 400 s/mm²) provide the best diagnostic performance of a DW single-shot TSE sequence for differentiating acute benign from malignant vertebral body fractures.
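
The least-squares ADC fit mentioned above amounts to a log-linear regression of signal against b value, since the monoexponential model is S(b) = S₀·exp(-b·ADC). A minimal sketch with synthetic signals (the b values below mirror the study's; the "true" ADC is invented):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Fit ln S = ln S0 - b*ADC by linear least squares; returns ADC in mm^2/s."""
    b = np.asarray(b_values, dtype=float)
    y = np.log(np.asarray(signals, dtype=float))
    slope, _intercept = np.polyfit(b, y, 1)  # slope of ln S vs b is -ADC
    return -slope

# Synthetic signals for an assumed "benign-like" ADC of 1.9e-3 mm^2/s:
b_vals = [100, 250, 400]
true_adc = 1.9e-3
sig = [np.exp(-b * true_adc) for b in b_vals]
print(fit_adc(b_vals, sig))  # recovers 1.9e-3 up to floating point
```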

  4. Calculation and measurement of a neutral air flow velocity impacting a high voltage capacitor with asymmetrical electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malík, M., E-mail: michal.malik@tul.cz; Primas, J.; Kopecký, V.

    2014-01-15

This paper deals with the effects surrounding the phenomenon of a mechanical force generated on a high-voltage asymmetrical capacitor (the so-called Biefeld-Brown effect). A method to measure this force is described, and a formula to calculate its value is given. From this, the authors derive a formula characterising the velocity of the neutral air flow impacting an asymmetrical capacitor connected to high voltage; under normal circumstances this air flow lessens the generated force. The velocity is then measured using the Particle Image Velocimetry technique, and the theoretically calculated and experimentally measured values are compared. The authors found good agreement between the results of both approaches.

  5. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done so mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for an operationally useful display and interpretation of these data for a particular conjunction is given.
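
The resampling idea can be illustrated with a toy 2-D Pc computation: estimate Pc by Monte Carlo integration of the relative-position Gaussian over the combined hard-body circle, then redo it under perturbed covariances to obtain a spread of Pc values instead of one point estimate. All numbers below are illustrative assumptions, not any mission's data, and real resampling would perturb sizes and states as well.

```python
import numpy as np

def pc_2d(miss, cov, hbr, n=200_000, seed=0):
    """Monte Carlo 2-D Pc: fraction of sampled relative positions inside
    the combined hard-body radius."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(miss, cov, size=n)
    return float(np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr))

miss = [120.0, 0.0]                    # nominal miss components (m), invented
cov = np.diag([200.0**2, 100.0**2])    # relative position covariance (m^2)
hbr = 20.0                             # combined hard-body radius (m)

# Point estimate, then a crude sweep over covariance scale factors to show
# how the Pc value moves as the input covariance is varied:
point = pc_2d(miss, cov, hbr)
spread = [pc_2d(miss, cov * s, hbr, seed=1) for s in (0.5, 1.0, 2.0)]
print(point, spread)
```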

  6. Filter Tuning Using the Chi-Squared Statistic

    NASA Technical Reports Server (NTRS)

    Lilly-Salkowski, Tyler B.

    2017-01-01

This paper examines the use of the chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize filter performance in terms of covariance realism. The chi-squared statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the chi-squared statistic, calculated from EKF solutions, are assessed.
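
The statistic in question is the Mahalanobis form εᵀP⁻¹ε, where ε is the prediction error and P the predicted covariance; if P is realistic, the statistic follows a chi-squared distribution with dim(ε) degrees of freedom. A minimal sketch with invented numbers:

```python
import numpy as np

def chi_squared_stat(error, cov):
    """Mahalanobis statistic: error^T * cov^{-1} * error."""
    err = np.asarray(error, dtype=float)
    return float(err @ np.linalg.solve(np.asarray(cov, dtype=float), err))

err = [1.0, 2.0, 0.5]            # predicted-minus-observed position (km), invented
P = np.diag([1.0, 4.0, 0.25])    # predicted covariance (km^2), invented
print(chi_squared_stat(err, P))  # 1 + 1 + 1 = 3.0
```

Tuning then consists of checking that many such values, collected over time, match the expected chi-squared distribution rather than being systematically too large (overconfident covariance) or too small (inflated covariance).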

  7. Development and validation of effective real-time and periodic interinstrument comparison method for automatic hematology analyzers.

    PubMed

    Park, Sang Hyuk; Park, Chan-Jeoung; Kim, Mi-Jeong; Choi, Mi-Ok; Han, Min-Young; Cho, Young-Uk; Jang, Seongsoo

    2014-12-01

We developed an interinstrument comparison method for automatic hematology analyzers based on a 99th percentile coefficient of variation (CV) cutoff of daily means, and validated it with both patient samples and quality control (QC) materials. A total of 120 patient samples were obtained over 6 months. Data from the first 3 months were used to determine the 99th percentile CV cutoff values, and data from the last 3 months were used to calculate acceptable ranges and rejection rates. Identical analyses were performed with QC materials. Two-instrument comparisons were also performed, and the most appropriate allowable total error (ATE) values were determined. The rejection rates based on the 99th percentile cutoff values were within 10.00% and 9.30% for the patient samples and QC materials, respectively. For most items, the acceptable ranges of QC materials based on the currently used method were wider than those calculated from the 99th percentile CV cutoff values. In two-instrument comparisons, 34.8% of all comparisons failed, and 87.0% of the failed comparisons became acceptable when 4 SD rather than 3 SD was applied as the ATE value. The daily acceptable ranges derived from the 99th percentile CV cutoff can serve as a real-time interinstrument comparison method for both patient samples and QC materials. Applying 4 SD as the ATE value can significantly reduce unnecessary recalibration for the leukocyte differential counts, reticulocytes, and mean corpuscular volume. Copyright© by the American Society for Clinical Pathology.
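
One plausible reading of the cutoff construction is: express each day's deviation of the daily mean from the grand mean as a CV (%), take the 99th percentile of those CVs over the reference period as the cutoff, and convert it into an acceptable range around the grand mean. The exact definition used in the paper may differ; the sketch below, with synthetic data, only illustrates the arithmetic.

```python
import numpy as np

def cv_cutoff_and_range(daily_means, percentile=99):
    """99th-percentile CV cutoff of daily means and the acceptable range
    it implies around the grand mean (an assumed formulation)."""
    m = np.asarray(daily_means, dtype=float)
    grand = m.mean()
    cvs = np.abs(m - grand) / grand * 100.0       # per-day deviation as CV (%)
    cutoff = float(np.percentile(cvs, percentile))
    return cutoff, (grand * (1 - cutoff / 100), grand * (1 + cutoff / 100))

# Synthetic daily means of some analyte hovering around 100:
cutoff, (lo, hi) = cv_cutoff_and_range([99.0, 100.0, 101.0, 100.0])
print(cutoff, lo, hi)  # cutoff of 1 (%), acceptable range 99..101
```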

  8. A generally applicable lightweight method for calculating a value structure for tools and services in bioinformatics infrastructure projects.

    PubMed

    Mayer, Gerhard; Quast, Christian; Felden, Janine; Lange, Matthias; Prinz, Manuel; Pühler, Alfred; Lawerenz, Chris; Scholz, Uwe; Glöckner, Frank Oliver; Müller, Wolfgang; Marcus, Katrin; Eisenacher, Martin

    2017-10-30

Sustainable noncommercial bioinformatics infrastructures are a prerequisite to use and take advantage of the potential of big data analysis for research and economy. Consequently, funders, universities and institutes as well as users ask for a transparent value model for the tools and services offered. In this article, a generally applicable lightweight method is described by which bioinformatics infrastructure projects can estimate the value of the tools and services offered without determining the exact total cost of ownership. Five representative scenarios for value estimation, ranging from a rough estimate to a detailed breakdown of costs, are presented. To account for the diversity in bioinformatics applications and services, the notion of service-specific 'service provision units' is introduced, together with the factors influencing them and the main underlying assumptions for these 'value influencing factors'. Special attention is given to how to handle personnel costs and indirect costs such as electricity. Four examples are presented for the calculation of the value of tools and services provided by the German Network for Bioinformatics Infrastructure (de.NBI): one for tool usage, one for (Web-based) database analyses, one for consulting services and one for bioinformatics training events. Finally, from the discussed values, the costs of direct funding and the costs of payment of services by funded projects are calculated and compared. © The Author 2017. Published by Oxford University Press.

  9. Thermodynamic properties by Equation of state of liquid sodium under pressure

    NASA Astrophysics Data System (ADS)

    Li, Huaming; Sun, Yongli; Zhang, Xiaoxiao; Li, Mo

Isothermal bulk modulus, molar volume, and speed of sound of molten sodium are calculated from an equation of state of power-law form, with good precision compared with the experimental data. The calculated internal energy shows a minimum along the isothermal lines, as in previous results but with slightly larger values. The calculated values of isobaric heat capacity show an unexpected minimum under isothermal compression. The temperature and pressure derivatives of various thermodynamic quantities in liquid sodium are derived, and the contribution of entropy to the temperature and pressure derivatives of the isothermal bulk modulus is discussed. Expressions for the acoustical parameter and the nonlinearity parameter are obtained from thermodynamic relations based on the equation of state. Both parameters for liquid sodium are calculated under high pressure along the isothermal lines using the available thermodynamic data and numerical derivatives. By comparison with results from experimental measurements and quasi-thermodynamic theory, the calculated values are found to be very close at the melting point at ambient conditions. Several other thermodynamic quantities are also presented. Funding: Scientific Research Starting Foundation of Taiyuan University of Technology, Shanxi Provincial government ("100-talents program"), China Scholarship Council, and National Natural Science Foundation of China (NSFC) under Grant No. 11204200.

  10. TrackEtching - A Java based code for etched track profile calculations in SSNTDs

    NASA Astrophysics Data System (ADS)

    Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.

    2017-09-01

A Java code incorporating a user-friendly GUI has been developed to calculate the parameters of chemically etched track profiles of ion-irradiated solid state nuclear track detectors. Huygens' construction of wavefronts from secondary wavelets is used to numerically calculate the etched track profile as a function of the etching time. Provision for both normal and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of the track etch rate as a function of ion energy have been utilized; the best set of parameter values in these expressions can be obtained by comparison with available experimental data. The critical angle for track development can also be calculated with the present code.
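
The critical-angle calculation mentioned at the end follows from the standard SSNTD relation sin θc = V_B / V_T, with V_B the bulk etch rate and V_T the track etch rate; tracks dipping below θc are etched away. The rates below are illustrative, not values from the paper.

```python
import math

def critical_angle_deg(v_bulk, v_track):
    """Critical dip angle (degrees) below which no etched track develops,
    from the standard relation sin(theta_c) = V_B / V_T."""
    return math.degrees(math.asin(v_bulk / v_track))

print(critical_angle_deg(1.2, 2.4))  # about 30 degrees when V_T = 2 * V_B
```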

  11. Nutritive value of mule deer forages on ponderosa pine summer range in Arizona

    Treesearch

    P. J. Urness; D. J. Neff; R. K. Watkins

    1975-01-01

    Chemical analyses and apparent in vitro dry matter digestibilities were obtained for mule deer (Odocoileus hemionus) forages appearing in monthly diets. Relative values among individual forage species were calculated based upon nutrient contents and percentage composition in the diet. These data provide land managers with the means to more precisely assess some impacts...

  12. Design of a Sixteen Bit Pipelined Adder Using CMOS Bulk P-Well Technology.

    DTIC Science & Technology

    1984-12-01

node's current value. These rules are based on the assumption that the event that was last calculated reflects the latest configuration of the network...Lines beginning with - are treated as a comment. The parameter names and their default values are: ;configuration file for 'standard' MPC procem capm .2a

  13. A battery power model for the EUVE spacecraft

    NASA Technical Reports Server (NTRS)

    Yen, Wen L.; Littlefield, Ronald G.; Mclean, David R.; Tuchman, Alan; Broseghini, Todd A.; Page, Brenda J.

    1993-01-01

    This paper describes a battery power model that has been developed to simulate and predict the behavior of the 50 ampere-hour nickel-cadmium battery that supports the Extreme Ultraviolet Explorer (EUVE) spacecraft in its low Earth orbit. First, for given orbit, attitude, solar array panel and spacecraft load data, the model calculates minute-by-minute values for the net power available for charging the battery for a user-specified time period (usually about two weeks). Next, the model is used to calculate minute-by-minute values for the battery voltage, current and state-of-charge for the time period. The model's calculations are explained for its three phases: sunrise charging phase, constant voltage phase, and discharge phase. A comparison of predicted model values for voltage, current and state-of-charge with telemetry data for a complete charge-discharge cycle shows good correlation. This C-based computer model will be used by the EUVE Flight Operations Team for various 'what-if' scheduling analyses.
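
The minute-by-minute bookkeeping described above can be sketched as a simple state-of-charge integrator. This is a hedged toy, not the EUVE model: the charge efficiency, bus voltage, power level, and hard clipping at 0 and full capacity are all assumptions, and the model's distinct sunrise-charging, constant-voltage, and discharge phases are not reproduced.

```python
def step_soc(soc_ah, net_power_w, bus_voltage_v, capacity_ah=50.0,
             charge_eff=0.85, dt_min=1.0):
    """Advance battery state of charge (Ah) by one minute."""
    current_a = net_power_w / bus_voltage_v      # + charging, - discharging
    dt_h = dt_min / 60.0
    if current_a >= 0:
        soc_ah += current_a * dt_h * charge_eff  # charging incurs losses
    else:
        soc_ah += current_a * dt_h               # discharge taken at face value
    return min(max(soc_ah, 0.0), capacity_ah)    # clip to physical limits

soc = 40.0
for _ in range(60):                              # one hour at +280 W, 28 V bus
    soc = step_soc(soc, 280.0, 28.0)
print(round(soc, 2))  # 40 + 10 A * 1 h * 0.85 = 48.5 Ah
```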

  14. A Modified Formula of the First-order Approximation for Assessing the Contribution of Climate Change to Runoff Based on the Budyko Hypothesis

    NASA Astrophysics Data System (ADS)

    Liu, W.; Ning, T.; Han, X.

    2015-12-01

The climate elasticity based on Budyko curves has been widely used to evaluate hydrological responses to climate change. The Mezentsev-Choudhury-Yang formula is one of the representative analytical equations for Budyko curves. Previous research has mostly taken the variation of runoff (R) caused by changes in annual precipitation (P) and potential evapotranspiration (ET0) as the hydrological response to climate change and evaluated it with a first-order approximation in the form of a total differential, whose major components are the partial derivatives of R with respect to P and ET0, and, on this basis, the climate elasticity. Based on analytic derivation and the characteristics of Budyko curves, this study proposes a modified formula of the first-order approximation to reduce the approximation error: in calculating the partial derivatives and climate elasticity, the values of P and ET0 are taken as the sum of their base values and half their increments, respectively. The calculation was applied to 33 catchments of the Hai River basin in China, and the results showed that the mean absolute relative error of the approximated runoff change decreased from 8.4% to 0.4%, and the maximum value from 23.4% to 1.3%. Given the changes in P, ET0, and the controlling parameter (n), the modified formula can exactly quantify the contributions of climate fluctuation and underlying surface change to runoff. Taking the Murray-Darling basin in Australia as an example, the reductions of mean annual runoff caused by changes of P, ET0, and n from 1895-1996 to 1997-2006 were 2.6, 0.6, and 2.9 mm, respectively; their sum, 6.1 mm, was completely consistent with the observed runoff change. The modified formula of the first-order approximation proposed in this study can not only be used to assess the contributions of climate change to runoff, but can also be applied to similar problems governed by a functional relationship in hydrological and climate change studies.
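
The midpoint idea can be sketched numerically with the Mezentsev-Choudhury-Yang curve, E = P·ET0 / (Pⁿ + ET0ⁿ)^(1/n) and R = P − E: evaluate the partial derivatives at the base values plus half the increments, then form the total differential. The n, P, and ET0 numbers below are invented for illustration, and the derivatives are taken by central differences rather than analytically.

```python
def runoff(p, et0, n):
    """Runoff from the Mezentsev-Choudhury-Yang Budyko curve (mm)."""
    e = p * et0 / (p**n + et0**n) ** (1.0 / n)   # actual evapotranspiration
    return p - e

def runoff_change_modified(p, et0, dp, det0, n, h=1e-6):
    """First-order runoff change with partials evaluated at base + half increment."""
    pm, em = p + dp / 2.0, et0 + det0 / 2.0      # midpoint evaluation
    dr_dp = (runoff(pm + h, em, n) - runoff(pm - h, em, n)) / (2 * h)
    dr_de = (runoff(pm, em + h, n) - runoff(pm, em - h, n)) / (2 * h)
    return dr_dp * dp + dr_de * det0

p0, et0_0, n = 600.0, 1000.0, 2.0                # mm, mm, curve parameter (invented)
dp, det0 = -60.0, 50.0                           # assumed climate increments (mm)
approx = runoff_change_modified(p0, et0_0, dp, det0, n)
exact = runoff(p0 + dp, et0_0 + det0, n) - runoff(p0, et0_0, n)
print(approx, exact)  # the midpoint evaluation tracks the exact change closely
```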

  15. An organ-based approach to dose calculation in the assessment of dose-dependent biological effects of ionising radiation in Arabidopsis thaliana.

    PubMed

    Biermans, Geert; Horemans, Nele; Vanhoudt, Nathalie; Vandenhove, Hildegarde; Saenen, Eline; Van Hees, May; Wannijn, Jean; Vives i Batlle, Jordi; Cuypers, Ann

    2014-07-01

There is a need for a better understanding of biological effects of radiation exposure in non-human biota. Correct description of these effects requires a more detailed model of dosimetry than that available in current risk assessment tools, particularly for plants. In this paper, we propose a simple model for dose calculations in roots and shoots of Arabidopsis thaliana seedlings exposed to radionuclides in a hydroponic exposure setup. This model is used to compare absorbed doses for three radionuclides, ²⁴¹Am (α-radiation), ⁹⁰Sr (β-radiation) and ¹³³Ba (γ-radiation). Using established dosimetric calculation methods, dose conversion coefficient values were determined for each organ separately based on uptake data from the different plant organs. These calculations were then compared to the DCC values obtained with the ERICA tool under equivalent geometry assumptions. When comparing with our new method, the ERICA tool appears to overestimate internal doses and underestimate external doses in the roots for all three radionuclides, though each to a different extent. These observations might help to refine dose-response relationships. The DCC values for ⁹⁰Sr in roots are shown to deviate the most. A dose-effect curve for ⁹⁰Sr β-radiation has been established on biomass and photosynthesis endpoints, but no significant dose-dependent effects are observed. This indicates the need for use of endpoints at the molecular and physiological scale. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Correlation dimension of financial market

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao

    2017-05-01

In this paper, correlation dimension is applied to financial data analysis. We calculate the correlation dimensions of some real market data and find that the dimensions are significantly smaller than those of simulation data based on geometric Brownian motion. Based on the analysis of Chinese and US stock market data, the main results are as follows. First, by calculating three data sets for the Chinese and US markets, we find that large market volatility leads to a significant decrease in the dimensions. Second, based on 5-min stock price data, we find that the Chinese market dimension is significantly larger than that of the US market; this shows a significant difference between the two markets for high frequency data. Third, we randomly extract stocks from a stock set and calculate the correlation dimensions, and find that the average value of these dimensions is close to the dimension of the original set. In addition, we analyse the intuitive meaning of the relevant dimensions used in this paper, which are directly related to the average degree of the financial threshold network. The dimension measures the speed at which the average degree varies with the threshold value. A smaller dimension means that the rate of change is slower.
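
A minimal Grassberger-Procaccia style estimate, in the spirit of the correlation dimension used above (the paper's exact estimator may differ): the correlation integral C(r) counts point pairs closer than r, and the dimension is the slope of log C(r) versus log r. The data below are uniform points on a line, whose dimension is 1.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Slope of log C(r) vs log r for 1-D data; C(r) counts pairs closer than r."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.abs(pts[:, None] - pts[None, :])        # pairwise distances
    c = [float(np.sum(d < r) - n) for r in radii]  # exclude self-pairs
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)
dim = correlation_dimension(x, radii=[0.01, 0.02, 0.05, 0.1])
print(round(dim, 2))  # close to 1 for points on a line
```

For time series one would first embed the data in a delay-coordinate space; this sketch keeps only the pair-counting core.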

  17. Nanomechanical properties of phospholipid microbubbles.

    PubMed

    Buchner Santos, Evelyn; Morris, Julia K; Glynos, Emmanouil; Sboros, Vassilis; Koutsos, Vasileios

    2012-04-03

This study uses atomic force microscopy (AFM) force-deformation (F-Δ) curves to investigate for the first time the Young's modulus of a phospholipid microbubble (MB) ultrasound contrast agent. The stiffness of the MBs was calculated from the gradient of the F-Δ curves, and the Young's modulus of the MB shell was calculated by employing two different mechanical models based on the Reissner and elastic membrane theories. We found that the relatively soft phospholipid-based MBs behave inherently differently from stiffer, polymer-based MBs [Glynos, E.; Koutsos, V.; McDicken, W. N.; Moran, C. M.; Pye, S. D.; Ross, J. A.; Sboros, V. Langmuir 2009, 25 (13), 7514-7522] and that elastic membrane theory is the most appropriate of the models tested for evaluating the Young's modulus of the phospholipid shell, agreeing with values available for living cell membranes, supported lipid bilayers, and synthetic phospholipid vesicles. Furthermore, we show that AFM F-Δ curves in combination with a suitable mechanical model can assess the shell properties of phospholipid MBs. The "effective" Young's modulus of the whole bubble was also calculated by analysis using Hertz theory. This analysis yielded values which are in agreement with results from studies which used Hertz theory to analyze similar systems such as cells.
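
One common form of the Reissner thin-shell result relates point-load stiffness k to the shell modulus via k = 4Eh² / (R·sqrt(3(1 − ν²))), so E can be recovered from the measured F-Δ gradient. The sketch below inverts that relation; the stiffness, radius, thickness, and Poisson ratio are illustrative assumptions, not measurements from the study.

```python
import math

def youngs_modulus_reissner(stiffness, radius, thickness, poisson=0.5):
    """Shell Young's modulus (Pa) from point-load stiffness via the Reissner
    relation E = k * R * sqrt(3 * (1 - nu^2)) / (4 * h^2)."""
    return stiffness * radius * math.sqrt(3.0 * (1.0 - poisson**2)) / (4.0 * thickness**2)

k = 0.02     # N/m, AFM-measured stiffness (illustrative)
R = 1.5e-6   # m, microbubble radius (illustrative)
h = 4e-9     # m, shell thickness (illustrative)
print(f"{youngs_modulus_reissner(k, R, h):.3e} Pa")  # 7.031e+08 Pa
```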

  18. Determining Risk of Falls in Community Dwelling Older Adults: A Systematic Review and Meta-analysis Using Posttest Probability

    PubMed Central

    Fritz, Stacy; Middleton, Addie; Allison, Leslie; Wingood, Mariana; Phillips, Emma; Criss, Michelle; Verma, Sangita; Osborne, Jackie; Chui, Kevin K.

    2017-01-01

    Background: Falls and their consequences are significant concerns for older adults, caregivers, and health care providers. Identification of fall risk is crucial for appropriate referral to preventive interventions. Falls are multifactorial; no single measure is an accurate diagnostic tool. There is limited information on which history question, self-report measure, or performance-based measure, or combination of measures, best predicts future falls. Purpose: First, to evaluate the predictive ability of history questions, self-report measures, and performance-based measures for assessing fall risk of community-dwelling older adults by calculating and comparing posttest probability (PoTP) values for individual test/measures. Second, to evaluate usefulness of cumulative PoTP for measures in combination. Data Sources: To be included, a study must have used fall status as an outcome or classification variable, have a sample size of at least 30 ambulatory community-living older adults (≥65 years), and track falls occurrence for a minimum of 6 months. Studies in acute or long-term care settings, as well as those including participants with significant cognitive or neuromuscular conditions related to increased fall risk, were excluded. Searches of Medline/PubMED and Cumulative Index of Nursing and Allied Health (CINAHL) from January 1990 through September 2013 identified 2294 abstracts concerned with fall risk assessment in community-dwelling older adults. Study Selection: Because the number of prospective studies of fall risk assessment was limited, retrospective studies that classified participants (faller/nonfallers) were also included. Ninety-five full-text articles met inclusion criteria; 59 contained necessary data for calculation of PoTP. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) was used to assess each study's methodological quality. Data Extraction: Study design and QUADAS score determined the level of evidence. 
Data for calculation of sensitivity (Sn), specificity (Sp), likelihood ratios (LR), and PoTP values were available for 21 of 46 measures used as search terms. An additional 73 history questions, self-report measures, and performance-based measures were used in included articles; PoTP values could be calculated for 35. Data Synthesis: Evidence tables including PoTP values were constructed for 15 history questions, 15 self-report measures, and 26 performance-based measures. Recommendations for clinical practice were based on consensus. Limitations: Variations in study quality, procedures, and statistical analyses challenged data extraction, interpretation, and synthesis. There were insufficient data for calculation of PoTP values for 63 of 119 tests. Conclusions: No single test/measure demonstrated strong PoTP values. Five history questions, 2 self-report measures, and 5 performance-based measures may have clinical usefulness in assessing risk of falling on the basis of cumulative PoTP. Berg Balance Scale score (≤50 points), Timed Up and Go times (≥12 seconds), and 5 times sit-to-stand times (≥12 seconds) are currently the most evidence-supported functional measures to determine individual risk of future falls. Shortfalls identified during review will direct researchers to address knowledge gaps. PMID:27537070
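
The posttest-probability arithmetic behind the review is standard: convert the pretest probability to odds, multiply by the likelihood ratio (LR+ = Sn / (1 − Sp)), and convert back. The worked numbers below are an illustration only, not values drawn from any included study.

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio LR+ = Sn / (1 - Sp)."""
    return sensitivity / (1.0 - specificity)

def posttest_probability(pretest_p, lr):
    """Pretest probability -> odds, scaled by LR, back to probability."""
    odds = pretest_p / (1.0 - pretest_p)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

lr_pos = positive_lr(0.85, 0.846)       # illustrative Sn/Sp pair
p = posttest_probability(0.30, lr_pos)  # assumed 30% pretest fall risk
print(round(lr_pos, 2), round(p, 2))
```

Cumulative PoTP across several tests follows by chaining: the posttest probability of one measure serves as the pretest probability of the next (assuming the tests are independent).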

  19. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  20. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
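
The radial dose function g(r) and anisotropy function validated above are ingredients of the TG-43 dose-rate formalism. In the simplified point-source form (line-source geometry factors omitted), the dose rate is S_K·Λ·(r₀/r)²·g(r)·φ_an(r). The numbers below are illustrative, not SelectSeed data.

```python
def tg43_point_dose_rate(sk, dose_rate_const, r, g_r, phi_an, r0=1.0):
    """Point-source TG-43 dose rate (cGy/h) at distance r (cm):
    S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r)."""
    return sk * dose_rate_const * (r0 / r) ** 2 * g_r * phi_an

# e.g. an assumed 0.5 U seed with dose-rate constant 0.965 cGy/(h*U) at r = 2 cm,
# with illustrative g(r) and anisotropy values:
print(round(tg43_point_dose_rate(0.5, 0.965, 2.0, g_r=0.92, phi_an=0.95), 4))
```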

  1. Feasibility of MR-only proton dose calculations for prostate cancer radiotherapy using a commercial pseudo-CT generation method

    NASA Astrophysics Data System (ADS)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Landry, Guillaume; Belka, Claus; Parodi, Katia; Seevinck, Peter R.; Raaymakers, Bas W.; Kurz, Christopher

    2017-12-01

A magnetic resonance (MR)-only radiotherapy workflow can reduce cost, radiation exposure and uncertainties introduced by CT-MRI registration. A crucial prerequisite is generating the so-called pseudo-CT (pCT) images for accurate dose calculation and planning. Many pCT generation methods have been proposed in the scope of photon radiotherapy. This work aims at verifying for the first time whether a commercially available photon-oriented pCT generation method can be employed for accurate intensity-modulated proton therapy (IMPT) dose calculation. A retrospective study was conducted on ten prostate cancer patients. For pCT generation from MR images, a commercial solution for creating bulk-assigned pCTs, called MR for Attenuation Correction (MRCAT), was employed. The assigned pseudo-Hounsfield Unit (HU) values were adapted to yield an increased agreement with the reference CT in terms of proton range. Internal air cavities were copied from the CT to minimise inter-scan differences. CT- and MRCAT-based dose calculations for opposing beam IMPT plans were compared by gamma analysis and evaluation of clinically relevant target and organ at risk dose volume histogram (DVH) parameters. The proton range in beam's eye view (BEV) was compared using single field uniform dose (SFUD) plans. On average, a (2%, 2 mm) gamma pass rate of 98.4% was obtained using a 10% dose threshold after adaptation of the pseudo-HU values. Mean differences between CT- and MRCAT-based dose in the DVH parameters were below 1 Gy (<1.5%). The median proton range difference was 0.1 mm, with on average 96% of all BEV dose profiles showing a range agreement better than 3 mm. Results suggest that accurate MR-based proton dose calculation using an automatic commercial bulk-assignment pCT generation method, originally designed for photon radiotherapy, is feasible following adaptation of the assigned pseudo-HU values.
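
The (2%, 2 mm) gamma analysis used above can be illustrated in one dimension: for each reference point, gamma is the minimum over evaluated points of sqrt((dose difference / dose tolerance)² + (distance / distance tolerance)²), and a point passes if gamma ≤ 1. Real gamma analysis is performed on 3-D dose grids; the profiles below are synthetic.

```python
import numpy as np

def gamma_1d(x, ref, ev, dose_tol, dist_tol):
    """1-D gamma index of an evaluated profile ev against a reference ref,
    both sampled at positions x (mm)."""
    g = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (ev - di) / dose_tol        # dose-difference term at each point
        dx = (x - xi) / dist_tol         # distance-to-agreement term
        g[i] = np.sqrt(dd**2 + dx**2).min()
    return g

x = np.linspace(0, 50, 101)                      # mm, 0.5 mm spacing
ref = 60.0 * np.exp(-((x - 25) / 12.0) ** 2)     # Gy, synthetic dose profile
ev = 60.0 * np.exp(-((x - 25.4) / 12.0) ** 2)    # copy shifted by 0.4 mm
g = gamma_1d(x, ref, ev, dose_tol=0.02 * 60.0, dist_tol=2.0)
print(round(float(np.mean(g <= 1.0)) * 100, 1))  # pass rate: 100.0 (small shift)
```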

  2. 40 CFR 600.002-85 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... automobiles. (10) “Fuel Economy” means (i) the average number of miles traveled by an automobile or group of... equivalent petroleum-based fuel economy for an electrically powered automobile as determined by the Secretary..., the term means the equivalent petroleum-based fuel economy value as determined by the calculation...

  3. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W

    2008-01-01

Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pronounced L-star) values used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculation using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. Reanalysis of past conditions in the inner magnetosphere is used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.

  4. System Statement of Tasks of Calculating and Providing the Reliability of Heating Cogeneration Plants in Power Systems

    NASA Astrophysics Data System (ADS)

    Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.

    2018-01-01

A set of mathematical models for calculating the reliability indexes (RI) of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a required condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the RI of industrial heat supply is based on the concept of a technological safety margin for production processes. The rationed RI values of heat supply for communal consumers are defined from the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimization of schemes and parameters of combined heat and power plants and systems, as well as determining the efficiency of various redundancy methods to ensure specified reliability of power supply.

  5. 10 CFR 431.304 - Uniform test method for the measurement of energy consumption of walk-in coolers and walk-in...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-value of panels until January 1, 2015. (1) The R value shall be the 1/K factor multiplied by the thickness of the panel. (2) The K factor shall be based on ASTM C518 (incorporated by reference, see § 431.303). (3) For calculating the R value for freezers, the K factor of the foam at 20 degrees Fahrenheit...
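The R-value prescription quoted in this excerpt, R equals 1/K times panel thickness, reduces to a one-line calculation. The units below (K in Btu·in/(h·ft²·°F), thickness in inches) are the conventional ones for this test method, and the sample numbers are illustrative only:

```python
def panel_r_value(k_factor, thickness_in):
    """10 CFR 431.304 panel R-value: R = (1/K) * thickness.

    k_factor: thermal conductivity K in Btu·in/(h·ft²·°F);
    thickness_in: panel thickness in inches.
    """
    return thickness_in / k_factor
```

For example, a 4 in foam panel with K = 0.16 yields R = 25.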

  6. 10 CFR 431.304 - Uniform test method for the measurement of energy consumption of walk-in coolers and walk-in...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...-value of panels until January 1, 2015. (1) The R value shall be the 1/K factor multiplied by the thickness of the panel. (2) The K factor shall be based on ASTM C518 (incorporated by reference, see § 431.303). (3) For calculating the R value for freezers, the K factor of the foam at 20 degrees Fahrenheit...

  7. 10 CFR 431.304 - Uniform test method for the measurement of energy consumption of walk-in coolers and walk-in...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...-value of panels until January 1, 2015. (1) The R value shall be the 1/K factor multiplied by the thickness of the panel. (2) The K factor shall be based on ASTM C518 (incorporated by reference, see § 431.303). (3) For calculating the R value for freezers, the K factor of the foam at 20 degrees Fahrenheit...

  8. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

It has been observed that many pathological processes increase the elastic modulus of soft tissue compared to normal tissue. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression or strain imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused on tissue, a shear wave is generated in the lateral direction, and the shear wave velocity increases with the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging and strain imaging, the observed image contrast and Contrast to Noise Ratio were calculated for benign and malignant cancers. Observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, Student's unpaired t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode for representing benign lesions.
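The "true modulus contrast" derived from shear wave speeds follows from the standard soft-tissue relation E ≈ 3ρv²; assuming equal density in lesion and background, the contrast reduces to a squared velocity ratio. A generic sketch of that relation, not the paper's code:

```python
def youngs_modulus_kpa(v_m_per_s, rho_kg_m3=1000.0):
    """Young's modulus E = 3 * rho * v^2 for nearly incompressible soft
    tissue (shear modulus rho * v^2, E ~ 3x for Poisson ratio ~0.5), in kPa."""
    return 3.0 * rho_kg_m3 * v_m_per_s ** 2 / 1000.0

def modulus_contrast(v_lesion, v_background):
    """With equal densities, E_lesion / E_background = (v_lesion / v_background)^2."""
    return (v_lesion / v_background) ** 2
```

So a lesion whose shear wave travels twice as fast as in the background is roughly four times stiffer.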

  9. Group additivity calculations of the thermodynamic properties of unfolded proteins in aqueous solution: a critical comparison of peptide-based and HKF models.

    PubMed

    Hakin, A W; Hedwig, G R

    2001-02-15

    A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C(o)p.2, and volume, V2(0), at infinite dilution of any polypeptide. Calculations of the values of C(o)p.2 and V2(0) for several polypeptides have been carried out to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.

  10. Pricing of premiums for equity-linked life insurance based on joint mortality models

    NASA Astrophysics Data System (ADS)

    Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.

    2018-03-01

Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables alone appears less relevant in the calculation of premiums. To overcome this problem, we use a combined mortality model, determined in this study from the 2011 Indonesian Mortality Table, to obtain the probabilities of death and survival. In this research, we use a combined mortality model built from the Weibull, Inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we numerically calculate the value of the claims to be paid and the premium price. By calculating equity-linked life insurance premiums well, it is expected that no party will be disadvantaged by inaccuracy in the calculation results.
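One of the component laws named above, the Gompertz model, gives closed-form survival probabilities from its hazard μ(x) = B·cˣ; probabilities of this kind feed the premium calculation. A minimal sketch, with illustrative parameter values (B and c are not the paper's fitted constants):

```python
import math

def gompertz_survival(x, t, B=0.0003, c=1.07):
    """t-year survival probability for a life aged x under a Gompertz hazard
    mu(age) = B * c**age:  p = exp(-(B / ln c) * c**x * (c**t - 1))."""
    return math.exp(-(B / math.log(c)) * c ** x * (c ** t - 1.0))
```

The corresponding probability of death within t years is simply one minus this value.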

  11. Research on volume metrology method of large vertical energy storage tank based on internal electro-optical distance-ranging method

    NASA Astrophysics Data System (ADS)

    Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang

    2018-01-01

A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key processing algorithms for the point cloud data, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are automatically calculated by computing the cross-sectional area along the horizontal direction and integrating in the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m³ is selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements, and the feasibility and effectiveness of the method are demonstrated.
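The volume computation described, horizontal cross-sectional areas integrated vertically, can be sketched as a trapezoidal integration over per-height radii estimated from the point cloud. A minimal sketch, assuming circular cross-sections and an evenly spaced height grid:

```python
import numpy as np

def tank_volume_m3(radii_m, dz_m):
    """Integrate A(z) = pi * r(z)^2 over height with the trapezoidal rule.

    radii_m: fitted tank radius at each height step (from the point cloud);
    dz_m: vertical spacing between height steps, in metres.
    """
    areas = np.pi * np.asarray(radii_m, dtype=float) ** 2
    return float(np.trapz(areas, dx=dz_m))
```

A partial-fill volume follows from integrating only the slices below the measured liquid level.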

  12. Dynamic Magnification Factor in a Box-Shape Steel Girder

    NASA Astrophysics Data System (ADS)

    Rahbar-Ranji, A.

    2014-01-01

The dynamic effect of moving loads on structures is treated as a dynamic magnification factor when resonance is not imminent. Studies have shown that magnification factors calculated from field measurements can be higher than the values specified in design codes. The main aim of the present paper is to investigate the applicability and accuracy of a rule-based expression for calculating the dynamic magnification factor for lifting appliances used in the marine industry. A steel box-shaped girder of a crane is considered, and transient dynamic analysis is carried out using the computer code ANSYS. The dynamic magnification factor is calculated for different loading conditions and compared with the rule-based equation. The effects of lifting speed, acceleration, damping ratio and cargo position are examined. It is found that the rule-based expression underestimates the dynamic magnification factor.

  13. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    PubMed

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential values of ABC methodology in health care are derived from the more accurate cost calculation compared to the traditional step-down costing, and the potentials to evaluate quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients with surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify the missing or inappropriate clinical procedures. We found that ABC methodology was able to accurately calculate costs and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.

  14. Evaluating the effect of human activity patterns on air pollution exposure using an integrated field-based and agent-based modelling framework

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Beelen, Rob M. J.; de Bakker, Merijn P.; Karssenberg, Derek

    2015-04-01

Constructing spatio-temporal numerical models to support risk assessment, such as assessing human exposure to air pollution, often requires the integration of field-based and agent-based modelling approaches. Continuous environmental variables such as air pollution are best represented using the field-based approach, which considers phenomena as continuous fields having attribute values at all locations. When calculating human exposure to such pollutants it is, however, preferable to consider the population as a set of individuals, each with a particular activity pattern. This allows accounting for the spatio-temporal variation in a pollutant along the space-time paths travelled by individuals, determined, for example, by home and work locations, the road network, and travel times. Modelling this activity pattern requires an agent-based or individual-based modelling approach. In general, field- and agent-based models are constructed with separate software tools, while both approaches should interact and preferably be combined into one modelling framework, which would allow efficient and effective implementation of models by domain specialists. To overcome this lack of integrated modelling frameworks, we aim at developing concepts and software for an integrated field-based and agent-based modelling framework. Concepts merging field- and agent-based modelling were implemented by extending PCRaster (http://www.pcraster.eu), a field-based modelling library implemented in C++, with components for 1) representation of discrete, mobile agents, 2) spatial networks and algorithms, by integrating the NetworkX library (http://networkx.github.io), allowing calculation of e.g. shortest routes or total transport costs between locations, and 3) functions for field-network interactions, allowing assignment of field-based attribute values to networks (i.e. as edge weights), such as aggregated or averaged concentration values. We demonstrate the approach by using six land use regression (LUR) models developed in the ESCAPE (European Study of Cohorts for Air Pollution Effects) project. These models calculate several air pollutants (e.g. NO2, NOx, PM2.5) for the entire Netherlands at a high (5 m) resolution. Using these air pollution maps, we compare exposure of individuals calculated at the x, y location of their home, their work place, and aggregated over the close surroundings of these locations. In addition, total exposure is accumulated over daily activity patterns, summing exposure at home, at the work place, and while travelling between home and workplace, by routing individuals over the Dutch road network using the shortest route. Finally, we illustrate how routes can be calculated with the minimum total exposure (instead of the shortest distance).
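The closing idea, routing individuals over a network by minimum total exposure rather than shortest distance, is just a shortest-path computation with concentration-weighted edges. A minimal standard-library sketch (the toy network and weights are invented; the framework itself uses NetworkX over the Dutch road network):

```python
import heapq

def best_route(graph, src, dst):
    """Dijkstra over graph = {node: [(neighbour, edge_weight), ...]}.

    Returns (total_weight, path). Edge weights can be distances or
    field-derived exposure values assigned to the edges.
    """
    queue = [(0.0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Same toy topology weighted two ways: travel distance vs. aggregated concentration.
distance = {"home": [("A", 1.0), ("B", 2.0)], "A": [("work", 1.0)],
            "B": [("work", 2.0)], "work": []}
exposure = {"home": [("A", 10.0), ("B", 1.0)], "A": [("work", 10.0)],
            "B": [("work", 1.0)], "work": []}
```

On this toy network the shortest-distance route (via A) and the minimum-exposure route (via B) differ, which is exactly the contrast the study illustrates.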

  15. Bolus Guide: A Novel Insulin Bolus Dosing Decision Support Tool Based on Selection of Carbohydrate Ranges

    PubMed Central

    Shapira, Gali; Yodfat, Ofer; HaCohen, Arava; Feigin, Paul; Rubin, Richard

    2010-01-01

Background Optimal continuous subcutaneous insulin infusion (CSII) therapy emphasizes the relationship between insulin dose and carbohydrate consumption. One widely used tool (the bolus calculator) requires the user to enter discrete carbohydrate values; however, many patients might not estimate carbohydrates accurately. This study assessed carbohydrate estimation accuracy in type 1 diabetes CSII users and compared simulated blood glucose (BG) outcomes using the bolus calculator and the “bolus guide,” an alternative system based on ranges of carbohydrate load. Methods Patients (n = 60) estimated the carbohydrate load of a representative sample of meals of known carbohydrate value. The estimation error distribution [coefficient of variation (CV)] was the basis for a computer simulation (n = 1.6 million observations) of insulin recommendations for the bolus guide and bolus calculator, translated into outcome blood glucose (OBG) ranges (≤60, 61–200, >200 mg/dl). Patients (n = 30) completed questionnaires assessing satisfaction with the bolus guide. Results The CV of typical meals ranged from 27.9% to 44.5%. The percentages of simulated OBG values for the calculator and the bolus guide in the ≤60 mg/dl range were 20.8% and 17.2%, respectively, and 13.8% and 15.8%, respectively, in the >200 mg/dl range. The mean and median scores of all bolus guide satisfaction items and of ease of learning and use were 4.17 and 4.2, respectively (out of 5.0). Conclusion The bolus guide recommendation based on carbohydrate range selection is substantially similar to the calculator based on carbohydrate point estimation and appears to be highly accepted by type 1 diabetes insulin pump users. PMID:20663453
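The contrast between the two tools can be sketched in a few lines: a standard bolus calculator doses from a carbohydrate point estimate, while a range-based guide can dose from the midpoint of the selected range. The formula and the parameter values (ICR, ISF, target) below are the textbook ones, chosen for illustration; this is not the device's proprietary algorithm:

```python
def bolus_dose(carbs_g, bg_mgdl, target_mgdl=110.0,
               icr_g_per_u=10.0, isf_mgdl_per_u=40.0):
    """Standard bolus: meal component (carbs / ICR) plus correction
    ((BG - target) / ISF), floored at zero units."""
    return max(0.0, carbs_g / icr_g_per_u + (bg_mgdl - target_mgdl) / isf_mgdl_per_u)

def bolus_guide_dose(carb_range_g, bg_mgdl, **kwargs):
    """Range-based variant: dose from the midpoint of the selected carb range
    (a simplification of the paper's range-based recommendation)."""
    low, high = carb_range_g
    return bolus_dose((low + high) / 2.0, bg_mgdl, **kwargs)
```

When the true carbohydrate load sits near the middle of the selected range, the two recommendations coincide, which is consistent with the similar simulated outcomes reported above.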

  16. Effect of temperature on the acid-base properties of the alumina surface: microcalorimetry and acid-base titration experiments.

    PubMed

    Morel, Jean-Pierre; Marmier, Nicolas; Hurel, Charlotte; Morel-Desrosiers, Nicole

    2006-06-15

Sorption reactions on natural or synthetic materials that can attenuate the migration of pollutants in the geosphere could be affected by temperature variations. Nevertheless, most of the theoretical models describing sorption reactions are at 25 °C. To check these models at different temperatures, experimental data such as the enthalpies of sorption are required. Highly sensitive microcalorimeters can now be used to determine the heat effects accompanying the sorption of radionuclides on oxide-water interfaces, but enthalpies of sorption cannot be extracted from microcalorimetric data without a clear knowledge of the thermodynamics of protonation and deprotonation of the oxide surface. However, the values reported in the literature show large discrepancies, and one must conclude that, surprisingly, this fundamental problem of proton binding is not yet resolved. We have thus undertaken to measure by titration microcalorimetry the heat effects accompanying proton exchange at the alumina-water interface at 25 °C. Based on (i) the surface site speciation provided by a surface complexation model (built from acid-base titrations at 25 °C) and (ii) the results of the microcalorimetric experiments, calculations were made to extract the enthalpy changes associated with the first and second deprotonations of the alumina surface, respectively. The values obtained are ΔH1 = 80 ± 10 kJ mol⁻¹ and ΔH2 = 5 ± 3 kJ mol⁻¹. In a second step, these enthalpy values were used to calculate the alumina surface acidity constants at 50 °C via the van't Hoff equation. A theoretical titration curve at 50 °C was then calculated and compared to the experimental alumina surface titration curve. Good agreement between the predicted acid-base titration curve and the experimental one was observed.
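The extrapolation step, taking an acidity constant from 25 °C to 50 °C with the measured enthalpy via the van't Hoff equation, looks like this. Only ΔH₁ = 80 kJ mol⁻¹ comes from the abstract; the K₁ value below is a placeholder, and ΔH is assumed temperature-independent over this range:

```python
import math

R_J_PER_MOL_K = 8.314  # gas constant

def k_at_t2(k1, t1_k, t2_k, delta_h_j_per_mol):
    """Van't Hoff: ln(K2 / K1) = -(dH / R) * (1/T2 - 1/T1)."""
    exponent = -(delta_h_j_per_mol / R_J_PER_MOL_K) * (1.0 / t2_k - 1.0 / t1_k)
    return k1 * math.exp(exponent)
```

With ΔH₁ = 80 kJ mol⁻¹, the first deprotonation constant grows by roughly an order of magnitude between 298 K and 323 K, i.e. the surface deprotonates more readily at 50 °C.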

  17. DFT calculation of pKa’s for dimethoxypyrimidinylsalicylic based herbicides

    NASA Astrophysics Data System (ADS)

    Delgado, Eduardo J.

    2009-03-01

Dimethoxypyrimidinylsalicylic-derived compounds show potent herbicidal activity as a result of the inhibition of acetohydroxyacid synthase, the first common enzyme in the biosynthetic pathway of the branched-chain amino acids (valine, leucine and isoleucine) in plants, bacteria and fungi. Despite its practical importance, this family of compounds has been poorly characterized from a physico-chemical point of view. For instance, their pKa's have not been reported previously, either experimentally or theoretically. In this study, the acid-dissociation constants of 39 dimethoxypyrimidinylsalicylic-derived herbicides are calculated by DFT methods at the B3LYP/6-31G(d,p) level of theory. The calculated values are validated by two checking tests based on the Hammett equation.

  18. Betavoltaic battery performance: Comparison of modeling and experiment.

    PubMed

    Svintsov, A A; Krasnov, A A; Polikarpov, M A; Polyakov, A Y; Yakimov, E B

    2018-07-01

A verification of the Monte Carlo simulation software for the prediction of the short-circuit current value is carried out using a Ni-63 source with an activity of 2.7 mCi/cm² and converters based on Si p-i-n diodes and SiC and GaN Schottky diodes. A comparison of experimentally measured and calculated short-circuit current values confirms the validity of the proposed modeling method, with the difference between the measured and calculated short-circuit current values not exceeding 25% and the error in the predicted output power values being below 30%. The effects of the protective layer formed on the Ni-63 radioactive film and of the passivating film on the semiconductor converters on the energy deposited inside the converters are estimated. The maximum attainable betavoltaic cell parameters are also estimated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography.

    PubMed

    Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2015-01-01

To derive filtered tungsten anode X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVL) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated HVL values and the experimental HVL values was 4%. The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography.
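HVL comparisons like the one above rest on two small relations: an effective attenuation coefficient recovered from a transmission measurement, and HVL = ln 2 / μ. A generic sketch (for a polychromatic mammography beam this μ is only an effective value, and the numbers are illustrative):

```python
import math

def mu_from_transmission(i_over_i0, thickness_mm):
    """Effective linear attenuation coefficient from the transmitted fraction
    I/I0 through an absorber of known thickness: mu = -ln(I/I0) / t."""
    return -math.log(i_over_i0) / thickness_mm

def hvl_mm(mu_per_mm):
    """Half-value layer: absorber thickness that halves the beam, ln(2) / mu."""
    return math.log(2.0) / mu_per_mm
```

In practice the experimental HVL is found by interpolating measured air kerma versus added aluminium thickness to the half-intensity point.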

  20. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating the daily temperature as the average of the minimum and maximum daily readings leads to an overestimation of the daily values by 10% or more when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (5-10% fewer trends detected in comparison with the reference data).
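The bias being quantified is easy to reproduce: for any diurnal cycle that is not symmetric about its midpoint, (Tmin + Tmax)/2 differs from the mean of the hourly readings. A minimal sketch with invented hourly data:

```python
import numpy as np

def daily_means(hourly_temps):
    """Return (true daily mean of hourly readings, (Tmin + Tmax) / 2)."""
    t = np.asarray(hourly_temps, dtype=float)
    return float(t.mean()), float((t.min() + t.max()) / 2.0)
```

A day spending 18 hours near 0 °C and 6 hours near 10 °C has a true mean of 2.5 °C, but the min/max average reports 5.0 °C, overestimating the daily value.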

  1. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography*

    PubMed Central

    Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2015-01-01

Objective Derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVL) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Results Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated HVL values and the experimental HVL values was 4%. Conclusion The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553

  2. Cardiac Mean Electrical Axis in Thoroughbreds—Standardization by the Dubois Lead Positioning System

    PubMed Central

    da Costa, Cássia Fré; Samesima, Nelson; Pastore, Carlos Alberto

    2017-01-01

Background Different methodologies for electrocardiographic acquisition in horses have been used since the first ECG recordings in equines were reported early in the last century. This study aimed to determine the best ECG electrode positioning method and the most reliable calculation of the mean cardiac axis (MEA) in equines. Materials and Methods We evaluated the electrocardiographic profile of 53 clinically healthy Thoroughbreds, 38 males and 15 females, with ages ranging from 2 to 7 years old, all reared at the São Paulo Jockey Club, Brazil. Two ECG tracings were recorded from each animal, one using the Dubois lead positioning system and the second using the base-apex method. QRS complex amplitudes were analyzed to obtain MEA values in the frontal plane for each of the two electrode positioning methods, using two calculation approaches: the first by Tilley tables and the second by trigonometric calculation. Results were compared between the two methods. Results There was a significant difference in cardiac axis values: the MEA obtained by the Tilley tables was +135.1° ± 90.9° vs. -81.1° ± 3.6° (p<0.0001), and by trigonometric calculation it was -15.0° ± 11.3° vs. -79.9° ± 7.4° (p<0.0001), for base-apex and Dubois, respectively. Furthermore, the Dubois method presented a small range of variation without statistical or clinical difference by either calculation mode, while there was wide variation in the base-apex method. Conclusion Dubois improved centralization of the Thoroughbreds' hearts, yielding what seems to be the true frontal plane. By either calculation mode, it was the most reliable methodology for obtaining the cardiac mean electrical axis in equines. PMID:28095442
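The trigonometric calculation mode contrasted with the Tilley tables amounts to taking the arctangent of the net QRS amplitudes in two orthogonal frontal-plane leads (lead I at 0°, aVF at +90°). A generic sketch of that calculation, not the study's exact procedure:

```python
import math

def mean_electrical_axis_deg(net_qrs_lead_i, net_qrs_avf):
    """Frontal-plane mean electrical axis in degrees from net QRS amplitudes
    (R wave minus S wave) in leads I and aVF: axis = atan2(aVF, I)."""
    return math.degrees(math.atan2(net_qrs_avf, net_qrs_lead_i))
```

Using atan2 rather than a plain arctangent keeps the axis in the correct quadrant when the lead I amplitude is negative.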

  3. Earthquake hazard analysis for the different regions in and around Ağrı

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

We investigated earthquake hazard parameters for the Eastern part of Turkey by determining the a and b parameters in the Gutenberg–Richter magnitude–frequency relationship. For this purpose, the study area is divided into seven different source zones based on their tectonic and seismotectonic regimes. The database used in this work was taken from different sources and catalogues such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK) for the instrumental period. We calculated the a value and the b value, which is the slope of the Gutenberg–Richter frequency–magnitude relationship, using the maximum likelihood (ML) method. We also estimated the mean return periods, the most probable maximum magnitude in a period of t years, and the probability of occurrence of an earthquake of magnitude ≥ M during a time span of t years. We used the Zmap software to calculate these parameters. The lowest b value was calculated in Region 1, covering the Cobandede Fault Zone. We obtained the highest a value in Region 2, covering the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which shows the largest value (87%) for an earthquake with magnitude greater than or equal to 6.0. The mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and we determined the highest value around the Cobandede Fault Zone. According to these parameters, Region 1, covering the Cobandede Fault Zone, is the most dangerous area in the Eastern part of Turkey.
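The ML b-value and the derived hazard numbers follow standard Gutenberg–Richter formulas; a sketch using Aki's maximum-likelihood estimator and the return period implied by log10 N = a − b·M (the catalogue numbers in the test are invented):

```python
import math

def b_value_ml(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value for a catalogue complete at or
    above the completeness magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
    above = [m for m in magnitudes if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

def mean_return_period_years(a, b, magnitude, catalogue_years):
    """From log10 N = a - b*M, where N counts events of magnitude >= M over
    the catalogue span, the mean return period is catalogue_years / N."""
    n_events = 10.0 ** (a - b * magnitude)
    return catalogue_years / n_events
```

A low b-value (relatively more large events) combined with a short return period is what marks Region 1 as the most hazardous zone above.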

  4. Assessing value-based health care delivery for haemodialysis.

    PubMed

    Parra, Eduardo; Arenas, María Dolores; Alonso, Manuel; Martínez, María Fernanda; Gamen, Ángel; Aguarón, Juan; Escobar, María Teresa; Moreno-Jiménez, José María; Alvarez-Ude, Fernando

    2017-06-01

Disparities in haemodialysis outcomes among centres have been well documented. Moreover, previous attempts to assess haemodialysis results have been based on non-comprehensive methodologies. This study aimed to develop a comprehensive methodology for assessing haemodialysis centres, based on the value of health care. The value of health care is defined as the patient benefit from a specific medical intervention per monetary unit invested (Value = Patient Benefit/Cost). This study assessed the value of health care and ranked different haemodialysis centres. A nephrology quality management group identified the criteria for the assessment. An expert group composed of stakeholders (patients, clinicians and managers) agreed on the weighting of each variable, considering values and preferences. Multi-criteria methodology was used to analyse the data. Four criteria and their weights were identified: evidence-based clinical performance measures = 43 points; yearly mortality = 27 points; patient satisfaction = 13 points; and health-related quality of life = 17 points (100-point scale). The evidence-based clinical performance measures included five sub-criteria, with respective weights: dialysis adequacy; haemoglobin concentration; mineral and bone disorders; type of vascular access; and hospitalization rate. The patient benefit was determined from co-morbidity-adjusted results and the corresponding weights. The cost of each centre was calculated as the average amount expended per patient per year. The study was conducted in five centres (1-5). After adjusting for co-morbidity, the value of health care was calculated and the centres were ranked. A multi-way sensitivity analysis that considered different weights (10-60% changes) and costs (changes of 10% in direct and 30% in allocated costs) showed that the methodology was robust. The rankings 4-5-3-2-1 and 4-3-5-2-1 were observed in 62.21% and 21.55% of simulations, respectively, when weights were varied by 60%.
Value assessments may integrate divergent stakeholder perceptions, create a context for improvement and aid in policy-making decisions. © 2015 John Wiley & Sons, Ltd.
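
    The weighted-sum value calculation described above can be sketched as follows. The criterion weights are the ones reported (43/27/13/17 on a 100-point scale); the per-centre scores, costs and criterion keys are hypothetical illustrations, not the study's data:

```python
# Multi-criteria value ranking sketch. Weights are the reported ones;
# centre scores (normalized 0-1) and annual costs are hypothetical.
WEIGHTS = {"clinical": 43, "mortality": 27, "satisfaction": 13, "hrqol": 17}

def value_of_care(scores: dict, cost_per_patient: float) -> float:
    """Value = weighted patient benefit / cost per patient-year."""
    benefit = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    return benefit / cost_per_patient

# Hypothetical centres: per-criterion scores and cost per patient per year.
centres = {
    "A": ({"clinical": 0.82, "mortality": 0.75, "satisfaction": 0.90, "hrqol": 0.70}, 46000.0),
    "B": ({"clinical": 0.88, "mortality": 0.80, "satisfaction": 0.80, "hrqol": 0.80}, 52000.0),
}
ranking = sorted(centres, key=lambda k: value_of_care(*centres[k]), reverse=True)
print(ranking)
```

    A sensitivity analysis like the paper's can then be run by perturbing `WEIGHTS` and the costs and re-ranking.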

  5. Determination of redox potentials for the Watson-Crick base pairs, DNA nucleosides, and relevant nucleoside analogues.

    PubMed

    Crespo-Hernandez, Carlos E; Close, David M; Gorb, Leonid; Leszczynski, Jerzy

    2007-05-17

    Redox potentials for the DNA nucleobases and nucleosides, various relevant nucleoside analogues, Watson-Crick base pairs, and seven organic dyes are presented based on DFT/B3LYP/6-31++G(d,p) and B3LYP/6-311+G(2df,p)//B3LYP/6-31+G* levels of calculation. The values are determined from an experimentally calibrated set of equations that correlate the vertical ionization (electron affinity) energy of 20 organic molecules with their experimental reversible oxidation (reduction) potential. Our results are in good agreement with those estimated experimentally for the DNA nucleosides in acetonitrile solutions (Seidel et al. J. Phys. Chem. 1996, 100, 5541). We have found that nucleosides with anti conformation exhibit lower oxidation potentials than the corresponding syn conformers. The lowering in the oxidation potential is due to the formation of an intramolecular hydrogen-bonding interaction between the 5'-OH group of the sugar and the N3 of the purine bases or C2=O of the pyrimidine bases in the syn conformation. Pairing of adenine or guanine with its complementary pyrimidine base decreases its oxidation potential by 0.15 or 0.28 V, respectively. The calculated energy difference between the oxidation potential for the G.C base pair and that of the guanine base is in good agreement with the experimental value estimated recently (0.34 V: Caruso, T.; et al. J. Am. Chem. Soc. 2005, 127, 15040). The complete and consistent set of reversible redox values determined in this work for the DNA constituents is expected to be of considerable value to those studying charge and electronic energy transfer in DNA.
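
    The experimentally calibrated correlation underlying these values amounts to a linear fit of computed ionization energies against measured oxidation potentials, which then predicts a potential for a new molecule from its computed energy alone. A minimal sketch with an entirely hypothetical calibration set (the paper's 20-molecule set and fitted coefficients are not reproduced here):

```python
# Linear calibration of computed vertical ionization energies (eV)
# against experimental oxidation potentials (V). Data are hypothetical.
import statistics

def fit_line(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration set: (computed IP in eV, measured E_ox in V).
ip = [7.2, 7.6, 8.0, 8.4, 8.8]
eox = [1.05, 1.30, 1.55, 1.80, 2.05]
slope, intercept = fit_line(ip, eox)

# Predict an oxidation potential from a computed IP of 8.2 eV.
print(round(slope * 8.2 + intercept, 2))
```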

  6. Energy hyperspace for stacking interaction in AU/AU dinucleotide step: Dispersion-corrected density functional theory study.

    PubMed

    Mukherjee, Sanchita; Kailasam, Senthilkumar; Bansal, Manju; Bhattacharyya, Dhananjay

    2014-01-01

    Double helical structures of DNA and RNA are mostly determined by base pair stacking interactions, which give them the base sequence-directed features, such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fiber diffraction geometries or optimized structures from ab initio calculations, which lack the variation in geometry needed to comment on the rather unusual large roll values observed in the AU/AU base pair step in crystal structures of RNA double helices. We have generated a stacking energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll and slide, which were chosen via statistical analysis as maximally sequence dependent. Corresponding energy contours were constructed by several quantum chemical methods including dispersion corrections. This analysis established the most suitable methods for stacked base pair systems, despite the limit that the number of atoms in a base pair step places on the level of theory that can be employed. All the methods predict a negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric-clash-based rule. Successive base pairs in RNA are always linked by a sugar-phosphate backbone with C3'-endo sugars, which demands a C1'-C1' distance of about 5.4 Å along the chains. Adding an energy penalty term for deviation of the C1'-C1' distance from this mean value to the recent DFT-D functionals, specifically ωB97X-D, appears to predict a reliable energy contour for the AU/AU step. Such a distance-based penalty also improves the energy contours for the other purine-pyrimidine sequences. © 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014.

  7. Rationalising pKa shifts in Bacillus circulans xylanase with computational studies.

    PubMed

    Xiao, Kela; Yu, Haibo

    2016-11-09

    Bacillus circulans xylanase (BcX), a family 11 glycoside hydrolase, catalyses the hydrolysis of xylose polymers with a net retention of stereochemistry. Glu172 in BcX is believed to act as a general acid by protonating the aglycone during glycosylation, and then as a general base to facilitate the deglycosylation step. The key to the dual role of this general acid/base lies in its protonation states, which depend on its intrinsic pKa value and the specific environment it resides in. To fully understand the detailed molecular features in BcX that establish the dual role of Glu172, we present a combined study based on both atomistic simulations and empirical models to calculate pKa shifts for the general acid/base Glu172 in BcX at different functional states. Its pKa values and those of nearby residues, obtained from QM/MM free energy calculations, MCCE and PROPKA, show good agreement with available experimental data. Additionally, our study provides insights into the effects of structural and electrostatic perturbations caused by mutations and chemical modifications, suggesting that the local solvation environment and mutagenesis of the residues adjacent to Glu172 establish its dual role during hydrolysis. The strengths and limitations of various methods for calculating pKa values and pKa shifts are also discussed.

  8. HbA1c values calculated from blood glucose levels using truncated Fourier series and implementation in standard SQL database language.

    PubMed

    Temsch, W; Luger, A; Riedl, M

    2008-01-01

    This article presents a mathematical model to calculate HbA1c values based on self-measured blood glucose and past HbA1c levels, thereby enabling patients to monitor diabetes therapy between scheduled checkups. This method could help physicians to make treatment decisions if implemented in a system where glucose data are transferred to a remote server. The method, however, cannot replace HbA1c measurements; past HbA1c values are needed to calibrate the method. The mathematical model of HbA1c formation was developed based on biochemical principles. Unlike an existing HbA1c formula, the new model respects the decreasing contribution of older glucose levels to current HbA1c values. About 12 standard SQL statements embedded in a php program were used to perform the Fourier transform. Regression analysis was used to calibrate results against previous HbA1c values. The method can be readily implemented in any SQL database. The predicted HbA1c values thus obtained were in accordance with measured values. They also matched the results of the HbA1c formula in the elevated range. By contrast, the formula was too "optimistic" in the range of better glycemic control. Individual analysis of two subjects improved the accuracy of values and reflected the bias introduced by different glucometers and individual measurement habits.
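
    The article's central point, that older glucose readings contribute less to the current HbA1c, can be illustrated with a decaying-weight average. The sketch below substitutes a simple exponential decay for the paper's truncated Fourier series, and maps mean glucose to HbA1c with the widely used ADAG regression; the half-life constant and both substitutions are illustrative assumptions, not the article's model:

```python
# Recent glucose readings weigh more than old ones when estimating
# current HbA1c. Exponential decay stands in for the paper's Fourier
# model; the glucose->HbA1c mapping is the ADAG linear regression
# (mean glucose mg/dL = 28.7 * HbA1c% - 46.7), inverted below.
import math

def weighted_mean_glucose(readings, half_life_days=35.0):
    """readings: list of (days_ago, glucose_mg_dl); newer weigh more."""
    k = math.log(2) / half_life_days
    wsum = gsum = 0.0
    for days_ago, glucose in readings:
        w = math.exp(-k * days_ago)
        wsum += w
        gsum += w * glucose
    return gsum / wsum

def estimated_hba1c(readings):
    return (weighted_mean_glucose(readings) + 46.7) / 28.7

readings = [(0, 140), (10, 150), (30, 160), (60, 180), (90, 200)]
print(round(estimated_hba1c(readings), 1))
```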

  9. [Development and practice evaluation of blood acid-base imbalance analysis software].

    PubMed

    Chen, Bo; Huang, Haiying; Zhou, Qiang; Peng, Shan; Jia, Hongyu; Ji, Tianxing

    2014-11-01

    To develop blood gas and acid-base imbalance analysis software that systematically, rapidly, accurately and automatically determines the type of acid-base imbalance, and to evaluate its clinical application. Using the VBA programming language, computer-aided diagnostic software for the judgment of acid-base balance was developed. The clinical data of 220 patients admitted to the Second Affiliated Hospital of Guangzhou Medical University were retrospectively analyzed. Arterial blood gas data [pH value, HCO(3)(-), arterial partial pressure of carbon dioxide (PaCO₂)] and electrolytes (Na⁺ and Cl⁻) were collected. Data were entered into the software for acid-base imbalance judgment. In parallel, the type of acid-base imbalance was determined manually using the Henderson-Hasselbalch (H-H) compensation formulas. The consistency of the judgment results from the software and from manual calculation was evaluated, and the judgment times of the two methods were compared. The clinical diagnoses of the types of acid-base imbalance for the 220 patients were: 65 normal cases, 90 cases of simple type, 41 cases of mixed type, and 24 cases of triplex type. Compared with manual calculation, the accuracy of the software's judgment was 100% for the normal and triplex types, 98.9% for the simple type and 78.0% for the mixed type, with a total accuracy of 95.5%. The Kappa value between software and manual judgment was 0.935, P=0.000, demonstrating very good consistency. The time for the software to determine acid-base imbalances was significantly shorter than for manual judgment (18.14 ± 3.80 s vs. 43.79 ± 23.86 s, t=7.466, P=0.000).
Software-based judgment is rapid, accurate and convenient; it can replace manual judgment, improve the work efficiency and quality of clinicians, and has great potential for clinical application.
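
    The kind of rule such software automates can be illustrated with the standard compensation check for metabolic acidosis (Winter's formula: expected PaCO₂ = 1.5 × HCO₃⁻ + 8, ± 2 mmHg). This is a textbook clinical rule offered as a simplified stand-in, not the article's full decision tree:

```python
# Simplified acid-base classifier: metabolic acidosis plus Winter's
# formula for respiratory compensation. Units: HCO3 mmol/L, PaCO2 mmHg.
def classify(ph, hco3, paco2):
    if ph < 7.35 and hco3 < 22:
        expected = 1.5 * hco3 + 8          # Winter's formula
        if paco2 < expected - 2:
            return "metabolic acidosis + respiratory alkalosis"
        if paco2 > expected + 2:
            return "metabolic acidosis + respiratory acidosis"
        return "compensated metabolic acidosis"
    if 7.35 <= ph <= 7.45 and 22 <= hco3 <= 26 and 35 <= paco2 <= 45:
        return "normal"
    return "other / further rules needed"

print(classify(7.30, 15, 30))
```

    A full implementation would add the mirror-image rules for metabolic alkalosis and the acute/chronic respiratory disorders, plus the anion gap, to reach the mixed and triplex types the article evaluates.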

  10. Assessing the Internal Consistency of the Marine Carbon Dioxide System at High Latitudes: The Labrador Sea AR7W Line Study Case

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Azetsu-Scott, K.; Wallace, D.

    2016-02-01

    This work assesses the internal consistency of the marine carbon dioxide system through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2, collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency was assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) to calculate each parameter with the CO2SYS software. Residuals of each parameter were calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest differences arose from using different pairs of input parameters. As expected, the pH-pCO2 pair produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; therefore, only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value was higher than the calculated one, and therefore no presence of organic bases could be observed in the shallower water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.

  11. 49 CFR 536.5 - Trading infrastructure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the certified and reported CAFE data provided by the Environmental Protection Agency for enforcement of the CAFE program pursuant to 49 U.S.C. 32904(e). Credit values are calculated based on the CAFE...

  12. 49 CFR 536.5 - Trading infrastructure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the certified and reported CAFE data provided by the Environmental Protection Agency for enforcement of the CAFE program pursuant to 49 U.S.C. 32904(e). Credit values are calculated based on the CAFE...

  13. Polarization-sensitive optical coherence tomography measurements with different phase modulation amplitude when using continuous polarization modulation

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2012-01-01

    We demonstrate theoretically and experimentally that the phase retardance and relative optic-axis orientation of a sample can be calculated without prior knowledge of the actual value of the phase modulation amplitude when using a polarization-sensitive optical coherence tomography system based on continuous polarization modulation (CPM-PS-OCT). We also demonstrate that the sample Jones matrix can be calculated at any values of the phase modulation amplitude in a reasonable range depending on the system effective signal-to-noise ratio. This has fundamental importance for the development of clinical systems by simplifying the polarization modulator drive instrumentation and eliminating its calibration procedure. This was validated on measurements of a three-quarter waveplate and an equine tendon sample by a fiber-based swept-source CPM-PS-OCT system.

  14. A revised version of Graphic Normative Analysis Program (GNAP) with examples of petrologic problem solving

    USGS Publications Warehouse

    Stuckless, J.S.; VanTrump, G.

    1979-01-01

    A revised version of Graphic Normative Analysis Program (GNAP) has been developed to allow maximum flexibility in the evaluation of chemical data by the occasional computer user. GNAP calculates CIPW norms, Thornton and Tuttle's differentiation index, Barth's cations, Niggli values and values for variables defined by the user. Calculated values can be displayed graphically in X-Y plots or ternary diagrams. Plotting can be done on a line printer or Calcomp plotter with either weight percent or mole percent data. Modifications in the original program give the user some control over normative calculations for each sample. The number of user-defined variables that can be created from the data has been increased from ten to fifteen. Plotting and calculations can be based on the original data, data adjusted to sum to 100 percent, or data adjusted to sum to 100 percent without water. Analyses for which norms were previously not computable are now computed with footnotes that show excesses or deficiencies in oxides (or volatiles) not accounted for by the norm. This report contains a listing of the computer program, an explanation of the use of the program, and two sample problems.

  15. Stochastic-analytic approach to the calculation of multiply scattered lidar returns

    NASA Astrophysics Data System (ADS)

    Gillespie, D. T.

    1985-08-01

    The problem of calculating the nth-order backscattered power of a laser firing short pulses at time zero into a homogeneous cloud with specified scattering and absorption parameters is discussed. In the problem, backscattered power is measured at any time greater than zero by a small receiver colocated with the laser and fitted with a forward-looking conical baffle. Theoretical calculations are made on the premise that the laser pulse is composed of propagating photons which are scattered and absorbed by the cloud particles in a probabilistic manner. The effect of polarization was not taken into account in the calculations. An exact formula is derived for backscattered power, based on direct physical arguments together with a rigorous analysis of random variables. It is shown that, for values of n greater than or equal to 2, the obtained formula is a well-behaved (3n-4)-dimensional integral. The computational feasibility of the integral formula is demonstrated for a model cloud of isotropically scattering particles. An analytical formula is obtained for n = 2, and a Monte Carlo program was used to obtain numerical results for n = 3, . . ., 6.

  16. HUMAN BODY SHAPE INDEX BASED ON AN EXPERIMENTALLY DERIVED MODEL OF HUMAN GROWTH

    PubMed Central

    Lebiedowska, Maria K.; Alter, Katharine E.; Stanhope, Steven J.

    2009-01-01

    Objectives To test the assumption of geometrically similar growth by developing experimentally derived models of human body growth during the age interval of 5–18 years; to use the derived growth models to establish a new Human Body Shape Index (HBSI) based on natural age-related changes in HBS; and to compare various metrics of relative body weight (body mass index, ponderal index, HBSI) in a sample of 5–18 year old children. Study design Non-disabled Polish children (N=847) participated in this descriptive study. To model growth, the best fit between body height (H) and body mass (M) was calculated for each sex with the allometric equation M = m_i·H^χ. HBSI was calculated separately for girls and boys, using sex-specific values for χ, and a general HBSI was calculated from the combined data. The customary body mass and ponderal indices were calculated and compared to HBSI values. Results The models of growth were M=13.11H^2.84 (R²=.90) and M=13.64H^2.68 (R²=.91) for girls and boys, respectively. HBSI values contained less inherent variability and were less influenced by growth (age and height) than the customary indices. Conclusion Age-related growth during childhood is sex-specific and not geometrically similar. Therefore, indices of human body shape formulated from experimentally derived models of human growth are superior to customary geometric similarity-based indices for the characterization of human body shape in children during the formative growth years. PMID:18154897
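
    The allometric model M = m_i·H^χ above is conventionally fitted by least squares in log-log space, where it becomes linear: log M = log m_i + χ·log H. A minimal sketch with hypothetical, noise-free data (not the study's measurements):

```python
# Fit the allometric growth model M = m_i * H^chi by ordinary least
# squares on the log-log form. Height/mass pairs are hypothetical.
import math

def fit_allometric(heights_m, masses_kg):
    xs = [math.log(h) for h in heights_m]
    ys = [math.log(m) for m in masses_kg]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    chi = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
           / sum((x - mx) ** 2 for x in xs))
    m_i = math.exp(my - chi * mx)
    return m_i, chi

# Hypothetical sample generated from M = 13.1 * H^2.8 (no noise),
# so the fit should recover those coefficients.
heights = [1.10, 1.25, 1.40, 1.55, 1.70]
masses = [13.1 * h ** 2.8 for h in heights]
m_i, chi = fit_allometric(heights, masses)
print(round(m_i, 1), round(chi, 2))
```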

  17. Human body shape index based on an experimentally derived model of human growth.

    PubMed

    Lebiedowska, Maria K; Alter, Katharine E; Stanhope, Steven J

    2008-01-01

    To test the assumption of geometrically similar growth by developing experimentally derived models of human body growth during the age interval of 5 to 18 years; to use these derived growth models to establish a new human body shape index (HBSI) based on natural age-related changes in human body shape (HBS); and to compare various metrics of relative body weight (body mass index [BMI], ponderal index [PI], and HBSI) in a sample of 5- to 18-year-old children. Nondisabled Polish children (n = 847) participated in this descriptive study. To model growth, the best fit between body height (H) and body mass (M) was calculated for each sex using the allometric equation M = m(i) H(chi). HBSI was calculated separately for girls and boys, using sex-specific values for chi and a general HBSI from combined data. The customary BMI and PI were calculated and compared with HBSI values. The models of growth were M = 13.11H(2.84) (R2 = 0.90) for girls and M = 13.64H(2.68) (R2 = 0.91) for boys. HBSI values contained less inherent variability and were less influenced by growth (age and height) compared with BMI and PI. Age-related growth during childhood is sex-specific and not geometrically similar. Therefore, indices of HBS formulated from experimentally derived models of human growth are superior to customary geometric similarity-based indices for characterizing HBS in children during the formative growth years.

  18. Flexibility and Project Value: Interactions and Multiple Real Options

    NASA Astrophysics Data System (ADS)

    Čulík, Miroslav

    2010-06-01

    This paper is focused on project valuation with an embedded portfolio of real options, including their interactions. Valuation is based on the Net Present Value criterion, computed by simulation. The portfolio includes selected types of European-type real options: options to expand, contract, abandon, and temporarily shut down and restart a project. Because in reality most managerial flexibility takes the form of a portfolio of real options, the selected options are valued not only individually but also in combination. The paper is structured as follows: first, diffusion models for forecasting output prices and variable costs are derived. Second, project value is estimated on the assumption that no real options are present. Next, project value is calculated with the presence of the selected European-type options; these options and their impact on project value are valued first in isolation and then in different combinations. Moreover, the evolution of the intrinsic value of the given real options with respect to the time of exercising is analysed. Finally, results are presented graphically; selected statistics and risk measures (Value at Risk, Expected Shortfall) of the NPV distributions are calculated and commented on.
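
    The simulation-based valuation described above can be sketched with a geometric Brownian motion price process and a single European abandonment option. All parameters and the crude one-shot exercise rule below are illustrative assumptions, not the paper's calibration:

```python
# Monte Carlo NPV of a project without and with a European abandonment
# option, under annual GBM price steps. Parameters are hypothetical.
import math
import random

def npv_paths(n_paths=20000, years=5, p0=100.0, mu=0.03, sigma=0.25,
              cost=80.0, rate=0.05, salvage=60.0, seed=1):
    """Returns (mean NPV without option, mean NPV with option)."""
    rng = random.Random(seed)
    plain, with_option = [], []
    for _ in range(n_paths):
        price, npv = p0, 0.0
        for t in range(1, years + 1):
            # Geometric Brownian motion step (annual increments).
            price *= math.exp((mu - 0.5 * sigma ** 2) + sigma * rng.gauss(0, 1))
            npv += (price - cost) / (1 + rate) ** t
        plain.append(npv)
        # Crude one-shot abandonment: take discounted salvage if it
        # beats the path's realized cash flows (illustrative only).
        with_option.append(max(npv, salvage / (1 + rate) ** years))
    return sum(plain) / n_paths, sum(with_option) / n_paths

base_npv, option_npv = npv_paths()
print(round(option_npv - base_npv, 2))  # value added by flexibility (>= 0)
```

    Because the option payoff is a pathwise maximum, its mean can never fall below the plain NPV, which is the basic sense in which flexibility adds value.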

  19. Structure and dynamics of ND3BF3 in the solid and gas phases: a combined NMR, neutron diffraction, and Ab initio study.

    PubMed

    Penner, Glenn H; Ruscitti, Bruno; Reynolds, Julie; Swainson, Ian

    2002-12-30

    The decrease in D-->A bond lengths, previously reported for some Lewis acid/base complexes, in going from the gas to the solid phase is investigated by obtaining an accurate crystal structure of solid ND(3)BF(3) by powder neutron diffraction. The B-N internuclear distance is 1.554(3) A, 0.118 A shorter than the most recent gas-phase microwave value and 0.121 A shorter than the single-molecule geometry-optimized (1.672 A, CISD/6-311++G(d,p)) bond length. The crystal structure also shows N-D.F-B hydrogen bonds. The effects of this change in structure and of intermolecular hydrogen bonding on nuclear magnetic shielding (i.e., chemical shifts) and on the nuclear quadrupolar coupling constants (QCC) are investigated by ab initio molecular orbital and density functional theory calculations. These calculations show that the nitrogen ((15)N and (14)N) and boron ((11)B and (10)B) chemical shifts should be rather insensitive to changes in r(BN) and to the concomitant changes in molecular structure. Calculations on hydrogen-bonded clusters, based on the crystal structure, indicate that H-bonding should also have very little effect on the chemical shifts. On the other hand, the (11)B and (14)N QCCs show large changes because of both effects. An analysis of the (10)B[(19)F] line shape in solid ND(3)(10)BF(3) yields a (11)B QCC of +/-0.130 MHz. This is reasonably close to an earlier value of +/-0.080 MHz and to the value of +/-0.050 MHz calculated for a [NH(3)BF(3)](4) cluster. The gas-phase value is 1.20 MHz. Temperature-dependent deuterium T(1) measurements yield an activation energy for rotation of the ND(3) group in solid ND(3)BF(3) of 9.5 +/- 0.1 kJ/mol. Simulations of the temperature-dependent T(1) anisotropy gave an E(a) of 9.5 +/- 0.2 kJ/mol and a preexponential factor, A, of 3.0 +/- 0.1 x 10(12) s(-)(1). Our calculated value for a [NH(3)BF(3)](4) cluster is 16.4 kJ/mol. Both are much higher than the previous value of 3.9 kJ/mol from solid-state proton T(1) measurements.

  20. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  1. 40 CFR 600.002-93 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... traveled by an automobile or group of automobiles per volume of fuel consumed as computed in § 600.113 or § 600.207; or (ii) The equivalent petroleum-based fuel economy for an electrically powered automobile as... means the equivalent petroleum-based fuel economy value as determined by the calculation procedure...

  2. 49 CFR Appendix B to Part 236 - Risk Assessment Criteria

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., exposure scenarios, and consequences that are related as described in this part. For the full risk... subsystem or component in the risk assessment. (f) How are processor-based subsystems/components assessed? (1) An MTTHE value must be calculated for each processor-based subsystem or component, or both...

  3. Identification of the numerical model of FEM in reference to measurements in situ

    NASA Astrophysics Data System (ADS)

    Jukowski, Michał; Bec, Jarosław; Błazik-Borowa, Ewa

    2018-01-01

    The paper deals with the verification of various numerical models against pilot-phase measurements of a rail bridge subjected to dynamic loading. Three types of FEM models were elaborated for this purpose. Static, modal and dynamic analyses were performed. The study consisted of measuring the accelerations of the structural components of the bridge as trains passed. Based on this, FFT analysis was performed, the main natural frequencies of the bridge were determined, and the structural damping ratio and the dynamic amplification factor (DAF) were calculated and compared with the standard values. Calculations were made using Autodesk Simulation Multiphysics (Algor).
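
    Extracting a dominant natural frequency from an acceleration record via FFT, as done in the study, can be sketched on a synthetic signal; the 4 Hz mode, sampling rate and noise level below are hypothetical, not the bridge's measured values:

```python
# Find the dominant spectral peak of an acceleration record with a
# naive DFT (stdlib only; a real analysis would use an FFT library).
import cmath
import math
import random

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest non-DC peak of the spectrum."""
    n = len(signal)
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    k_peak = max(range(1, n // 2), key=spectrum.__getitem__)
    return k_peak * fs / n

# Synthetic record: a 4 Hz "structural mode" plus noise, 200 Hz for 2 s.
fs, n = 200, 400
rng = random.Random(0)
accel = [math.sin(2 * math.pi * 4.0 * t / fs) + 0.2 * rng.gauss(0, 1)
         for t in range(n)]
print(dominant_frequency(accel, fs))  # prints 4.0 for this signal
```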

  4. Combined photoelectron, collision-induced dissociation, and computational studies of parent and fragment anions of N-paranitrophenylsulfonylalanine and N-paranitrophenylalanine

    NASA Astrophysics Data System (ADS)

    Lambert, Jason; Chen, Jing; Buonaugurio, Angela; Bowen, Kit H.; Do-Thanh, Chi-Linh; Wang, Yilin; Best, Michael D.; Compton, R. N.; Sommerfeld, Thomas

    2013-12-01

    After synthesizing the compounds N-paranitrophenylsulfonylalanine (NPNPSA) and N-paranitrophenylalanine (NPNPA), the photoelectron spectrum of the valence anion of N-paranitrophenylsulfonylalanine (NPNPSA)-, was measured and the collision-induced dissociation (CID) pathways of deprotonated N-paranitrophenylsulfonylalanine (NPNPSA-H)- and deprotonated N-paranitrophenylalanine (NPNPA-H)- were determined. Pertinent calculations were conducted to analyze both sets of experimental data. From the valence anion photoelectron spectrum of (NPNPSA)-, the adiabatic electron affinity (AEA) of NPNPSA was determined to be 1.7 ± 0.1 eV, while the vertical detachment energy (VDE) of (NPNPSA)- was found to be 2.3 ± 0.1 eV. Calculations for four low lying conformers of (NPNPSA)- gave AEA values in the range of 1.6-2.1 eV and VDE values in the range of 2.0-2.4 eV. These calculations are in very good agreement with the experimental values. While the NPNPA anion (NPNPA)- was not observed experimentally it was studied computationally. The six low lying (NPNPA)- conformers were identified and calculated to have AEA values in the range of 0.7-1.2 eV and VDE values in the range of 0.9-1.6 eV. CID was used to study the fragmentation patterns of deprotonated NPNPA and deprotonated NPNPSA. Based on the CID data and calculations, the excess charge was located on the delocalized π-orbitals of the nitrobenzene moiety. This is made evident by the fact that the dominant fragments all contained the nitrobenzene moiety even though the parent anions used for the CID study were formed via deprotonation of the carboxylic acid. The dipole-bound anions of both molecules are studied theoretically using the results of previous studies on nitrobenzene as a reference.

  5. Combined photoelectron, collision-induced dissociation, and computational studies of parent and fragment anions of N-paranitrophenylsulfonylalanine and N-paranitrophenylalanine.

    PubMed

    Lambert, Jason; Chen, Jing; Buonaugurio, Angela; Bowen, Kit H; Do-Thanh, Chi-Linh; Wang, Yilin; Best, Michael D; Compton, R N; Sommerfeld, Thomas

    2013-12-14

    After synthesizing the compounds N-paranitrophenylsulfonylalanine (NPNPSA) and N-paranitrophenylalanine (NPNPA), the photoelectron spectrum of the valence anion of N-paranitrophenylsulfonylalanine (NPNPSA)(-), was measured and the collision-induced dissociation (CID) pathways of deprotonated N-paranitrophenylsulfonylalanine (NPNPSA-H)(-) and deprotonated N-paranitrophenylalanine (NPNPA-H)(-) were determined. Pertinent calculations were conducted to analyze both sets of experimental data. From the valence anion photoelectron spectrum of (NPNPSA)(-), the adiabatic electron affinity (AEA) of NPNPSA was determined to be 1.7 ± 0.1 eV, while the vertical detachment energy (VDE) of (NPNPSA)(-) was found to be 2.3 ± 0.1 eV. Calculations for four low lying conformers of (NPNPSA)(-) gave AEA values in the range of 1.6-2.1 eV and VDE values in the range of 2.0-2.4 eV. These calculations are in very good agreement with the experimental values. While the NPNPA anion (NPNPA)(-) was not observed experimentally it was studied computationally. The six low lying (NPNPA)(-) conformers were identified and calculated to have AEA values in the range of 0.7-1.2 eV and VDE values in the range of 0.9-1.6 eV. CID was used to study the fragmentation patterns of deprotonated NPNPA and deprotonated NPNPSA. Based on the CID data and calculations, the excess charge was located on the delocalized π-orbitals of the nitrobenzene moiety. This is made evident by the fact that the dominant fragments all contained the nitrobenzene moiety even though the parent anions used for the CID study were formed via deprotonation of the carboxylic acid. The dipole-bound anions of both molecules are studied theoretically using the results of previous studies on nitrobenzene as a reference.

  6. Aortic Curvature Instead of Angulation Allows Improved Estimation of the True Aorto-iliac Trajectory.

    PubMed

    Schuurmann, R C L; Kuster, L; Slump, C H; Vahl, A; van den Heuvel, D A F; Ouriel, K; de Vries, J-P P M

    2016-02-01

    Supra- and infrarenal aortic neck angulation have been associated with complications after endovascular aortic aneurysm repair. However, a uniform angulation measurement method is lacking, and the concept of angulation suggests a triangular oversimplification of the aortic anatomy. (Semi-)automated calculation of curvature along the center luminal line describes the actual trajectory of the aorta. This study proposes a methodology for calculating aortic (neck) curvature and suggests an additional method based on available tools in current workstations: curvature by digital calipers (CDC). Proprietary custom software was developed for automatic calculation of the severity and location of the largest supra- and infrarenal curvature over the center luminal line. Twenty-four patients with severe supra- or infrarenal angulations (≥45°) and 11 patients with small to moderate angulations (<45°) were included. Both CDC and angulation were measured by two independent observers on the pre- and postoperative computed tomographic angiography scans. The relationships of actual curvature with CDC and with angulation were visualized and tested with Pearson's correlation coefficient. The CDC was also calculated fully automatically with the proprietary custom software. The difference between manual and automatic determination of CDC was tested with a paired Student t test. Two-tailed p-values < .05 were considered significant. The correlation between actual curvature and manual CDC is strong (.586-.962) and even stronger for automatic CDC (.865-.961). The correlation between actual curvature and angulation is much lower (.410-.737). Flow direction angulation values overestimate CDC measurements by 60%, with larger variance. No significant difference was found between automatically calculated and manually measured CDC values. Curvature calculation of the aortic neck improves determination of the true aortic trajectory.
Automatic calculation of the actual curvature is preferable, but measurement or calculation of the curvature by digital calipers is a valid alternative if actual curvature is not at hand. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
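
    Curvature along a discretely sampled centerline, in the spirit of the automated calculation described above, can be computed from each triple of consecutive points as the Menger curvature (4 × triangle area divided by the product of the side lengths, i.e. the reciprocal of the circumscribed circle's radius). The circular test arc below is synthetic; the study's actual algorithm is not detailed in the abstract:

```python
# Discrete curvature of a centerline via Menger curvature of each
# consecutive point triple. 2D points; extends to 3D with a full
# cross product. A circular arc of radius 5 should give 1/5 = 0.2.
import math

def menger_curvature(p, q, r):
    a, b, c = math.dist(p, q), math.dist(q, r), math.dist(r, p)
    # 2D cross product magnitude = twice the triangle area.
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return 2 * area2 / (a * b * c)   # = 4*Area / (a*b*c)

def max_curvature(points):
    """Largest curvature along the sampled centerline."""
    return max(menger_curvature(points[i - 1], points[i], points[i + 1])
               for i in range(1, len(points) - 1))

arc = [(5 * math.cos(t / 20), 5 * math.sin(t / 20)) for t in range(30)]
print(round(max_curvature(arc), 3))  # prints 0.2
```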

  7. The DEP-6D, a new preference-based measure to assess health states of dependency.

    PubMed

    Rodríguez-Míguez, E; Abellán-Perpiñán, J M; Alvarez, X C; González, X M; Sampayo, A R

    2016-03-01

    In medical literature there are numerous multidimensional scales to measure health states for dependence in activities of daily living. However, these scales are not preference-based and are not able to yield QALYs. On the contrary, the generic preference-based measures are not sensitive enough to measure changes in dependence states. The objective of this paper is to propose a new dependency health state classification system, called DEP-6D, and to estimate its value set in such a way that it can be used in QALY calculations. DEP-6D states are described as a combination of 6 attributes (eat, incontinence, personal care, mobility, housework and cognition problems), with 3-4 levels each. A sample of 312 Spanish citizens was surveyed in 2011 to estimate the DEP-6D preference-scoring algorithm. Each respondent valued six out of the 24 states using time trade-off questions. After excluding those respondents who made two or more inconsistencies (6% out of the sample), each state was valued between 66 and 77 times. The responses present a high internal and external consistency. A random effect model accounting for main effects was the preferred model to estimate the scoring algorithm. The DEP-6D describes, in general, more severe problems than those usually described by means of generic preference-based measures. The minimum score predicted by the DEP-6D algorithm is -0.84, which is considerably lower than the minimum value predicted by the EQ-5D and SF-6D algorithms. The DEP-6D value set is based on community preferences. Therefore it is consistent with the so-called 'societal perspective'. Moreover, DEP-6D preference weights can be used in QALY calculations and cost-utility analysis. Copyright © 2016. Published by Elsevier Ltd.

  8. Diagnosis of skin cancer using image processing

    NASA Astrophysics Data System (ADS)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

    In this paper a methodology for classifying skin cancer in images of dermatologic spots, based on spectral analysis using the K-law Fourier non-linear technique, is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and is therefore always chosen. This information is multiplied point by point by a binary mask, and a Fourier transform written in non-linear form is applied to the result. Where the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value lies within the spectral index range, three types of cancer can be detected. Values found outside this range correspond to benign lesions.
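
    A rough sketch of the spectral-index computation described above (NumPy assumed; the K-law non-linearity is omitted for simplicity, and the toy image and mask are hypothetical):

```python
import numpy as np

def spectral_index(channel, mask):
    """Ratio of positive-real-part spectrum points to the binary-mask area.

    channel -- 2-D array (e.g. the green channel of the lesion image)
    mask    -- binary array: 1 inside the segmented spot, 0 outside
    """
    masked = channel * mask                        # keep only the region of interest
    spectrum = np.fft.fft2(masked)                 # Fourier transform of the masked image
    density = (spectrum.real > 0).astype(float)    # unit values where Re{F} > 0, else zero
    return density.sum() / mask.sum()              # normalise by the mask area

# toy example: random 64x64 "image" with a square segmentation mask
rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1.0
idx = spectral_index(img, mask)
```

In a real classifier the resulting index would be compared against calibrated ranges for each lesion type.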

  9. Uncertainties Associated with Theoretically Calculated N2-Broadened Half-Widths of H2O Lines

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.; Gamache, R. R.

    2010-01-01

    With different choices of the cut-offs used in theoretical calculations, we have carried out extensive numerical calculations of the N2-broadened Lorentzian half-widths of the H2O lines using the modified Robert-Bonamy formalism. Based on these results, we are able to thoroughly check for convergence. We find that, with the low-order cut-offs commonly used in the literature, one is able to obtain converged values only for lines with large half-widths. Conversely, for lines with small half-widths, much higher cut-offs are necessary to guarantee convergence. We also analyse the uncertainties associated with calculated half-widths, which are correlated in the same way. In general, the smaller the half-widths, the poorer the convergence and the larger the uncertainty associated with them. For convenience, one can divide all H2O lines into three categories, large, intermediate, and small, according to their half-width values. One can use this division to judge whether the calculated half-widths are converged or not, based on the cut-offs used, and also to estimate how large their uncertainties are. We conclude that with the current Robert-Bonamy formalism, for lines in category 1 one can achieve the accuracy requirement set by HITRAN, whereas for lines in category 3 it is impossible to meet this goal.

  10. Research on the optical spectra, g factors and defect structures for two tetragonal Y²+ centers in the irradiated CaF₂: Y crystal.

    PubMed

    Zheng, Wen-Chen; Mei, Yang; Yang, Yu-Guang; Liu, Hong-Gang

    2012-11-01

    Based on the defect models that the tetragonal Y(2+) (1) center in the irradiated CaF(2): Y crystal is due to Y(2+) at Ca(2+) site associated with a nearest interstitial F(-) ion along C(4) axis and the tetragonal Y(2+) (2) center is Y(2+) at Ca(2+) site where the tetragonal distortion is caused by the static Jahn-Teller effect, the two optical spectral bands and anisotropic g factors for both tetragonal Y(2+) centers are calculated. The calculations are made by using two methods based on the cluster approach, one is the complete diagonalization (of energy matrix) method (CDM) and another is the perturbation theory method (PTM). The calculated results for each Y(2+) center from CDM and PTM coincide and show reasonable agreement with the experimental values. The calculated isotropic g factor for Y(2+) (2) center at higher temperature owing to the dynamical Jahn-Teller effect is also consistent with the observed value. The defect structures (i.e., tetragonal distortion) of the two Y(2+) centers are obtained from the calculation. It appears that both theoretical methods can be applied to explain the optical and EPR data, to study the defect model and to determine the defect structures for d(1) ions in crystals. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Calculation of Lung Cancer Volume of Target Based on Thorax Computed Tomography Images using Active Contour Segmentation Method for Treatment Planning System

    NASA Astrophysics Data System (ADS)

    Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur

    2017-06-01

    In this research, the lung cancer target volume was calculated based on computed tomography (CT) thorax images. The volume calculation was done for the treatment planning system in radiotherapy. The calculation of the target volume consists of the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was calculated by summing the target area on each slice and then multiplying the result by the slice thickness. Areas were calculated with digital image processing techniques using the active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes are 577.2 cm3 for the GTV, 769.9 cm3 for the CTV, 877.8 cm3 for the PTV, 618.7 cm3 for OAR 1, 1,162 cm3 for OAR 2 right, and 1,597 cm3 for OAR 2 left. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume based on CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
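
    The area-times-thickness volume rule described above can be sketched as follows (NumPy assumed; the masks, pixel area and slice thickness are hypothetical):

```python
import numpy as np

def volume_from_slices(slice_masks, slice_thickness_cm, pixel_area_cm2):
    """Target volume = (sum of segmented areas over all slices) * slice thickness.

    slice_masks        -- list of binary arrays, one per CT slice (1 = target)
    slice_thickness_cm -- CT slice thickness in cm
    pixel_area_cm2     -- in-plane area of one pixel in cm^2
    """
    total_area = sum(m.sum() * pixel_area_cm2 for m in slice_masks)
    return total_area * slice_thickness_cm

# toy example: 10 slices, each with a 20x20-pixel target,
# 0.01 cm^2 pixels and 0.5 cm slices -> 10 * 400 * 0.01 * 0.5 = 20 cm^3
masks = [np.ones((20, 20)) for _ in range(10)]
vol = volume_from_slices(masks, 0.5, 0.01)
```

In practice the binary masks would come from the active contour segmentation rather than being filled by hand.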

  12. Gsolve, a Python computer program with a graphical user interface to transform relative gravity survey measurements to absolute gravity values and gravity anomalies

    NASA Astrophysics Data System (ADS)

    McCubbine, Jack; Tontini, Fabio Caratori; Stagpoole, Vaughan; Smith, Euan; O'Brien, Grant

    2018-01-01

    A Python program (Gsolve) with a graphical user interface has been developed to assist with routine data processing of relative gravity measurements. Gsolve calculates the gravity at each measurement site of a relative gravity survey, which is referenced to at least one known gravity value. The tidal effects of the sun and moon, gravimeter drift and tares in the data are all accounted for during the processing of the survey measurements. The calculation is based on a least squares formulation where the difference between the absolute gravity at each surveyed location and parameters relating to the dynamics of the gravimeter are minimized with respect to the relative gravity observations, and some supplied gravity reference site values. The program additionally allows the user to compute free air gravity anomalies, with respect to the GRS80 and GRS67 reference ellipsoids, from the determined gravity values and calculate terrain corrections at each of the surveyed sites using a prism formula and a user supplied digital elevation model. This paper reviews the mathematical framework used to reduce relative gravimeter survey observations to gravity values. It then goes on to detail how the processing steps can be implemented using the software.
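
    A minimal sketch of the least-squares formulation described above, assuming NumPy. The station names, readings, and the simple model (reading = station gravity + linear drift + meter offset, with one known reference site) are illustrative, not Gsolve's actual code:

```python
import numpy as np

# Each relative-gravimeter reading is modelled (illustratively) as
#   reading = g(station) + drift * t + offset,
# with one station ("C") whose absolute gravity is known, anchoring the network.
g_known_C = 9812300.0                       # known reference gravity (made-up units)
g_true = {"A": 9812345.0, "B": 9812410.0, "C": g_known_C}
drift_true, offset_true = 5.0, 120.0
obs = [("C", 0.0), ("A", 1.0), ("B", 2.0), ("A", 3.0), ("C", 4.0), ("B", 5.0)]
readings = np.array([g_true[s] + drift_true * t + offset_true for s, t in obs])

# Unknowns x = [g_A, g_B, drift, offset]; the known g_C moves to the right-hand side.
A = np.zeros((len(obs), 4))
b = readings.copy()
for i, (s, t) in enumerate(obs):
    A[i, 2] = t                             # drift column
    A[i, 3] = 1.0                           # meter-offset column
    if s == "A":
        A[i, 0] = 1.0
    elif s == "B":
        A[i, 1] = 1.0
    else:
        b[i] -= g_known_C                   # reference-site row

x, *_ = np.linalg.lstsq(A, b, rcond=None)
g_A, g_B, drift_est, offset_est = x
```

Gsolve additionally models tides, tares and multiple reference sites, but the normal-equations core is of this form.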

  13. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    PubMed

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
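
    The Mahalanobis-distance step described above could be sketched as follows (NumPy assumed; the toy trait matrix is invented). Hierarchical clustering, e.g. unweighted pair-group average linkage, would then be run on this distance matrix:

```python
import numpy as np

def mahalanobis_matrix(X):
    """Pairwise Mahalanobis distances between the rows of X (varieties x traits)."""
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X[:, None, :] - X[None, :, :]              # (n, n, traits) pairwise differences
    q = np.einsum("ijk,kl,ijl->ij", diff, cov_inv, diff)
    return np.sqrt(np.maximum(q, 0.0))                # clip tiny negative round-off

# toy data: 6 "varieties" x 3 predicted genotypic "traits"
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
D = mahalanobis_matrix(X)
```

Using the covariance-weighted distance (rather than Euclidean) accounts for correlations among the quantitative traits.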

  14. 25 CFR 39.206 - How does OIEP calculate the value of one WSU?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false How does OIEP calculate the value of one WSU? 39.206 Section 39.206 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION THE INDIAN SCHOOL... calculate the value of one WSU? (a) To calculate the appropriated dollar value of one WSU, OIEP divides the...

  15. 25 CFR 39.206 - How does OIEP calculate the value of one WSU?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false How does OIEP calculate the value of one WSU? 39.206 Section 39.206 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION THE INDIAN SCHOOL... calculate the value of one WSU? (a) To calculate the appropriated dollar value of one WSU, OIEP divides the...

  16. Quantitative real-time monitoring of dryer effluent using fiber optic near-infrared spectroscopy.

    PubMed

    Harris, S C; Walker, D S

    2000-09-01

    This paper describes a method for real-time quantitation of the solvents evaporating from a dryer. The vapor stream in the vacuum line of a dryer was monitored in real time using a fiber optic-coupled acousto-optic tunable filter near-infrared (AOTF-NIR) spectrometer. A balance was placed in the dryer, and mass readings were recorded for every scan of the AOTF-NIR. A partial least-squares (PLS) calibration was subsequently built based on change in mass over change in time for solvents typically used in a chemical manufacturing plant. Controlling software for the AOTF-NIR was developed. The software collects spectra, builds the PLS calibration model, and continuously fits subsequently collected spectra to the calibration, allowing the operator to follow the mass loss of solvent from the dryer. The results indicate that solvent loss can be monitored and quantitated in real time using NIR for the optimization of drying times. These time-based mass loss values have also been used to calculate "dynamic" vapor density values for the solvents. The values calculated are in agreement with values determined from the ideal gas law and could prove valuable as tools to measure temperature or pressure indirectly.
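
    The two density estimates compared at the end can be sketched numerically (all values hypothetical): the ideal-gas density ρ = pM/(RT), and a "dynamic" density obtained from the measured mass-loss rate divided by the volumetric flow through the vacuum line:

```python
R = 8.314  # gas constant, J/(mol K)

def ideal_gas_density(p_pa, molar_mass_kg_per_mol, t_k):
    """Ideal-gas vapor density rho = p * M / (R * T), in kg/m^3."""
    return p_pa * molar_mass_kg_per_mol / (R * t_k)

# ethanol vapor (M = 0.04607 kg/mol) at 50 mbar and 313 K (hypothetical dryer conditions)
rho_ideal = ideal_gas_density(5000.0, 0.04607, 313.0)

# "dynamic" density from the in-dryer balance and a (hypothetical) volumetric
# flow rate through the vacuum line
mass_loss_rate = 1.2e-4     # kg/s, slope of the recorded mass-vs-time curve
volumetric_flow = 1.35e-3   # m^3/s
rho_dynamic = mass_loss_rate / volumetric_flow
```

Agreement between the two estimates, as reported above, is what allows the dynamic value to serve as an indirect probe of temperature or pressure.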

  17. From Urey To The Ocean's Glacial Ph: News From The Boron-11 Paleo-acidimetry.

    NASA Astrophysics Data System (ADS)

    Zeebe, R. E.; Wolf-Gladrow, D. A.; Bijma, J.

    Boron paleo-acidimetry is based on the stable boron isotope composition of foraminiferal shells, which has been shown to be a function of seawater pH. It is currently one of the most promising paleo-carbonate chemistry proxies. One important parameter of the proxy is the equilibrium fractionation between the dissolved boron species B(OH)3 and B(OH)4-, which was calculated to be 19 per mil at 25 °C by Kakihana and Kotaka (1977), based on Urey's theory. The calculated equilibrium fractionation, however, depends on the vibrational frequencies of the molecules, for which different values have been reported in the literature. We have recalculated the equilibrium fractionation and find that it may be distinctly different from 19 per mil (this is the bad news). The good news is that - theoretically - the use of 11B as a paleo-pH indicator is not compromised through vital effects in planktonic foraminifera. We derive this conclusion by the use of a diffusion-reaction model that calculates pH profiles and 11B values in the vicinity of a foraminifer.

  18. Exchange coupling and magnetic anisotropy of exchanged-biased quantum tunnelling single-molecule magnet Ni3Mn2 complexes using theoretical methods based on Density Functional Theory.

    PubMed

    Gómez-Coca, Silvia; Ruiz, Eliseo

    2012-03-07

    The magnetic properties of a new family of single-molecule magnet Ni(3)Mn(2) complexes were studied using theoretical methods based on Density Functional Theory (DFT). The first part of this study is devoted to analysing the exchange coupling constants, focusing on the intramolecular as well as the intermolecular interactions. The calculated intramolecular J values were in excellent agreement with the experimental data, which show that all the couplings are ferromagnetic, leading to an S = 7 ground state. The intermolecular interactions were investigated because the two complexes studied do not show tunnelling at zero magnetic field. Usually, this exchange-biased quantum tunnelling is attributed to the presence of intermolecular interactions calculated with the help of theoretical methods. The results indicate the presence of weak intermolecular antiferromagnetic couplings that cannot explain the ferromagnetic value found experimentally for one of the systems. In the second part, the goal is to analyse magnetic anisotropy through the calculation of the zero-field splitting parameters (D and E), using DFT methods including the spin-orbit effect.

  19. Calculation of Compressible Flows past Aerodynamic Shapes by Use of the Streamline Curvature

    NASA Technical Reports Server (NTRS)

    Perl, W

    1947-01-01

    A simple approximate method is given for the calculation of isentropic irrotational flows past symmetrical airfoils, including mixed subsonic-supersonic flows. The method is based on the choice of suitable values for the streamline curvature in the flow field and the subsequent integration of the equations of motion. The method yields limiting solutions for potential flow. The effect of circulation is considered. A comparison of derived velocity distributions with existing results that are based on calculation to the third order in the thickness ratio indicated satisfactory agreement. The results are also presented in the form of a set of compressibility correction rules that lie between the Prandtl-Glauert rule and the von Karman-Tsien rule (approximately). The different rules correspond to different values of the local shape parameter √(Y·Ca), in which Y is the ordinate and Ca is the curvature at a point on an airfoil. Bodies of revolution, completely supersonic flows, and the significance of the limiting solutions for potential flow are also briefly discussed.

  20. Study on bridge checking evaluation based on deformation-Stress data

    NASA Astrophysics Data System (ADS)

    Shi, Jing Xian; Cheng, Ying Jie

    2018-06-01

    Bridge structures play a very important role in human traffic. The evaluation of a bridge structure after a certain period of operation has always been a focus of bridge engineering. Based on data collected from the health inspection system of a continuous rigid frame bridge on a highway in Yunnan, China, it is found that there is an approximately linear relationship between the deformation and the stress of the bridge structure. For a specific section of the structure, the stress value of that section can be derived from its deformation value. The coefficient K can be calculated by comparing the estimated value with the actual measured value. According to the range of the K value, the structural state of the bridge can be evaluated to a certain extent.
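
    The K-coefficient idea can be sketched as follows (NumPy assumed; the deformation-stress pairs are invented): fit the linear deformation-stress relation from monitoring history, estimate the stress for a new deformation reading, and compare with the measurement:

```python
import numpy as np

# hypothetical monitoring history for one section
deform = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # deformation, mm
stress = np.array([2.1, 4.0, 6.2, 7.9, 10.1])    # stress, MPa
a, b = np.polyfit(deform, stress, 1)             # stress ~ a * deform + b

# new reading: derive the stress from the deformation and compare
new_deform, measured_stress = 3.5, 7.3
estimated_stress = a * new_deform + b
K = estimated_stress / measured_stress           # K near 1 -> consistent state
```

A K value drifting away from unity over time would flag a change in the structural state worth inspecting.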

  1. Mass attenuation coefficient of binderless, pre-treated and tannin-based Rhizophora spp. particleboards using 16.59 - 25.26 keV photon energy range

    NASA Astrophysics Data System (ADS)

    Mohd Yusof, Mohd Fahmi; Hamid, Puteri Nor Khatijah Abdul; Bauk, Sabar; Hashim, Rokiah; Tajuddin, Abdul Aziz

    2015-04-01

    The Rhizophora spp. particleboards were fabricated using ≤ 104 µm particle size with three different fabrication methods: binderless, steam pre-treated and tannin-added. The mass attenuation coefficients of the Rhizophora spp. particleboards were measured using x-ray fluorescence (XRF) photons from niobium, molybdenum, palladium, silver and tin metal plates, which provided photon energies between 16.59 and 25.26 keV. The results were compared to theoretical values for water calculated using the photon cross-section database (XCOM). The results showed that all Rhizophora spp. particleboards have mass attenuation coefficients close to the calculated XCOM values for water. The tannin-added Rhizophora spp. particleboard was nearest to the calculated XCOM values for water, with a χ2 value of 13.008, followed by the binderless (25.859) and pre-treated (91.941) boards.
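
    The χ² comparison against the XCOM reference can be sketched as below; the record does not give the exact χ² formula used, so the common Pearson goodness-of-fit form is assumed, and the coefficient values are invented:

```python
def chi_square(measured, reference):
    """Pearson goodness-of-fit: sum((m - r)^2 / r) over the energy points."""
    return sum((m - r) ** 2 / r for m, r in zip(measured, reference))

# hypothetical mass attenuation coefficients (cm^2/g) at five photon energies
xcom_water = [1.10, 0.80, 0.60, 0.45, 0.35]
board_a = [1.12, 0.82, 0.59, 0.46, 0.34]   # close to water equivalence
board_b = [1.40, 1.00, 0.75, 0.60, 0.50]   # farther from water

chi_a = chi_square(board_a, xcom_water)
chi_b = chi_square(board_b, xcom_water)    # larger value -> worse match to water
```

A smaller χ² thus ranks a board as more water-equivalent, which is how the three fabrication methods were compared above.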

  2. 30 CFR 206.105 - What records must I keep to support my calculations of value under this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... you to use a different value if it determines that the reported value is inconsistent with the... calculations of value under this subpart? 206.105 Section 206.105 Mineral Resources MINERALS MANAGEMENT SERVICE... must I keep to support my calculations of value under this subpart? If you determine the value of your...

  3. Electronic, magnetic properties and phase diagrams of system with Fe4N compound: An ab initio calculations and Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Masrour, R.; Jabar, A.; Hlil, E. K.

    2018-05-01

    Self-consistent ab initio calculations, based on the Density Functional Theory (DFT) approach and using the Full-potential Linearized Augmented Plane Wave (FLAPW) method, are performed to investigate the electronic and magnetic properties of the Fe4N compound. Spin polarization and spin-orbit coupling are included in the calculations within the framework of the ferromagnetic state between Fe(I) and Fe(II) in the Fe4N compound. The data obtained from the ab initio calculations are used as input to a Monte Carlo simulation to calculate the magnetic properties of this compound: the ground-state phase diagrams, the total and partial magnetizations of Fe(I) and Fe(II), and the transition temperatures. The variation of the magnetization with the crystal field is also studied. The magnetic hysteresis cycles of the same Fe4N compound are determined for different values of temperature and crystal field. A two-step hysteresis loop is evidenced, which is typical of the Fe4N structure. Ferromagnetic and superparamagnetic phases are observed as well.

  4. Technical Report for Calculations of Atmospheric Dispersion at Onsite Locations for Department of Energy Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, Alan; Chaves, Chris

    2015-04-04

    The Department of Energy (DOE) has performed an evaluation of the technical bases for the default value for the atmospheric dispersion parameter χ/Q. This parameter appears in the calculation of radiological dose at the onsite receptor location (co-located worker at 100 meters) in safety analysis of DOE nuclear facilities. The results of the calculation are then used to determine whether safety significant engineered controls should be established to prevent and/or mitigate the event causing the release of hazardous material. An evaluation of methods for calculation of the dispersion of potential chemical releases for the purpose of estimating the chemical exposure at the co-located worker location was also performed. DOE's evaluation consisted of: (a) a review of the regulatory basis for the default χ/Q dispersion parameter; (b) an analysis of this parameter's sensitivity to various factors that affect the dispersion of radioactive material; and (c) performance of additional independent calculations to assess the appropriate use of the default χ/Q value.

  5. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment by a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
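
    One simple way to fuse the individual scale values is sketched below, under the assumption of a median-based outlier rejection followed by averaging (the contribution's actual fusion rule may differ; NumPy assumed, and the scale values are invented):

```python
import numpy as np

def fuse_scales(scales, tol=0.1):
    """Fuse individual metric-scale estimates: discard values far from the
    median, then average the inliers (a simple robustified fusion)."""
    s = np.asarray(scales, dtype=float)
    med = np.median(s)
    inliers = s[np.abs(s - med) <= tol * med]   # reject gross outliers
    return inliers.mean()

# hypothetical scale estimates from lane width, room height and two sign sizes;
# the 0.085 estimate is an outlier and gets dropped
fused = fuse_scales([0.052, 0.049, 0.050, 0.085])
```

Combining several independent cues this way is what makes the fused scale more robust than any single estimate.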

  6. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Fei; Zhen, Zhao; Liu, Chun

    Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. It is the foundation for those linear extrapolation-based ultra-short-term solar PV power forecasting approaches to obtain the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) by using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated from a Fourier phase correlation theory (FPCT) method through the frequency domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) with common changes in the speed of cloud. Sometimes, there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak value pulse. However, the deformation estimation of clouds between two images and its influence on FPCT-based CMDV calculations are extremely complex and difficult, because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute time scale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive image pairs obtained through different synchronous rotation angles compared to the original images by using the FPCT method. Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on its density and distance distribution. Third, the influence of different rotation angle resolutions on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick and thin cloud conditions, indicated that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.
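
    The FPCT core, recovering a displacement from the peak of the phase-correlation surface, can be sketched as below (NumPy assumed; a circularly shifted toy image stands in for two consecutive sky images):

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the displacement of img1 relative to img2 from the peak of
    the phase-correlation surface (the core of an FPCT-style CMDV)."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.abs(np.fft.ifft2(cross))       # pulse at the displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices into signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# toy "cloud" image and a copy circularly shifted by (3, 5) pixels
rng = np.random.default_rng(2)
base = rng.random((64, 64))
moved = np.roll(base, shift=(3, 5), axis=(0, 1))
dy_dx = phase_correlation_shift(moved, base)
```

The IPSI extension described above would repeat this for several rotated image pairs and fuse the resulting vectors by centroid iteration.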

  7. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    DOE PAGES

    Wang, Fei; Zhen, Zhao; Liu, Chun; ...

    2017-12-18

    Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. It is the foundation for those linear extrapolation-based ultra-short-term solar PV power forecasting approaches to obtain the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) by using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated from a Fourier phase correlation theory (FPCT) method through the frequency domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) with common changes in the speed of cloud. Sometimes, there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak value pulse. However, the deformation estimation of clouds between two images and its influence on FPCT-based CMDV calculations are extremely complex and difficult, because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute time scale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive image pairs obtained through different synchronous rotation angles compared to the original images by using the FPCT method. Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on its density and distance distribution. Third, the influence of different rotation angle resolutions on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick and thin cloud conditions, indicated that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.

  8. Dynamic baseline detection method for power data network service

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2017-08-01

    This paper proposes a dynamic baseline traffic detection method based on historical traffic data for the power data network. The method uses Cisco's NetFlow acquisition tool to collect the original historical traffic data from network elements at fixed intervals, using three dimensions of information: the communication port, time, and traffic (number of bytes or number of packets). By filtering, removing deviating values, calculating the dynamic baseline value, and comparing the actual value with the baseline value, the method can detect whether the current network traffic is abnormal.
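
    A minimal sketch of the filter / remove-deviations / baseline / compare pipeline (NumPy assumed; the traffic series and the 2σ band are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def dynamic_baseline(history, k=2.0):
    """Drop deviating samples, then return a baseline band (mean, lower, upper)."""
    x = np.asarray(history, dtype=float)
    m, s = x.mean(), x.std()
    cleaned = x[np.abs(x - m) <= k * s]     # remove the deviation values
    m2, s2 = cleaned.mean(), cleaned.std()
    return m2, m2 - k * s2, m2 + k * s2

def is_abnormal(value, baseline):
    """Compare the actual traffic value with the baseline band."""
    _, lo, hi = baseline
    return value < lo or value > hi

# hypothetical per-interval byte counts from a NetFlow export, with one spike
history = [100, 103, 98, 101, 99, 500, 102, 97]
bl = dynamic_baseline(history)
```

New traffic samples are then checked against the band; a fresh baseline would be recomputed as the history window slides forward.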

  9. Current risk estimates based on the A-bomb survivors data - a discussion in terms of the ICRP recommendations on the neutron weighting factor.

    PubMed

    Rühm, W; Walsh, L

    2007-01-01

    Currently, most analyses of the A-bomb survivors' solid tumour and leukaemia data are based on a constant neutron relative biological effectiveness (RBE) value of 10 that is applied to all survivors, independent of their distance to the hypocentre at the time of bombing. The results of these analyses are then used as a major basis for current risk estimates suggested by the International Commission on Radiological Protection (ICRP) for use in international safety guidelines. It is shown here that (i) a constant value of 10 is not consistent with weighting factors recommended by the ICRP for neutrons and (ii) it does not account for the hardening of the neutron spectra in Hiroshima and Nagasaki, which takes place with increasing distance from the hypocentres. The purpose of this paper is to present new RBE values for the neutrons, calculated as a function of distance from the hypocentres for both cities that are consistent with the ICRP60 neutron weighting factor. If based on neutron spectra from the DS86 dosimetry system, these calculations suggest values of about 31 at 1000 m and 23 at 2000 m ground range in Hiroshima, while the corresponding values for Nagasaki are 24 and 22. If the neutron weighting factor that is consistent with ICRP92 is used, the corresponding values are about 23 and 21 for Hiroshima and 21 and 20 for Nagasaki, respectively. It is concluded that the current risk estimates will be subject to some changes in view of the changed RBE values. This conclusion does not change significantly if the new doses from the Dosimetry System DS02 are used.

  10. Study of permeability characteristics of membranes

    NASA Technical Reports Server (NTRS)

    Spiegler, K. S.; Messalem, R. M.; Moore, R. J.; Leibovitz, J.

    1971-01-01

    Pressure-permeation experiments were performed with the concentration-clamp cell. Streaming potentials and hydraulic permeabilities were measured for an AMF C-103 cation-exchange membrane bounded by 0.1 N NaCl solutions. The streaming potential, calculated from the slope of the recorded potential differences versus the applied pressure, yields a value of 1.895 millivolt/dekabar. When comparison with other membranes of similar characteristics could be made, good agreement was found. The values of the hydraulic permeability varied somewhat with the applied pressure difference and are between 1.3 × 10-8 and 3.9 × 10-8 cm2/(dekabar·s). The specific hydraulic permeabilities were also calculated and compared with data from the literature. Fair agreement was found. The diffusion coefficient of the chloride ion in the AMF C-103 membrane was calculated using Fick's first law of diffusion, based on ion concentrations calculated from the Donnan equilibrium concentration of Cl(-).

  11. Theoretical calculation of CH3F/N2-broadening coefficients and their temperature dependence

    NASA Astrophysics Data System (ADS)

    Jellali, C.; Maaroufi, N.; Aroui, H.

    2018-07-01

    Using the Robert and Bonamy formalism (with parabolic and exact trajectories) based on semi-classical impact theory, N2-broadening coefficients of methyl fluoride (CH3F) were calculated for transitions belonging to the PP-, PQ-, PR-, RP-, RQ- and RR-sub-branches of the ν6 perpendicular band near 8.5 μm. The calculations showed the predominance of the dipole-quadrupole interaction. The dependence of the computed coefficients on the J and K rotational quantum numbers, consistent with previous measurements, was clearly observed in this study. For a fixed value of J, the broadening coefficients decrease with increasing K, an effect that is more significant at lower J values. In order to deduce the temperature exponents, the N2-broadening coefficients of CH3F were calculated at various temperatures of atmospheric interest between 183 and 296 K with J ≤ 60 and K ≤ 10. These exponents were, in general, J-dependent and K-independent, except for K close to J.

  12. Analysis and recent advances in gamma heating measurements in MINERVE facility by using TLD and OSLD techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amharrak, H.; Di Salvo, J.; Lyoussi, A.

    2011-07-01

    The objective of this study is to develop nuclear heating measurement methods in zero-power experimental reactors. This paper presents the analysis of Thermo-Luminescent Detector (TLD) and Optically Stimulated Luminescent Detector (OSLD) experiments in the UO₂ core of the MINERVE research reactor at CEA Cadarache. The experimental sources of uncertainty on the gamma dose have been reduced by improving the conditions, as well as the repeatability, of the calibration step for each individual TLD. The interpretation of these measurements needs to take into account the calculation of cavity correction factors, related to the calibration and irradiation configurations, as well as neutron correction calculations. These calculations are based on coupled neutron-gamma and gamma-electron Monte Carlo particle-transport simulations. TLDs and OSLDs are positioned inside aluminum pillboxes. The comparison between calculated and measured integral gamma-ray absorbed doses using TLDs shows that the calculation slightly overestimates the measurement, with a C/E value equal to 1.05 ± 5.3% (k = 2). Using OSLDs, the calculation slightly underestimates the measurement, with a C/E value equal to 0.96 ± 7.0% (k = 2). (authors)

  13. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. 
Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
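    The OPR calculation described above can be sketched with linear theory and toy sensitivities. This ignores OPR-PPR's actual input-file formats and weighting conventions; the matrices X, w, and z below are invented.

```python
import numpy as np

def prediction_sd(X, w, z):
    """Linear-theory prediction standard deviation sqrt(z^T (X^T W X)^-1 z),
    where X holds observation sensitivities (rows = observations, cols =
    parameters) and z the prediction sensitivities."""
    XtWX = X.T @ np.diag(w) @ X
    return float(np.sqrt(z @ np.linalg.inv(XtWX) @ z))

def opr_percent_increase(X, w, z, omit):
    """Percent increase in prediction sd when observation `omit` is dropped
    from the calibration set (the OPR idea, sketched with toy numbers)."""
    keep = [i for i in range(X.shape[0]) if i != omit]
    sd_all = prediction_sd(X, w, z)
    sd_omit = prediction_sd(X[keep], w[keep], z)
    return 100.0 * (sd_omit - sd_all) / sd_all

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))   # 6 observations, 2 parameters (invented)
w = np.ones(6)                # equal observation weights
z = np.array([1.0, 0.5])      # prediction sensitivity to each parameter
print(opr_percent_increase(X, w, z, omit=0) >= 0.0)  # dropping data never helps
```

Because removing a row only subtracts a positive-semidefinite term from X^T W X, the prediction variance can only grow, so the OPR percent change is never negative.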

  14. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. 
    In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. 
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
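    The grid-based estimator discussed above, density = population size over effective sampling area with a boundary strip derived from mean maximum distance moved, can be sketched as follows; all numbers are invented, not the study's data.

```python
def effective_area_ha(grid_side_m, boundary_strip_m):
    """Effective sampling area of a square trapping grid with a boundary
    strip added on all sides (strip width commonly taken from MMDM)."""
    side = grid_side_m + 2.0 * boundary_strip_m
    return side * side / 10_000.0   # m^2 -> ha

def density_per_ha(n_hat, grid_side_m, boundary_strip_m):
    """Grid-based density estimate D = N / A (toy numbers for illustration)."""
    return n_hat / effective_area_ha(grid_side_m, boundary_strip_m)

# hypothetical: 50 animals estimated on a 100 m grid with a 20 m MMDM-based strip
print(round(density_per_ha(50, 100.0, 20.0), 2))
```

The sensitivity of the result to the strip width is exactly why the paper treats MMDM-based area adjustments with caution.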

  15. Reference values for 27 clinical chemistry tests in 70-year-old males and females.

    PubMed

    Carlsson, Lena; Lind, Lars; Larsson, Anders

    2010-01-01

    Reference values are usually defined based on blood samples from healthy men or nonpregnant women in the age range of 20-50 years. These values are not optimal for elderly patients, as many biological markers change over time, and adequate reference values are important for correct clinical decisions. The aim was to validate NORIP (Nordic Reference Interval Project) reference values in a 70-year-old population. We studied 27 frequently used laboratory tests. The 2.5th and 97.5th percentiles for these markers were calculated according to the recommendations of the International Federation of Clinical Chemistry on the statistical treatment of reference values. Reference values are reported for plasma alanine aminotransferase, albumin, alkaline phosphatase, pancreatic amylase, apolipoprotein A1, apolipoprotein B, aspartate aminotransferase, bilirubin, calcium, chloride, cholesterol, creatinine, creatine kinase, C-reactive protein, glucose, gamma-glutamyltransferase, HDL-cholesterol, iron, lactate dehydrogenase, LDL-cholesterol, magnesium, phosphate, potassium, sodium, transferrin, triglycerides, urate and urea. Reference values calculated from the whole population and from a subpopulation without cardiovascular disease showed strong concordance. Several of the reference interval limits were outside the 90% CI of the Scandinavian population (NORIP).
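    The 2.5th/97.5th percentile limits mentioned above can be computed directly as nonparametric percentiles. This sketch omits the outlier handling and the confidence intervals on the limits that the full IFCC/NORIP procedure includes, and the sample data are simulated.

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric percentile reference interval: the simple analogue of the
    IFCC-recommended treatment of reference values (outlier tests and 90% CIs
    on the limits are omitted here)."""
    v = np.asarray(values, dtype=float)
    lo, hi = np.percentile(v, [low, high])
    return float(lo), float(hi)

# hypothetical analyte results from 200 reference subjects
rng = np.random.default_rng(1)
sample = rng.normal(loc=40.0, scale=5.0, size=200)
lo, hi = reference_interval(sample)
print(lo < 40.0 < hi)
```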

  16. Preliminary remediation goals for use at the U.S. Department of Energy Oak Ridge Operations Office

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-06-01

    This report presents Preliminary Remediation Goals (PRGs) for use in human health risk assessment efforts under the United States Department of Energy, Oak Ridge Operations Office Environmental Restoration (ER) Division. Chemical-specific PRGs are concentration goals for individual chemicals for specific medium and land use combinations. The PRGs are referred to as risk-based because they have been calculated using risk assessment procedures. Risk-based calculations set concentration limits using either carcinogenic or noncarcinogenic toxicity values under specific exposure pathways. A PRG is a concentration derived from a specified excess cancer risk level or hazard quotient. This report provides the ER Division with standardized PRGs, which are integral to the Remedial Investigation/Feasibility Study process. By managing the assumptions and systems used in PRG derivation, the Environmental Restoration Risk Assessment Program will be able to control the level of quality assurance associated with these risk-based guideline values.
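    The kind of risk-based back-calculation the report describes can be sketched generically for a carcinogen. The equation shape below is the textbook one (target risk divided by slope factor times chronic daily intake per unit concentration); the report's actual equations and default exposure parameters should be taken from the report itself, and every number below is hypothetical.

```python
def carcinogenic_prg_mg_per_kg(target_risk, slope_factor, intake_rate_kg_per_day,
                               body_weight_kg, exposure_days, averaging_days):
    """Generic risk-based concentration limit for a carcinogen:
        risk = C * (intake * EF*ED / (BW * AT)) * SF   ->  solve for C.
    A textbook-style sketch only; not the ORO-specific PRG equations."""
    chronic_daily_intake_per_c = (intake_rate_kg_per_day * exposure_days
                                  / (body_weight_kg * averaging_days))
    return target_risk / (slope_factor * chronic_daily_intake_per_c)

# hypothetical soil-ingestion scenario at a 1e-6 target excess cancer risk
prg = carcinogenic_prg_mg_per_kg(1e-6, 1.5, 1e-4, 70.0, 350 * 30, 365 * 70)
print(prg > 0)
```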

  17. Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow

    NASA Astrophysics Data System (ADS)

    Kemerink, G. J.; Pleiter, F.

    1986-08-01

    The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.

  18. Poly(aryl-ether-ether-ketone) as a Possible Metalized Film Capacitor Dielectric: Accurate Description of the Band Gap Through Ab Initio Calculation

    DTIC Science & Technology

    2014-12-01

    from standard HSE06 hybrid functional with α = 0.25 and ω = 0.11 bohr⁻¹ and b) from HSE with α = 0.093 and ω of 0.11 bohr⁻¹...better agreement for the band gap value for future calculations, a systematic study was conducted for the (α, ω) parameter space of the HSE ...orthogonal). Future HSE calculations will be performed with the updated parameters. Fig. 7 Density of States of PEEK based on the optimized

  19. Determination of the Characteristic Values and Variation Ratio for Sensitive Soils

    NASA Astrophysics Data System (ADS)

    Milutinovici, Emilia; Mihailescu, Daniel

    2017-12-01

    In 2008, Romania adopted Eurocode 7, part II, regarding geotechnical investigations, as SR EN1997-2/2008. A previous standard already existed in Romania; by using mathematical statistics in the determination of the calculation values, the requirements of Eurocode can be taken into consideration. The setting of characteristic and calculation values of the geotechnical parameters was finally codified in Romania at the end of 2010 in standard NP122-2010, “Norm regarding determination of the characteristic and calculation values of the geotechnical parameters”. This standard allows the use of data already known from the analysed area when setting the calculation values of geotechnical parameters. Although this possibility exists, it is not easily applied in Romania, since there is no centralized system for the information produced by the geotechnical studies performed for various private or national objectives. Every company performing geotechnical studies tries to organize its own database, but unfortunately none of them can draw on centralized data. When determining the calculation values, an important role is played by the variation ratio of the characteristic values of a geotechnical parameter. The mentioned Norm recommends limits for the variation ratio, but these values apply only to normally consolidated soils of Quaternary age with an organic content < 5%. All of the difficult soils are excluded from the Norm, even though they exist and affect construction foundations on more than half of Romania's surface. One type of difficult soil, extremely widespread on Romanian territory, is contractile soil (with high swelling and shrinkage, very sensitive to seasonal moisture variations); it covers and influences construction foundations on over one third of Romania's territory.
    This work proposes a step toward determining the limits of the variation ratios for the contractile soil category, for the geotechnical parameters most used in Romanian engineering practice, namely the index of consistency and the cohesion.

  20. Diffusion kurtosis imaging of the liver at 3 Tesla: in vivo comparison to standard diffusion-weighted imaging.

    PubMed

    Budjan, Johannes; Sauter, Elke A; Zoellner, Frank G; Lemke, Andreas; Wambsganss, Jens; Schoenberg, Stefan O; Attenberger, Ulrike I

    2018-01-01

    Background Functional techniques like diffusion-weighted imaging (DWI) are gaining more and more importance in liver magnetic resonance imaging (MRI). Diffusion kurtosis imaging (DKI) is an advanced technique that might help to overcome current limitations of DWI. Purpose To evaluate DKI for the differentiation of hepatic lesions in comparison to conventional DWI at 3 Tesla. Material and Methods Fifty-six consecutive patients were examined using a routine abdominal MR protocol at 3 Tesla which included DWI with b-values of 50, 400, 800, and 1000 s/mm². Apparent diffusion coefficient (ADC) maps were calculated applying a standard mono-exponential fit, while a non-Gaussian kurtosis fit was used to obtain DKI maps. ADC as well as kurtosis-corrected diffusion (D) values were quantified by region of interest analysis and compared between lesions. Results Sixty-eight hepatic lesions (hepatocellular carcinoma [HCC], n = 25; hepatic adenoma, n = 4; cysts, n = 18; hepatic hemangioma [HH], n = 18; and focal nodular hyperplasia, n = 3) were identified. Differentiation of malignant and benign lesions was possible based on both DWI ADC and DKI D values (P values ranged from 0.04 to < 0.0001). Conclusion In vivo abdominal DKI calculated using standard b-values is feasible and enables quantitative differentiation between malignant and benign liver lesions. Assessment of conventional ADC values leads to similar results when b-values below 1000 s/mm² are used for DKI calculation.
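    The two fits compared above can be sketched with the study's b-values and a simulated signal; the lesion parameters below are invented, and real data would of course carry noise.

```python
import numpy as np
from scipy.optimize import curve_fit

# b-values from the routine protocol described in the abstract
b = np.array([50.0, 400.0, 800.0, 1000.0])   # s/mm^2

def mono_exp(b, s0, adc):
    """Standard mono-exponential DWI model: S = S0 * exp(-b * ADC)."""
    return s0 * np.exp(-b * adc)

def kurtosis_model(b, s0, d, k):
    """Non-Gaussian DKI model: S = S0 * exp(-b*D + (b*D)^2 * K / 6)."""
    return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

# simulate a noiseless signal with known D and K, then fit both models
true_s0, true_d, true_k = 100.0, 1.2e-3, 0.9   # hypothetical lesion values
signal = kurtosis_model(b, true_s0, true_d, true_k)

(_, adc), _ = curve_fit(mono_exp, b, signal, p0=[100.0, 1e-3])
(_, d_fit, k_fit), _ = curve_fit(kurtosis_model, b, signal, p0=[100.0, 1e-3, 0.5])

# when K > 0 the mono-exponential ADC underestimates the kurtosis-corrected D
print(adc < true_d)
```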

  1. SU-G-TeP3-01: A New Approach for Calculating Variable Relative Biological Effectiveness in IMPT Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, W; Randeniya, K; Grosshans, D

    2016-06-15

    Purpose: To investigate the impact of a new approach for calculating relative biological effectiveness (RBE) in intensity-modulated proton therapy (IMPT) optimization on RBE-weighted dose distributions. This approach includes the nonlinear RBE for the high linear energy transfer (LET) region, which was revealed by recent experiments at our institution. In addition, this approach utilizes RBE data as a function of LET without using dose-averaged LET in calculating RBE values. Methods: We used a two-piece function for calculating RBE from LET. Within the Bragg peak, RBE is linearly correlated with LET. Beyond the Bragg peak, we use a nonlinear (quadratic) RBE function of LET based on our experimental data. The IMPT optimization was devised to incorporate variable RBE by maximizing biological effect (based on the linear-quadratic model) in the tumor and minimizing biological effect in normal tissues. Three glioblastoma patients were retrospectively selected from our institution for this study. For each patient, three optimized IMPT plans were created based on three RBE resolutions, i.e., fixed RBE of 1.1 (RBE-1.1), variable RBE based on a linear RBE-LET relationship (RBE-L), and variable RBE based on a linear and quadratic relationship (RBE-LQ). The RBE-weighted dose distributions of each optimized plan were evaluated in terms of the different RBE values, i.e., RBE-1.1, RBE-L and RBE-LQ. Results: The RBE-weighted doses recalculated from RBE-1.1-based optimized plans increased consistently from RBE-1.1 to RBE-L to RBE-LQ for all three patients. The variable-RBE (RBE-L and RBE-LQ) weighted dose distributions recalculated from RBE-L- and RBE-LQ-based optimization were more homogeneous within the targets and spared the critical structures better than those recalculated from RBE-1.1-based optimization.
    Conclusion: We implemented a new approach for RBE calculation and optimization and demonstrated potential benefits of improving tumor coverage and normal tissue sparing in IMPT planning.
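    The two-piece RBE(LET) idea described in the Methods can be sketched as follows; the linear and quadratic coefficients below are placeholders, not the experimentally fitted values from the abstract.

```python
def rbe_from_let(let_kev_um, beyond_bragg_peak):
    """Two-piece RBE(LET) in the spirit of the abstract: linear in LET up to
    the Bragg peak, quadratic beyond it. Coefficients are hypothetical."""
    if not beyond_bragg_peak:
        return 1.0 + 0.04 * let_kev_um                              # linear piece
    return 1.0 + 0.02 * let_kev_um + 0.003 * let_kev_um ** 2        # quadratic piece

def rbe_weighted_dose(physical_dose_gy, let_kev_um, beyond_bragg_peak=False):
    """RBE-weighted dose = physical dose x RBE(LET), vs. a fixed RBE of 1.1."""
    return physical_dose_gy * rbe_from_let(let_kev_um, beyond_bragg_peak)

# at high LET the quadratic piece exceeds both the linear piece and RBE = 1.1,
# which is why fixed-RBE plans underestimate the dose distal to the target
print(rbe_from_let(10.0, True) > rbe_from_let(10.0, False) > 1.1)
```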

  2. Effect of wave function on the proton induced L XRP cross sections for 62Sm and 74W

    NASA Astrophysics Data System (ADS)

    Shehla, Kaur, Rajnish; Kumar, Anil; Puri, Sanjiv

    2015-08-01

    The Lk (k = l, α, β, γ) X-ray production cross sections have been calculated for 74W and 62Sm at different incident proton energies in the range 1-5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR theory, in order to assess the influence of the wave function on the XRP cross sections. The calculated cross sections have been compared with the measured cross sections reported in a recent compilation to check the reliability of the calculated values.

  3. Rindsel: an R package for phenotypic and molecular selection indices used in plant breeding.

    PubMed

    Perez-Elizalde, Sergío; Cerón-Rojas, Jesús J; Crossa, José; Fleury, Delphine; Alvarado, Gregorio

    2014-01-01

    Selection indices are estimates of the net genetic merit of individual candidates for selection, calculated from phenotypic and molecular marker information collected on plants under selection in a breeding program. They reflect the breeding value of the plants and help breeders choose the best ones for the next generation. Rindsel is an R package that calculates phenotypic and molecular selection indices.
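    As an illustration of what such an index computes, the classical Smith-Hazel phenotypic index (one of the standard indices in this family) weights traits as b = P⁻¹Ga and scores each candidate as the dot product of b with its phenotypes. All matrices below are invented for illustration; they are not Rindsel's API or data.

```python
import numpy as np

def smith_hazel_index_weights(P, G, econ_weights):
    """Smith-Hazel index weights b = P^-1 G a, with P the phenotypic and G the
    genotypic (co)variance matrix and a the vector of economic weights."""
    return np.linalg.solve(P, G @ econ_weights)

P = np.array([[2.0, 0.5], [0.5, 1.0]])   # phenotypic covariances (hypothetical)
G = np.array([[1.0, 0.3], [0.3, 0.6]])   # genotypic covariances (hypothetical)
a = np.array([1.0, 2.0])                 # economic weights (hypothetical)

b = smith_hazel_index_weights(P, G, a)
candidates = np.array([[3.0, 1.0], [1.0, 2.5]])   # phenotypes of two plants
scores = candidates @ b                            # index value per candidate
print(scores.shape == (2,))   # one net-merit score per candidate
```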

  4. Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.

    PubMed

    Cave, N J; Koo, S T

    2015-01-01

    An electrochemical approach to the assessment of acid-base status should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were studied prospectively as a convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH) for which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated (Hcal+) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and Hcal+ was then compared with the dog's actual pH (Hmeasured+). To determine the source of discordance between Hcal+ and Hmeasured+, the calculations were repeated using a series of substituted values for Atot and Ka. Hcal+ did not approximate Hmeasured+ for any dog (P = 0.499, r(2) = 0.068) and was consistently more basic. Substituted values of Atot and Ka did not significantly improve the accuracy (r(2) = 0.169 to <0.001). Substituting the effective SID (Atot-[HCO3-]) produced a strong association between Hcal+ and Hmeasured+ (r(2) = 0.977). Using the simplified strong ion equation and the published values for Atot and Ka does not appear to provide a quantitative explanation for the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests that the error lies in calculating the SID.
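    The SSIE prediction can be sketched as a charge-balance root-finding problem. The functional form below is the common simplified strong ion model (SID balanced by bicarbonate plus the dissociated nonvolatile weak acid); all constants and inputs are illustrative, not the study's dog-specific values.

```python
import math
from scipy.optimize import brentq

def ssie_ph(sid, atot, ka, pco2, s_co2=0.0307, k1=10**-6.1):
    """Simplified strong ion model sketch: SID (mEq/L) is balanced by
    bicarbonate plus dissociated nonvolatile weak acid (both mmol/L),
        SID = s_co2 * pCO2 * K1 / [H+]  +  Ka * Atot / (Ka + [H+]),
    solved numerically for [H+] in mol/L. Illustrative constants only."""
    def balance(h):
        hco3 = s_co2 * pco2 * k1 / h        # bicarbonate, mmol/L
        a_minus = ka * atot / (ka + h)      # dissociated weak acid, mmol/L
        return sid - hco3 - a_minus

    return -math.log10(brentq(balance, 1e-9, 1e-5))

# hypothetical dog: SID 38 mEq/L, Atot 17 mmol/L, pKa 7.12, pCO2 40 mmHg
print(round(ssie_ph(38.0, 17.0, 10**-7.12, 40.0), 2))
```

As the abstract notes, the prediction is only as good as the SID estimate fed into the balance.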

  5. Diagnosis of cervical cells based on fractal and Euclidian geometrical measurements: Intrinsic Geometric Cellular Organization

    PubMed Central

    2014-01-01

    Background Fractal geometry has been the basis for the development of a diagnosis of preneoplastic and neoplastic cells that resolves the indeterminacy of the atypical squamous cells of undetermined significance (ASCUS). Methods Pictures of 40 cervix cytology samples diagnosed with conventional parameters were taken. A blind study was developed in which the clinical diagnosis of 10 normal cells, 10 ASCUS, 10 L-SIL and 10 H-SIL was masked. Cellular nucleus and cytoplasm were evaluated in the generalized Box-Counting space, calculating the fractal dimension and the number of spaces occupied by the frontier of each object. Further, the number of pixels occupied by the surface of each object was calculated. Later, the mathematical features of the measures were studied to establish differences or equalities useful for diagnostic application. Finally, the sensitivity, specificity, negative likelihood ratio and diagnostic concordance with the Kappa coefficient were calculated. Results Simultaneous measures of the nuclear surface and the subtraction between the boundaries of cytoplasm and nucleus differentiate normality, L-SIL and H-SIL. Normality shows values less than or equal to 735 in nucleus surface and values greater than or equal to 161 in the cytoplasm-nucleus subtraction. L-SIL cells exhibit a nucleus surface with values greater than or equal to 972 and a cytoplasm-nucleus subtraction higher than 130. H-SIL cells show cytoplasm-nucleus values less than 120. The range 120–130 in the cytoplasm-nucleus subtraction corresponds to evolution between L-SIL and H-SIL. Sensitivity and specificity values were 100%, the negative likelihood ratio was zero and the Kappa coefficient was equal to 1. Conclusions A new diagnostic methodology of clinical applicability was developed based on fractal and Euclidean geometry, which is useful for evaluation of cervix cytology. PMID:24742118
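    The Box-Counting measure used above can be sketched generically for a binary image: count the boxes of side s that contain foreground, then fit log N(s) against log(1/s). This is a textbook implementation, not the paper's generalized Box-Counting space.

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image by
    counting occupied boxes at several scales and fitting the log-log slope."""
    counts = []
    h, w = image.shape
    for s in box_sizes:
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if image[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, float)),
                          np.log(counts), 1)
    return slope

# sanity check: a filled square should come out close to dimension 2
filled = np.ones((64, 64), dtype=bool)
print(abs(box_counting_dimension(filled) - 2.0) < 0.01)
```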

  6. Relative effect potency estimates of dioxin-like activity for dioxins, furans, and dioxin-like PCBs in adults based on two thyroid outcomes.

    PubMed

    Trnovec, Tomáš; Jusko, Todd A; Šovčíková, Eva; Lancz, Kinga; Chovancová, Jana; Patayová, Henrieta; Palkovičová, L'ubica; Drobná, Beata; Langer, Pavel; Van den Berg, Martin; Dedik, Ladislav; Wimmerová, Soňa

    2013-08-01

    Toxic equivalency factors (TEFs) are an important component in the risk assessment of dioxin-like human exposures. At present, this concept is based mainly on in vivo animal experiments using oral dosage. Consequently, the current human TEFs derived from mammalian experiments are applicable only for exposure situations in which oral ingestion occurs. Nevertheless, these "intake" TEFs are commonly, but incorrectly, used by regulatory authorities to calculate "systemic" toxic equivalents (TEQs) based on human blood and tissue concentrations, which are used as biomarkers for either exposure or effect. We sought to determine relative effect potencies (REPs) for systemic human concentrations of dioxin-like mixture components using thyroid volume or serum free thyroxine (FT4) concentration as the outcomes of interest. We used a benchmark concentration and a regression-based approach to compare the strength of association between each dioxin-like compound and the thyroid end points in 320 adults residing in an organochlorine-polluted area of eastern Slovakia. REPs calculated from thyroid volume and FT4 were similar. The regression coefficient (β)-derived REP data from thyroid volume and FT4 level were correlated with the World Health Organization (WHO) TEF values (Spearman r = 0.69, p = 0.01 and r = 0.62, p = 0.03, respectively). The calculated REPs were mostly within the minimum and maximum values for in vivo REPs derived by other investigators. Our REPs calculated from thyroid end points realistically reflect human exposure scenarios because they are based on chronic, low-dose human exposures and on biomarkers reflecting body burden. Compared with previous results, our REPs suggest higher sensitivity to the effects of dioxin-like compounds.
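    The TEQ that the abstract says is commonly (if incorrectly) computed from blood concentrations is simply a TEF-weighted sum over congeners. A sketch with example WHO 2005 mammalian TEF values and hypothetical concentrations:

```python
def toxic_equivalents(concentrations_pg_g, tefs):
    """TEQ = sum over congeners of concentration x TEF. TEFs shown are
    examples of WHO 2005 mammalian values; concentrations are invented."""
    return sum(concentrations_pg_g[c] * tefs[c] for c in concentrations_pg_g)

tefs = {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "PCB-126": 0.1}
sample = {"2,3,7,8-TCDD": 2.0, "1,2,3,7,8-PeCDD": 1.5, "PCB-126": 12.0}
print(round(toxic_equivalents(sample, tefs), 2))  # 2.0 + 1.5 + 1.2 = 4.7 pg TEQ/g
```

Swapping the TEF table for outcome-specific REPs, as the study proposes, changes only the weights in this sum.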

  7. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  8. Length dependence of electron transport through molecular wires--a first principles perspective.

    PubMed

    Khoo, Khoong Hong; Chen, Yifeng; Li, Suchun; Quek, Su Ying

    2015-01-07

    One-dimensional wires constitute a fundamental building block in nanoscale electronics. However, truly one-dimensional metallic wires do not exist due to Peierls distortion. Molecular wires come close to being stable one-dimensional wires, but are typically semiconductors, with charge transport occurring via tunneling or thermally-activated hopping. In this review, we discuss electron transport through molecular wires, from a theoretical, quantum mechanical perspective based on first principles. We focus specifically on the off-resonant tunneling regime, applicable to shorter molecular wires (<∼4-5 nm) where quantum mechanics dictates electron transport. Here, conductance decays exponentially with the wire length, with an exponential decay constant, beta, that is independent of temperature. Different levels of first principles theory are discussed, starting with the computational workhorse - density functional theory (DFT), and moving on to many-electron GW methods as well as GW-inspired DFT + Sigma calculations. These different levels of theory are applied in two major computational frameworks - complex band structure (CBS) calculations to estimate the tunneling decay constant, beta, and Landauer-Buttiker transport calculations that consider explicitly the effects of contact geometry, and compute the transmission spectra directly. In general, for the same level of theory, the Landauer-Buttiker calculations give more quantitative values of beta than the CBS calculations. However, the CBS calculations have a long history and are particularly useful for quick estimates of beta. Comparing different levels of theory, it is clear that GW and DFT + Sigma calculations give significantly improved agreement with experiment compared to DFT, especially for the conductance values. Quantitative agreement can also be obtained for the Seebeck coefficient - another independent probe of electron transport. 
This excellent agreement provides confirmative evidence of off-resonant tunneling in the systems under investigation. Calculations show that the tunneling decay constant beta is a robust quantity that does not depend on details of the contact geometry, provided that the same contact geometry is used for all molecular lengths considered. However, because conductance is sensitive to contact geometry, values of beta obtained by considering conductance values where the contact geometry is changing with the molecular junction length can be quite different. Experimentally measured values of beta in general compare well with beta obtained using DFT + Sigma and GW transport calculations, while discrepancies can be attributed to changes in the experimental contact geometries with molecular length. This review also summarizes experimental and theoretical efforts towards finding perfect molecular wires with high conductance and small beta values.
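    The exponential length dependence discussed throughout, G = G_c·exp(-beta·L), means beta can be recovered as the negative slope of ln G versus wire length. A sketch with synthetic data (the beta value below is invented, merely of the order reported for saturated wires):

```python
import numpy as np

def tunneling_beta(lengths_nm, conductances_g0):
    """In the off-resonant tunneling regime G = G_c * exp(-beta * L), so beta
    is minus the slope of ln G vs. L (assumes fixed contact geometry, as the
    review stresses)."""
    slope, _ = np.polyfit(lengths_nm, np.log(conductances_g0), 1)
    return -slope

lengths = np.array([1.0, 1.5, 2.0, 2.5])   # molecular lengths, nm (synthetic)
g = 1e-3 * np.exp(-0.8 * lengths)          # synthetic conductances, beta = 0.8/nm
print(round(tunneling_beta(lengths, g), 3))
```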

  9. Model-based coefficient method for calculation of N leaching from agricultural fields applied to small catchments and the effects of leaching reducing measures

    NASA Astrophysics Data System (ADS)

    Kyllmar, K.; Mårtensson, K.; Johnsson, H.

    2005-03-01

    A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha⁻¹ yr⁻¹, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.

  10. A 3PG-based Model to Simulate Delta-13C Content in Three Tree Species in The Mica Creek Experiment Watershed, Idaho

    NASA Astrophysics Data System (ADS)

    Wei, L.; Marshall, J. D.

    2007-12-01

    3PG (Physiological Principles in Predicting Growth), a process-based physiological model of forest productivity, has been widely used and well validated. Based on 3PG, a 3PG-δ13C model to simulate δ13C content in plant tissue is built in this research. 3PG calculates carbon assimilation from utilizable absorbed photosynthetically active radiation (PAR), and calculates stomatal conductance from maximum canopy conductance multiplied by a physiological modifier that includes the effects of water vapor deficit and soil water. The equation of Farquhar and Sharkey (1982) is then used to calculate δ13C content in the plant. Five even-aged coniferous forest stands located near Clarkia, Idaho (47°15'N, 115°25'W) in the Mica Creek Experimental Watershed (MCEW) were chosen to test the model (two stands had been partially cut, with 50% canopy removal in 1990, and three were uncut). MCEW has been extensively investigated since 1990 and many of the parameters needed for 3PG are readily available. Each site is located near a UI meteorological station, which has recorded half-hourly climatic data since 2003. These site-specific climatic data were extended back to 1991 by correlating with data from a nearby SNOTEL station (SNOwpack TELemetry, NRCS, 47°9' N, 116°16' W). Forest mensuration data were obtained from each stand using variable radius plots (VRP). Three tree species, which constitute more than 95% of all trees, were parameterized for the 3PG model: grand fir (Abies grandis Donn ex D. Don), western red cedar (Thuja plicata Donn ex D. Don) and Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.) Franco). Because four of the five stands have mixed species, we also used parameters for mixed stands to run the model. To stabilize it, the model was initially run with average climatic data for 20 years, and then run with the actual climatic data from 1991 to 2006.
As 3PG runs on a monthly time step, monthly δ13C values were calculated first, and yearly values were then calculated as weighted averages. To test the model, tree cores were collected from each stand and species. Ring widths of the tree cores were measured and cross-dated against a ring-width chronology obtained from MCEW, and the δ13C contents of tree-ring samples from known years were tested. Preliminary results indicate that the 3PG-δ13C simulated values are consistent with the observed values in tree rings. The δ13C values of the modeled species differ: western red cedar has the highest δ13C values among the three species and western larch has the lowest.
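The Farquhar-type discrimination step used here can be written compactly: discrimination Δ = a + (b − a)·ci/ca, and leaf δ13C ≈ δ13C of air − Δ. The fractionation constants a (diffusion) and b (Rubisco carboxylation) and the ci/ca value below are textbook-style illustrative numbers, not parameters fitted in this study:

```python
def delta13c_leaf(delta13c_air=-8.0, ci_over_ca=0.7, a=4.4, b=27.0):
    """Simple Farquhar-type discrimination model (all values in per mil).
    a: fractionation during diffusion through stomata; b: fractionation
    by Rubisco carboxylation; ci/ca: intercellular-to-ambient CO2 ratio.
    Uses the common linear approximation delta_leaf = delta_air - Delta."""
    discrimination = a + (b - a) * ci_over_ca
    return delta13c_air - discrimination

print(round(delta13c_leaf(), 2))  # -> -28.22
```

Lower ci/ca (e.g. under water stress, when stomata close) reduces discrimination and makes leaf δ13C less negative, which is the signal the 3PG-δ13C model exploits.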

  11. The experimental and calculated characteristics of 22 tapered wings

    NASA Technical Reports Server (NTRS)

    Anderson, Raymond F

    1938-01-01

    The experimental and calculated aerodynamic characteristics of 22 tapered wings are compared, using tests made in the variable-density wind tunnel. The wings had aspect ratios from 6 to 12 and taper ratios from 1.6:1 to 5:1. The compared characteristics are the pitching moment, the aerodynamic-center position, the lift-curve slope, the maximum lift coefficient, and the curves of drag. The method of obtaining the calculated values is based on the use of wing theory and experimentally determined airfoil section data. In general, the experimental and calculated characteristics are in sufficiently good agreement that the method may be applied to many problems of airplane design.

  12. Minimization of Defective Products in The Department of Press Bridge & Rib Through Six Sigma DMAIC Phases

    NASA Astrophysics Data System (ADS)

    Rochman, YA; Agustin, A.

    2017-06-01

    This study applies the Six Sigma DMAIC approach (Define, Measure, Analyze, Improve/Implement, Control) to minimize the number of defective products in the bridge & rib department. Five defect types were the most dominant: broken rib, broken sound board, strained rib, rib sliding, and sound board minor defects. The overarching objective is to improve quality through the DMAIC phases. In the define phase, the critical-to-quality (CTQ) parameters for minimizing product defects were identified using a Pareto chart and FMEA, and waste was identified from the current value stream map. In the measure phase, control limits were specified to monitor product variation, and the DPMO (Defects Per Million Opportunities) and the corresponding sigma level were calculated. In the analyze phase, the most dominant defect types were determined and the causes of defective products identified. In the improve phase, the existing design was modified through alternative solutions developed in brainstorming sessions; solutions were identified from the FMEA results, and improvements were made to the seven priority causes of defects with the highest RPN values. The control phase focuses on sustaining the improvements made. Proposed improvements include defining standard operating procedures, improving quality, and eliminating defective products as waste.
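The measure-phase quantities named above follow standard Six Sigma formulas: DPMO = defects / (units × opportunities per unit) × 10⁶, and the short-term sigma level is the normal quantile of the yield plus the conventional 1.5-sigma shift. A sketch with illustrative defect counts (not the study's data):

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects Per Million Opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level with the conventional 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

# Illustrative counts: 120 defects over 1000 units, 5 opportunities each
d = dpmo(defects=120, units=1000, opportunities_per_unit=5)
print(d)                         # -> 24000.0
print(round(sigma_level(d), 2))  # -> 3.48
```

A process at 3.4 DPMO corresponds to the nominal "six sigma" level under the same convention.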

  13. Estimation and application of indicator values for common macroinvertebrate genera and families of the United States

    USGS Publications Warehouse

    Carlisle, D.M.; Meador, M.R.; Moulton, S.R.; Ruhl, P.M.

    2007-01-01

    Tolerance of macroinvertebrate taxa to chemical and physical stressors is widely used in the analysis and interpretation of bioassessment data, but many estimates lack empirical bases. Our main objective was to estimate genus- and family-level indicator values (IVs) from a data set of macroinvertebrate communities and chemical and physical stressors, collected in a consistent manner throughout the United States. We then demonstrated an application of these IVs to detect alterations in benthic macroinvertebrate assemblages along gradients of urbanization in New England and Alabama. Principal components analysis (PCA) was used to create synthetic gradients of chemical stressors, for which genus- and family-level weighted averages (WAs) were calculated. Based on the results of PCA, WAs were calculated for three synthetic gradients (ionic concentration, nutrient concentration, and dissolved oxygen/water temperature) and two uncorrelated physical variables (suspended sediment concentration and percent fines). Indicator values for each stress gradient were subsequently created by transforming WAs into ten ordinal ranks based on percentiles of values across all taxa. Mean IVs of genera and families were highly correlated with road density in Alabama and New England, and supported the conclusions of independent assessments of the chemical and physical stressors acting in each geographic area. Family IVs were nearly as responsive to urbanization as genus IVs. The limitations of widespread use of these IVs are discussed.

  14. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    PubMed

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead.
This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
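Of the three wake models, the Kutta-Joukowski estimate is the simplest to state: L = ρUΓb, where Γ is the measured circulation and b the vortex span. A sketch with illustrative values of roughly the right order for a small (~30 g) bird; none of these numbers are taken from the study's measurements:

```python
def kutta_joukowski_lift(rho, speed, circulation, vortex_span):
    """Quasi-steady lift from the Kutta-Joukowski theorem, L = rho*U*Gamma*b,
    where Gamma is the measured circulation and b is the vortex span."""
    return rho * speed * circulation * vortex_span

# Illustrative values (assumptions, not measured data)
rho = 1.2        # air density, kg/m^3
U = 2.0          # flight speed, m/s
gamma = 1.0      # circulation, m^2/s
b = 0.12         # vortex span, m
lift = kutta_joukowski_lift(rho, U, gamma, b)
print(round(lift, 3))  # -> 0.288 (N), comparable to a 30 g bird's weight
```

The sensitivity the authors report follows directly from this form: lift scales linearly with the assumed vortex span b, so uncertainty in b propagates one-to-one into the weight-support estimate.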

  15. On the validity of microscopic calculations of double-quantum-dot spin qubits based on Fock-Darwin states

    NASA Astrophysics Data System (ADS)

    Chan, GuoXuan; Wang, Xin

    2018-04-01

    We consider two typical approximations that are used in microscopic calculations of double-quantum-dot spin qubits, namely the Heitler-London (HL) and Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation is exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling in Fock-Darwin states, which are accurate only in the case of a single potential well. Our results suggest that the accuracy of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation can only worsen in dimensions higher than one.

  16. Crack propagation modelling for high strength steel welded structural details

    NASA Astrophysics Data System (ADS)

    Mecséri, B. J.; Kövesdi, B.

    2017-05-01

    Nowadays the barrier to applying HSS (High Strength Steel) material in bridge structures is its low fatigue strength relative to yield strength. This paper focuses on the fatigue behaviour of a structural detail (a gusset plate connection) made from NSS and HSS material, which is frequently used in bridges in Hungary. An experimental research program was carried out at the Budapest University of Technology and Economics to investigate the fatigue lifetime of this structural detail type using identical test specimens made from S235 and S420 steel grades. The main aim of the experimental research program is to study the differences in crack propagation and fatigue lifetime between normal and high strength steel structures. Based on the observed fatigue crack pattern, the main direction and velocity of the crack propagation are determined. In parallel to the tests, a finite element (FE) model that can handle crack propagation is also developed. Using the strain data measured in the tests and the values calculated from the FE model, the material parameters of the Paris law are approximated step-by-step and the calculated values evaluated. The same material properties are determined for both NSS and HSS specimens, and the differences are discussed. In the current paper, the results of the experiments, the calculation method for the material parameters and the calculated values are presented.
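The Paris law underlying such crack-propagation calculations, da/dN = C·(ΔK)^m with ΔK = Y·Δσ·√(πa), can be integrated numerically to estimate cycles to grow a crack between two sizes. A hedged sketch; C, m, the stress range, and the geometry factor Y below are generic illustrative values (in the range often quoted for steels), not parameters fitted in these tests:

```python
import math

def paris_cycles(a0, af, dsigma, C, m, Y=1.0, steps=20000):
    """Midpoint-rule integration of da/dN = C*(dK)^m with
    dK = Y*dsigma*sqrt(pi*a), giving cycles to grow a crack
    from length a0 to af (SI units: m, Pa)."""
    n_cycles = 0.0
    da = (af - a0) / steps
    a = a0
    for _ in range(steps):
        dk = Y * dsigma * math.sqrt(math.pi * (a + da / 2))
        n_cycles += da / (C * dk ** m)
        a += da
    return n_cycles

# Illustrative parameters: 1 mm -> 10 mm crack, 100 MPa stress range
N = paris_cycles(a0=1e-3, af=1e-2, dsigma=100e6, C=3e-31, m=3.0)
print(f"{N:.3e}")  # on the order of 2.6e7 cycles for these inputs
```

For m ≠ 2 the integral also has a closed form, which is a useful cross-check on the numerical result.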

  17. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample sizes to support such analyses. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of ICER-based approaches to sample size calculation can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that these additional data requirements are a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
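The conventional ICER threshold rule that the paper critiques can be stated in a few lines: compute the ratio of incremental cost to incremental effect and adopt the new intervention when it falls below lambda. The costs, effects, and threshold below are illustrative, not from the paper's numerical example:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio, for delta_effect > 0."""
    return delta_cost / delta_effect

# Illustrative: new intervention costs $12,000 more and yields 0.4 extra
# QALYs; lambda is an assumed willingness-to-pay per QALY.
lam = 50_000.0
ratio = icer(12_000.0, 0.4)
print(ratio)         # -> 30000.0
print(ratio < lam)   # -> True: adopt under the conventional threshold rule
```

The paper's point is precisely that this rule ignores what the same budget could have bought elsewhere, which is why the opportunity cost approach needs the additional data the abstract mentions.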

  18. Natal dispersal and genetic structure in a population of the European wild rabbit (Oryctolagus cuniculus).

    PubMed

    Webb, N J; Ibrahim, K M; Bell, D J; Hewitt, G M

    1995-04-01

    A combination of behavioural observation, DNA fingerprinting, and allozyme analysis was used to examine natal dispersal in a wild rabbit population. Rabbits lived in territorial, warren-based social groups. Over a 6-year period, significantly more male than female rabbits moved to a new social group before the start of their first breeding season. This pattern of female philopatry and male dispersal was reflected in the genetic structure of the population. DNA fingerprint band-sharing coefficients were significantly higher for females within the same group than for females between groups, while this was not the case for males. Wright's inbreeding coefficients were calculated from fingerprint band-sharing values and compared to those obtained from allozyme data. There was little correlation between the relative magnitudes of the F-statistics calculated using the two techniques for comparisons between different social groups. In contrast, two alternative methods for calculating FST from DNA fingerprints gave reasonably concordant values, although those based on band-sharing were consistently lower than those calculated by an 'allele' frequency approach. A negative FIS value was obtained from allozyme data. Such excess heterozygosity within social groups is expected even under random mating given the social structure and sex-biased dispersal, but it is argued that the possibility of behavioural avoidance of inbreeding should not be discounted in this species. Estimates of genetic differentiation obtained from allozyme and DNA fingerprint data agreed closely with reported estimates for the yellow-bellied marmot, a species with a very similar social structure to the European rabbit.

  19. Calculation of exchange coupling constants in triply-bridged dinuclear Cu(II) compounds based on spin-flip constricted variational density functional theory.

    PubMed

    Seidu, Issaka; Zhekova, Hristina R; Seth, Michael; Ziegler, Tom

    2012-03-08

    The performance of the second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) for the calculation of the exchange coupling constant (J) is assessed by application to a series of triply bridged Cu(II) dinuclear complexes. A comparison of the J values based on SF-CV(2)-DFT with those obtained by the broken symmetry (BS) DFT method and experiment is provided. It is demonstrated that our methodology constitutes a viable alternative to the BS-DFT method. The strong dependence of the calculated exchange coupling constants on the applied functionals is demonstrated. Both SF-CV(2)-DFT and BS-DFT afford the best agreement with experiment for hybrid functionals.

  20. The segmentation of Thangka damaged regions based on the local distinction

    NASA Astrophysics Data System (ADS)

    Xuehui, Bi; Huaming, Liu; Xiuyou, Wang; Weilan, Wang; Yashuai, Yang

    2017-01-01

    Damaged regions must be segmented before digitally repairing Thangka cultural relics. A new segmentation algorithm based on local distinction is proposed for segmenting damaged regions, taking into account that some damaged areas have a transition-zone feature, as well as the difference between the damaged regions and their surrounding regions, combining local gray value, local complexity and local definition-complexity (LDC). First, the local complexity is calculated and normalized; second, the local definition-complexity is calculated and normalized; third, the local distinction is calculated; finally, a threshold is set to segment the local-distinction image, over-segmentation is removed, and the final segmentation result is obtained. The experimental results show that our algorithm is effective, and that it can segment damaged frescoes, natural images, etc.

  1. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    PubMed

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits by different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens under controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group or in handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of the limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
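The nonparametric reference-limit calculation such tools automate amounts to taking rank-based 2.5th and 97.5th percentiles of the reference sample. A rough sketch using one common linear-interpolation convention (not necessarily the exact rank formula used by the macros, and without the bootstrap CIs the tool also reports):

```python
def nonparametric_reference_interval(values, low=0.025, high=0.975):
    """Rank-based reference limits via linear interpolation between
    order statistics (one common percentile convention; the classic
    nonparametric method is recommended for n >= 40)."""
    xs = sorted(values)
    n = len(xs)

    def pct(p):
        r = p * (n - 1)          # fractional rank, 0-indexed
        i = int(r)
        frac = r - i
        if i + 1 >= n:
            return xs[-1]
        return xs[i] * (1 - frac) + xs[i + 1] * frac

    return pct(low), pct(high)

data = list(range(1, 101))  # synthetic sample: 1..100
print(nonparametric_reference_interval(data))  # -> (3.475, 97.525)
```

Different rank conventions (e.g. p·(n+1) versus p·(n−1)+1) shift the limits slightly, which is one reason comparing methods side by side, as the tool does, is useful for small samples.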

  2. 40 CFR 600.002-93 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) The average number of miles traveled by an automobile or group of automobiles per volume of fuel consumed as computed in § 600.113 or § 600.207; or (ii) The equivalent petroleum-based fuel economy for an... vehicles, the term means the equivalent petroleum-based fuel economy value as determined by the calculation...

  3. 7 CFR 760.909 - Payment calculation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on 26 percent of the average fair market value of the livestock. (c) The 2005-2007 LIP national payment rate for eligible livestock contract growers is based on 26 percent of the average income loss... this part); (2) For the loss of income from the dead livestock from the party who contracted with the...

  4. 7 CFR 760.909 - Payment calculation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on 26 percent of the average fair market value of the livestock. (c) The 2005-2007 LIP national payment rate for eligible livestock contract growers is based on 26 percent of the average income loss... this part); (2) For the loss of income from the dead livestock from the party who contracted with the...

  5. Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.

    2015-01-01

    Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…

  6. Regression analysis for solving diagnosis problem of children's health

    NASA Astrophysics Data System (ADS)

    Cherkashina, Yu A.; Gerget, O. M.

    2016-04-01

    This paper presents the results of research devoted to the application of statistical techniques, namely regression analysis, to assessing the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, and vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given and a binary logistic regression procedure is discussed. The basic results of the research are presented: a classification table of predicted versus observed values is shown and the overall percentage of correct recognition is determined; the regression coefficients are calculated and the general regression equation written from them. Based on the results of the logistic regression, ROC analysis was performed: the sensitivity and specificity of the model were calculated and ROC curves constructed. These mathematical techniques allow the diagnosis of children's health with a high quality of recognition. The results make a significant contribution to the development of evidence-based medicine and have high practical importance in the professional activity of the author.
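The ROC analysis described here can be reproduced from any classifier's scores with a rank-based AUC and a per-threshold sensitivity/specificity calculation. A self-contained sketch; the labels and scores below are synthetic, not the clinical data of the study:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    random positive case scores higher than a random negative case
    (ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a cutoff."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example: 1 = at-risk, 0 = healthy; scores from a model
labels = [0, 0, 0, 1, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.4, 0.35, 0.5, 0.6, 0.7, 0.8]
print(roc_auc(labels, scores))                       # -> 0.875
print(sensitivity_specificity(labels, scores, 0.5))  # -> (0.75, 0.75)
```

Sweeping the threshold and plotting sensitivity against 1 − specificity traces out the ROC curve whose area the first function computes directly.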

  7. Prediction of Ras-effector interactions using position energy matrices.

    PubMed

    Kiel, Christina; Serrano, Luis

    2007-09-01

    One of the more challenging problems in biology is to determine the cellular protein interaction network. Progress has been made in predicting protein-protein interactions based on structural information, assuming that structurally similar proteins interact in a similar way. In a previous publication, we determined a genome-wide Ras-effector interaction network based on homology models, with high accuracy in predicting binding and non-binding domains. However, for a prediction on a genome-wide scale, homology modelling is a time-consuming process. Here we therefore developed a faster method using position energy matrices: starting from different Ras-effector X-ray template structures, all amino acids in the effector binding domain are sequentially mutated to all other amino acid residues and the effect on binding energy is calculated. These pre-calculated matrices can then be used to score any Ras or effector sequence for binding. Based on position energy matrices, the sequences of putative Ras-binding domains can be scanned quickly to calculate an energy sum value. By calibrating energy sum values against quantitative experimental binding data, thresholds can be defined and non-binding domains excluded quickly. Sequences whose energy sum values are above this threshold are considered potential binding domains and can be further analysed using homology modelling. This prediction method could be applied to other protein families sharing conserved interaction types, in order to determine large-scale cellular protein interaction networks quickly. It could thus have an important impact on future in silico structural genomics approaches, in particular with regard to increasing structural proteomics efforts aiming to determine all possible domain folds and interaction types. All matrices are deposited in the ADAN database (http://adan-embl.ibmc.umh.es/). Supplementary data are available at Bioinformatics online.
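The energy-sum screening idea can be sketched as a position-wise matrix lookup followed by a threshold test. The matrix entries, threshold, and sequences below are hypothetical placeholders for illustration only, not values from the ADAN database or the Ras-effector matrices:

```python
# Hypothetical position energy matrix: matrix[i][aa] is the (assumed)
# contribution of residue aa at position i to the predicted binding
# score; higher totals indicate stronger predicted binding.
matrix = [
    {"A": 1.0, "K": -0.5, "R": -0.4, "D": 2.0},
    {"A": -0.2, "K": 1.5, "R": 1.2, "D": -1.0},
    {"A": 0.0, "K": -0.3, "R": -0.1, "D": 0.8},
]

def energy_sum(seq, matrix, default=0.0):
    """Score a candidate binding-domain sequence position by position."""
    return sum(pos.get(aa, default) for pos, aa in zip(matrix, seq))

# Threshold calibrated against experimental binding data (assumed value);
# sequences scoring above it are kept as potential binders.
THRESHOLD = 1.5

for seq in ["AKD", "DKA", "KAD"]:
    score = energy_sum(seq, matrix)
    print(seq, score, "potential binder" if score > THRESHOLD else "excluded")
```

Because each candidate is scored by a handful of additions, whole proteomes can be pre-screened this way, with only the above-threshold hits passed on to the slower homology modelling step.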

  8. Retooling Predictive Relations for non-volatile PM by Comparison to Measurements

    NASA Astrophysics Data System (ADS)

    Vander Wal, R. L.; Abrahamson, J. P.

    2015-12-01

    Non-volatile particulate matter (nvPM) emissions from jet aircraft at cruise altitude are of particular interest for climate and atmospheric processes, but are difficult to measure and are normally approximated. To provide such inventory estimates, the present approach is to use measured ground-based values with scaling to cruise (engine operating) conditions. Several points are raised by this approach. The first is which ground-based values to use. Empirical and semi-empirical approaches, such as the revised first-order approximation (FOA3) and formation-oxidation (FOX) methods, each with embedded assumptions, are available to calculate a ground-based black carbon concentration, CBC. The second is the scaling relation, which can depend upon the ratios of fuel-air equivalence, pressure, and combustor flame temperature. We are using measured ground-based values to evaluate the accuracy of present methods, towards developing alternative methods for CBC by smoke number or via a semi-empirical kinetic method for the specific engine, the CFM56-2C, representative of a rich-dome-style combustor and one of the most prevalent engine families in commercial use. Applying scaling relations to measured ground-based values and comparing to measurements at cruise evaluates the accuracy of the current scaling formalism. In partnership with GE Aviation, performing engine cycle deck calculations enables critical comparison between estimated or predicted thermodynamic parameters and true (engine) operational values for the CFM56-2C engine. Such specific comparisons allow differences between predictive estimates for, and measurements of, nvPM to be traced to their origin, as either divergence of input parameters or the functional form of the predictive relations. Such insights will lead to the development of new predictive tools for jet aircraft nvPM emissions. Such validated relations can then be extended to alternative fuels with confidence in operational thermodynamic values and functional form.
Comparisons will then be made between these new predictive relationships and measurements of nvPM from alternative fuels using ground and cruise data - as collected during NASA-led AAFEX and ACCESS field campaigns, respectively.

  9. Using Zipf-Mandelbrot law and graph theory to evaluate animal welfare

    NASA Astrophysics Data System (ADS)

    de Oliveira, Caprice G. L.; Miranda, José G. V.; Japyassú, Hilton F.; El-Hani, Charbel N.

    2018-02-01

    This work deals with the construction and testing of metrics of welfare based on behavioral complexity, using assumptions derived from Zipf-Mandelbrot law and graph theory. To test these metrics we compared yellow-breasted capuchins (Sapajus xanthosternos) (Wied-Neuwied, 1826) (PRIMATES CEBIDAE) found in two institutions, subjected to different captive conditions: a Zoobotanical Garden (hereafter, ZOO; n = 14), in good welfare condition, and a Wildlife Rescue Center (hereafter, WRC; n = 8), in poor welfare condition. In the Zipf-Mandelbrot-based analysis, the power law exponent was calculated using behavior frequency values versus behavior rank value. These values allow us to evaluate variations in individual behavioral complexity. For each individual we also constructed a graph using the sequence of behavioral units displayed in each recording (average recording time per individual: 4 h 26 min in the ZOO, 4 h 30 min in the WRC). Then, we calculated the values of the main graph attributes, which allowed us to analyze the complexity of the connectivity of the behaviors displayed in the individuals' behavioral sequences. We found significant differences between the two groups for the slope values in the Zipf-Mandelbrot analysis. The slope values for the ZOO individuals approached -1, with graphs representing a power law, while the values for the WRC individuals diverged from -1, differing from a power law pattern. Likewise, we found significant differences for the graph attributes average degree, weighted average degree, and clustering coefficient when comparing the ZOO and WRC individual graphs. However, no significant difference was found for the attributes modularity and average path length. Both analyses were effective in detecting differences between the patterns of behavioral complexity in the two groups. The slope values for the ZOO individuals indicated a higher behavioral complexity when compared to the WRC individuals. 
Similarly, graph construction and the calculation of its attributes values allowed us to show that the complexity of the connectivity among the behaviors was higher in the ZOO than in the WRC individual graphs. These results show that the two measuring approaches introduced and tested in this paper were capable of capturing the differences in welfare levels between the two conditions, as shown by differences in behavioral complexity.
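The Zipf-Mandelbrot step described above reduces to a log-log least-squares fit of behavior frequency against behavior rank, with a slope near -1 indicating a power-law (Zipf-like) repertoire. A minimal sketch with synthetic counts (not the authors' code) might look like:

```python
import numpy as np

def zipf_slope(frequencies):
    """Fit the Zipf exponent: log(frequency) versus log(rank).

    frequencies: counts of each behavioral unit (any order).
    Returns the slope of the least-squares line in log-log space;
    values near -1 indicate a Zipf-like power-law distribution.
    """
    freq = np.sort(np.asarray(frequencies, dtype=float))[::-1]  # rank 1 = most frequent
    ranks = np.arange(1, len(freq) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freq), 1)
    return slope

# An ideal Zipf distribution (frequency proportional to 1/rank) gives slope -1.
ideal = [1.0 / r for r in range(1, 51)]
print(round(zipf_slope(ideal), 2))  # -1.0
```

A flatter or steeper slope for an individual's behavior counts would then be read, as in the study, as a deviation from the power-law pattern.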

  10. Theoretical Evaluation of Crosslink Density of Chain Extended Polyurethane Networks Based on Hydroxyl Terminated Polybutadiene and Butanediol and Comparison with Experimental Data

    NASA Astrophysics Data System (ADS)

    Sekkar, Venkataraman; Alex, Ancy Smitha; Kumar, Vijendra; Bandyopadhyay, G. G.

    2018-01-01

Polyurethane networks between hydroxyl terminated polybutadiene (HTPB) and butanediol (BD) were prepared using toluene diisocyanate (TDI) as the curative. HTPB and BD were taken at equivalent ratios of 1:0, 1:1, 1:2, 1:4, and 1:8. Crosslink density (CLD) was theoretically calculated using the α-model equations developed by Marsh. CLD for the polyurethane networks was experimentally evaluated from equilibrium swell and stress-strain data. Young's modulus and Mooney-Rivlin approaches were adopted to calculate CLD from stress-strain data. Experimentally obtained CLD values were considerably higher than theoretical values, especially at higher BD/HTPB equivalent ratios. The difference between the theoretical and experimental CLD values was explained in terms of local crystallization due to the formation of hard segments and hydrogen-bonded interactions.

  11. A web site for calculating the degree of chirality.

    PubMed

    Zayit, Amir; Pinsky, Mark; Elgavi, Hadassah; Dryzun, Chaim; Avnir, David

    2011-01-01

The web site, http://www.csm.huji.ac.il/, uses the Continuous Chirality Measure to evaluate quantitatively the degree of chirality of a molecule, a structure, or a fragment. The value of this measure ranges from zero (the molecule is achiral) to higher values (the upper limit is 100); the higher the chirality value, the more chiral the molecule is. The measure is based on the distance between the chiral molecule and the nearest structure that is achiral. Questions such as the following can be addressed: By how much is one molecule more chiral than another? How does chirality change along conformational motions? Is there a correlation between chirality and enantioselectivity in a series of molecules? Both elementary and advanced features are offered. Related calculation options are the symmetry measures and shape measures. Copyright © 2009 Wiley-Liss, Inc.

  12. 7 CFR 760.811 - Rates and yields; calculating payments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...

  13. 7 CFR 760.811 - Rates and yields; calculating payments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...

  14. 7 CFR 760.811 - Rates and yields; calculating payments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...

  15. 7 CFR 760.811 - Rates and yields; calculating payments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...

  16. 7 CFR 760.811 - Rates and yields; calculating payments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...

  17. Odontological light-emitting diode light-curing unit beam quality.

    PubMed

    de Magalhães Filho, Thales Ribeiro; Weig, Karin de Mello; Werneck, Marcelo Martins; da Costa Neto, Célio Albano; da Costa, Marysilvia Ferreira

    2015-05-01

The distribution of light intensity of three light-curing units (LCUs) used to cure resin-based composites for dental fillings was analyzed, and a homogeneity index [flat-top factor (FTF)] was calculated. The index is based on the M2 index, which is used for laser beams. An optical spectrum analyzer was used with an optical fiber to produce an x-y power profile of each LCU light guide. The calculated FTF values were 0.51 for LCU1; 0.55 for LCU2, the best of the three, although still far from the perfect FTF = 1; and 0.27 for LCU3, the poorest value, below even the Gaussian FTF = 0.5. All LCUs presented notably heterogeneous light distribution, which can lead professionals and researchers to produce samples with irregular polymerization and poor mechanical properties.

  18. Odontological light-emitting diode light-curing unit beam quality

    NASA Astrophysics Data System (ADS)

    de Magalhães Filho, Thales Ribeiro; Weig, Karin de Mello; Werneck, Marcelo Martins; da Costa Neto, Célio Albano; da Costa, Marysilvia Ferreira

    2015-05-01

The distribution of light intensity of three light-curing units (LCUs) used to cure resin-based composites for dental fillings was analyzed, and a homogeneity index [flat-top factor (FTF)] was calculated. The index is based on the M2 index, which is used for laser beams. An optical spectrum analyzer was used with an optical fiber to produce an x-y power profile of each LCU light guide. The calculated FTF values were 0.51 for LCU1; 0.55 for LCU2, the best of the three, although still far from the perfect FTF = 1; and 0.27 for LCU3, the poorest value, below even the Gaussian FTF = 0.5. All LCUs presented notably heterogeneous light distribution, which can lead professionals and researchers to produce samples with irregular polymerization and poor mechanical properties.

  19. Tomography for two-dimensional gas temperature distribution based on TDLAS

    NASA Astrophysics Data System (ADS)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grids, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. Reconstruction quality improves with the number of rays and levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested; compared with the original method, it is effective in improving the accuracy of the reconstruction results. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual ray absorption value; according to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.

  20. Development of modern approach to absorbed dose assessment in radionuclide therapy, based on Monte Carlo method simulation of patient scintigraphy

    NASA Astrophysics Data System (ADS)

    Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya

    2017-01-01

One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach to estimating the accumulated activity of a radiopharmaceutical (RP) in the tumor volume, based on planar scintigraphic images of the patient and on radiation transport calculated with the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we modeled gamma-camera scintigraphy of a vial containing the activity of RP administered to the patient, with the vial placed at a certain distance from the collimator, and performed a similar study in identical geometry with the same activity of the radiopharmaceutical in the pathological target inside the patient's body. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within this technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be implemented in clinical practice to estimate absorbed doses in the regions of interest from planar scintigraphy of the patient with sufficient accuracy.

  1. Effects of Solid Solution Strengthening Elements Mo, Re, Ru, and W on Transition Temperatures in Nickel-Based Superalloys with High γ'-Volume Fraction: Comparison of Experiment and CALPHAD Calculations

    NASA Astrophysics Data System (ADS)

    Ritter, Nils C.; Sowa, Roman; Schauer, Jan C.; Gruber, Daniel; Goehler, Thomas; Rettig, Ralf; Povoden-Karadeniz, Erwin; Koerner, Carolin; Singer, Robert F.

    2018-06-01

    We prepared 41 different superalloy compositions by an arc melting, casting, and heat treatment process. Alloy solid solution strengthening elements were added in graded amounts, and we measured the solidus, liquidus, and γ'-solvus temperatures of the samples by DSC. The γ'-phase fraction increased as the W, Mo, and Re contents were increased, and W showed the most pronounced effect. Ru decreased the γ'-phase fraction. Melting temperatures (i.e., solidus and liquidus) were increased by addition of Re, W, and Ru (the effect increased in that order). Addition of Mo decreased the melting temperature. W was effective as a strengthening element because it acted as a solid solution strengthener and increased the fraction of fine γ'-precipitates, thus improving precipitation strengthening. Experimentally determined values were compared with calculated values based on the CALPHAD software tools Thermo-Calc (databases: TTNI8 and TCNI6) and MatCalc (database ME-NI). The ME-NI database, which was specially adapted to the present investigation, showed good agreement. TTNI8 also showed good results. The TCNI6 database is suitable for computational design of complex nickel-based superalloys. However, a large deviation remained between the experiment results and calculations based on this database. It also erroneously predicted γ'-phase separations and failed to describe the Ru-effect on transition temperatures.

  2. Study of fatigue crack propagation in Ti-1Al-1Mn based on the calculation of cold work evolution

    NASA Astrophysics Data System (ADS)

    Plekhov, O. A.; Kostina, A. A.

    2017-05-01

The work proposes a numerical method for lifetime assessment of metallic materials based on consideration of the energy balance at the crack tip. The method is based on evaluating the value of the stored energy per loading cycle. To calculate the stored and dissipated parts of the deformation energy, an elasto-plastic phenomenological model of the energy balance in metals under deformation and failure processes was proposed. The key point of the model is a strain-type internal variable describing the energy storage process. This parameter is introduced, based on a statistical description of defect evolution in metals, as a second-order tensor and has the meaning of an additional strain due to the initiation and growth of defects. The fatigue crack rate was calculated in the framework of a stationary crack approach (several loading cycles were considered for every crack length to estimate the energy balance at the crack tip). The application of the proposed algorithm is illustrated by calculating the lifetime of a Ti-1Al-1Mn compact tension specimen under cyclic loading.

  3. Predictive isotopic biogeochemistry: hydrocarbons from anoxic marine basins

    NASA Technical Reports Server (NTRS)

    Freeman, K. H.; Wakeham, S. G.; Hayes, J. M.

    1994-01-01

    Carbon isotopic compositions were determined for individual hydrocarbons in water column and sediment samples from the Cariaco Trench and Black Sea. In order to identify hydrocarbons derived from phytoplankton, the isotopic compositions expected for biomass of autotrophic organisms living in surface waters of both localities were calculated based on the concentrations of CO2(aq) and the isotopic compositions of dissolved inorganic carbon. These calculated values are compared to measured delta values for particulate organic carbon and for individual hydrocarbon compounds. Specifically, we find that lycopane is probably derived from phytoplankton and that diploptene is derived from the lipids of chemoautotrophs living above the oxic/anoxic boundary. Three acyclic isoprenoids that have been considered markers for methanogens, pentamethyleicosane and two hydrogenated squalenes, have different delta values and apparently do not derive from a common source. Based on the concentration profiles and isotopic compositions, the C31 and C33 n-alkanes and n-alkenes have a similar source, and both may have a planktonic origin. If so, previously assigned terrestrial origins of organic matter in some Black Sea sediments may be erroneous.

  4. Clustering algorithms for identifying core atom sets and for assessing the precision of protein structure ensembles.

    PubMed

    Snyder, David A; Montelione, Gaetano T

    2005-06-01

    An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains"--locally well-defined regions that are not aligned in global superimpositions--complicate RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
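A core step in such precision analyses is computing the RMSD after optimal rigid-body superposition of the chosen core atoms. A minimal sketch of that step (the standard Kabsch algorithm via SVD, not the authors' order-parameter or domain-partitioning method) might look like:

```python
import numpy as np

def superposed_rmsd(P, Q):
    """RMSD between (N, 3) coordinate sets P and Q after optimal
    rigid-body superposition (Kabsch algorithm via SVD)."""
    P = P - P.mean(axis=0)                      # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation matrix
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# A structure that is merely rotated and translated superimposes exactly (RMSD ~ 0).
rng = np.random.default_rng(7)
coords = rng.normal(size=(30, 3))
theta = 0.6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = coords @ Rz.T + np.array([1.0, -2.0, 0.5])
print(superposed_rmsd(coords, moved))  # essentially zero
```

Restricting P and Q to a well-chosen core atom set, as the paper describes, is what keeps this RMSD from being inflated by poorly determined coordinates.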

  5. Future contingencies and photovoltaic system worth

    NASA Astrophysics Data System (ADS)

    Jones, G. J.; Thomas, M. G.; Bonk, G. J.

    1982-09-01

    The value of dispersed photovoltaic systems connected to the utility grid was calculated using the optimized generation planning program. The 1986 to 2001 time period was used for this study. Photovoltaic systems were dynamically integrated, up to 5% total capacity, into 9 NERC based regions under a range of future fuel and economic contingencies. Value was determined by the change in revenue requirements due to the photovoltaic additions. Displacement of high cost fuel was paramount to value, while capacity displacement was highly variable and dependent upon regional fuel mix.

  6. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications

    DTIC Science & Technology

    2001-05-15

is based on a calculated test statistic value, which is a function of the data. If the test statistic value is S and the critical value is t, then...

  7. Calculating the return on investment of mobile healthcare.

    PubMed

    Oriol, Nancy E; Cote, Paul J; Vavasis, Anthony P; Bennet, Jennifer; Delorenzo, Darien; Blanc, Philip; Kohane, Isaac

    2009-06-02

Mobile health clinics provide an alternative portal into the healthcare system for the medically disenfranchised, that is, people who are underinsured, uninsured, or otherwise outside of mainstream healthcare due to issues of trust, language, immigration status, or simply location. Mobile health clinics, as providers of last resort, are an essential component of the healthcare safety net, providing prevention, screening, and appropriate triage into mainstream services. Despite the face value of providing services to underserved populations, the relative value of the mobile health clinic model has not been subjected to focused analysis. The question that the return on investment algorithm has been designed to answer is: can the value of the services provided by mobile health programs be quantified in terms of quality-adjusted life years saved and estimated emergency department expenditures avoided? Using a sample mobile health clinic and published research that quantifies health outcomes, we developed and tested an algorithm to calculate the return on investment of a typical broad-service mobile health clinic: the relative value of mobile health clinic services = annual projected emergency department costs avoided + value of potential life years saved from the services provided. Return on investment ratio = the relative value of the mobile health clinic services / annual cost to run the mobile health clinic. Based on service data provided by The Family Van for 2008, we calculated the annual cost savings from preventing emergency room visits, $3,125,668, plus the relative value of providing 7 of the top 25 priority prevention services during the same period, US$17,780,000, for a total annual value of $20,339,968. Given that the annual cost to run the program was $567,700, the calculated return on investment of The Family Van was 36:1.
By using published data that quantify the value of prevention practices and the value of preventing unnecessary use of emergency departments, an empirical method was developed to determine the value of a typical mobile health clinic. The Family Van, a mobile health clinic that has been serving the medically disenfranchised of Boston for 16 years, was evaluated accordingly and found to have a return on investment of $36 for every $1 invested in the program.
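The return-on-investment algorithm reduces to a simple ratio. Using the total annual value and program cost reported in the abstract for The Family Van:

```python
# Figures as reported in the abstract (The Family Van, 2008).
total_annual_value = 20_339_968   # ED costs avoided + value of life years saved, USD
annual_program_cost = 567_700     # annual cost to run the mobile health clinic, USD

roi = total_annual_value / annual_program_cost
print(round(roi))  # 36, i.e. the reported 36:1 return on investment
```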

  8. A program for calculation of intrapulmonary shunts, blood-gas and acid-base values with a programmable calculator.

    PubMed

    Ruiz, B C; Tucker, W K; Kirby, R R

    1975-01-01

With a desk-top programmable calculator, it is now possible to do complex, previously time-consuming computations in the blood-gas laboratory. The authors have developed a program with the necessary algorithms for temperature correction of blood gases and calculation of acid-base variables and intrapulmonary shunt. It was necessary to develop formulas for the Po2 temperature-correction coefficient, the oxyhemoglobin-dissociation curve for adults (with the necessary adjustments for fetal blood), and changes in water vapor pressure due to variation in body temperature. Using this program in conjunction with a Monroe 1860-21 statistical programmable calculator, it is possible to temperature-correct pH, Pco2, and Po2. The machine will compute the alveolar-arterial oxygen tension gradient, oxygen saturation (So2), oxygen content (Co2), actual HCO3-, and a modified base excess. If arterial blood and mixed venous blood are obtained, the calculator will print out intrapulmonary shunt data (Qs/Qt) and arteriovenous oxygen differences (a-vDo2). There is also a formula to compute P50 if pH, Pco2, Po2, and measured So2 from two samples of tonometered blood (one above 50 per cent and one below 50 per cent saturation) are put into the calculator.
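The intrapulmonary shunt computation rests on the classic shunt equation. A minimal modern sketch (textbook constants and invented sample values, not the original 1975 Monroe program) is:

```python
def o2_content(hb_g_dl, sat_frac, po2_mmhg):
    """Oxygen content (mL O2 / dL blood): hemoglobin-bound plus dissolved.
    Uses the common 1.34 mL O2 per gram Hb and 0.003 mL/dL/mmHg constants."""
    return 1.34 * hb_g_dl * sat_frac + 0.003 * po2_mmhg

def shunt_fraction(cc_o2, ca_o2, cv_o2):
    """Classic shunt equation: Qs/Qt = (Cc'O2 - CaO2) / (Cc'O2 - CvO2)."""
    return (cc_o2 - ca_o2) / (cc_o2 - cv_o2)

# Illustrative values: Hb 15 g/dL; end-capillary, arterial, mixed-venous samples.
cc = o2_content(15, 1.00, 100)   # end-capillary content
ca = o2_content(15, 0.97, 90)    # arterial content
cv = o2_content(15, 0.75, 40)    # mixed-venous content
print(round(shunt_fraction(cc, ca, cv), 2))  # 0.12
```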

  9. Handling the procurement of prostheses for total hip replacement: description of an original value based approach and application to a real-life dataset reported in the UK

    PubMed Central

    Messori, Andrea; Trippoli, Sabrina; Marinai, Claudio

    2017-01-01

    Objectives In most European countries, innovative medical devices are not managed according to cost–utility methods, the reason being that national agencies do not generally evaluate these products. The objective of our study was to investigate the cost-utility profile of prostheses for hip replacement and to calculate a value-based score to be used in the process of procurement and tendering for these devices. Methods The first phase of our study was aimed at retrieving the studies reporting the values of QALYs, direct cost, and net monetary benefit (NMB) from patients undergoing total hip arthroplasty (THA) with different brands of hip prosthesis. The second phase was aimed at calculating, on the basis of the results of cost–utility analysis, a tender score for each device (defined according to standard tendering equations and adapted to a 0–100 scale). This allowed us to determine the ranking of each device in the simulated tender. Results We identified a single study as the source of information for our analysis. Nine device brands (cemented, cementless, or hybrid) were evaluated. The cemented prosthesis Exeter V40/Elite Plus Ogee, the cementless device Taperloc/Exceed, and the hybrid device Exeter V40/Trident had the highest NMB (£152 877, £156 356, and £156 210, respectively) and the best value-based tender score. Conclusions The incorporation of value-based criteria in the procurement process can contribute to optimising the value for money for THA devices. According to the approach described herein, the acquisition of these devices does not necessarily converge on the product with the lowest cost; in fact, more costly devices should be preferred when their increased cost is offset by the monetary value of the increased clinical benefit. PMID:29259062
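The value-based scoring step can be sketched as computing each device's net monetary benefit (NMB = willingness-to-pay × QALYs − cost) and rescaling to a 0-100 tender score. The threshold, device names, QALYs, and costs below are invented for illustration, and the paper's actual tendering equation may differ:

```python
# Hypothetical sketch of value-based tender scoring via net monetary benefit.
WTP = 20_000  # assumed willingness-to-pay per QALY, GBP

def nmb(qalys, cost):
    """Net monetary benefit of a device: WTP * QALYs - direct cost."""
    return WTP * qalys - cost

devices = {"A": (8.0, 6_000), "B": (8.2, 7_500), "C": (7.9, 5_800)}
nmbs = {name: nmb(q, c) for name, (q, c) in devices.items()}

# Rescale NMB to a 0-100 tender score (best device scores 100, worst 0).
lo, hi = min(nmbs.values()), max(nmbs.values())
scores = {name: 100 * (v - lo) / (hi - lo) for name, v in nmbs.items()}
print(scores)  # device "B" has the highest NMB and scores 100
```

Note how the most expensive device ("B") wins here: its extra cost is more than offset by the monetary value of its extra QALYs, which is exactly the point the authors make.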

  10. Flavonoid Values for USDA Survey Foods and Beverages 2007-2008: Provisional Flavonoid Addendum, FNDDS 4.1 and Flavonoid Intake Data, WWEIA, NHANES 2007-2008

    USDA-ARS?s Scientific Manuscript database

    This release of the Flavonoid Values for Survey Foods and Beverages 2007-2008 makes possible, for the first time, calculation of flavonoid intakes based on all foods and beverages reported in national surveys. This release has two components. The first component is an addendum to USDA’s Food and N...

  11. Using TSP Data to Evaluate Your Project Performance

    DTIC Science & Technology

    2010-09-01

EVA) [Pressman 2005]. However, unlike earned value, the value is calculated based on the planned size of software components instead of the planned...

  12. 25 CFR 39.207 - How does OIEP determine a school's funding for the school year?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

... 5. Add together the total WSUs for all Bureau-funded schools. (f) Step 6. Calculate the value of a... for the previous 3 years. (g) Step 7. Multiply each school's WSU total by the base value of one WSU to...

  13. 25 CFR 39.207 - How does OIEP determine a school's funding for the school year?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

... 5. Add together the total WSUs for all Bureau-funded schools. (f) Step 6. Calculate the value of a... for the previous 3 years. (g) Step 7. Multiply each school's WSU total by the base value of one WSU to...

  14. Heart rate variability analysis based on time-frequency representation and entropies in hypertrophic cardiomyopathy patients.

    PubMed

    Clariá, F; Vallverdú, M; Baranowski, R; Chojnowska, L; Caminal, P

    2008-03-01

    In hypertrophic cardiomyopathy (HCM) patients there is an increased risk of premature death, which can occur with little or no warning. Furthermore, classification for sudden cardiac death on patients with HCM is very difficult. The aim of our study was to improve the prognostic value of heart rate variability (HRV) in HCM patients, giving insight into changes of the autonomic nervous system. In this way, the suitability of linear and nonlinear measures was studied to assess the HRV. These measures were based on time-frequency representation (TFR) and on Shannon and Rényi entropies, and compared with traditional HRV measures. Holter recordings of 64 patients with HCM and 55 healthy subjects were analyzed. The HCM patients consisted of two groups: 13 high risk patients, after aborted sudden cardiac death (SCD); 51 low risk patients, without SCD. Five-hour RR signals, corresponding to the sleep period of the subjects, were considered for the analysis as a comparable standard situation. These RR signals were filtered in the three frequency bands: very low frequency band (VLF, 0-0.04 Hz), low frequency band (LF, 0.04-0.15 Hz) and high frequency band (HF, 0.15-0.45 Hz). TFR variables based on instantaneous frequency and energy functions were able to classify HCM patients and healthy subjects (control group). Results revealed that measures obtained from TFR analysis of the HRV better classified the groups of subjects than traditional HRV parameters. However, results showed that nonlinear measures improved group classification. It was observed that entropies calculated in the HF band showed the highest statistically significant levels comparing the HCM group and the control group, p-value < 0.0005. The values of entropy measures calculated in the HCM group presented lower values, indicating a decreasing of complexity, than those calculated from the control group. 
Moreover, similar behavior was observed comparing high and low risk of premature death, the values of the entropy being lower in high risk patients, p-value < 0.05, indicating an increase of predictability. Furthermore, measures from information entropy, but not from TFR, seem to be useful for enhanced risk stratification in HCM patients with an increased risk of sudden cardiac death.
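The entropy measures used above can be illustrated on an RR-interval series. This is a simple amplitude-histogram Shannon entropy with an arbitrary bin count and synthetic data, not the paper's exact estimator:

```python
import numpy as np

def shannon_entropy(signal, bins=16):
    """Shannon entropy (bits) of a signal's amplitude histogram.
    More concentrated (predictable) signals give lower entropy."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
variable_rr = rng.uniform(0.6, 1.0, 2000)   # widely varying RR intervals (s)
constant_rr = np.full(2000, 0.8)            # perfectly regular RR intervals
print(shannon_entropy(variable_rr) > shannon_entropy(constant_rr))  # True
```

A lower entropy, as reported for the high-risk HCM patients, corresponds to a more predictable, less complex RR series.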

  15. Comparison of simple additive weighting (SAW) and composite performance index (CPI) methods in employee remuneration determination

    NASA Astrophysics Data System (ADS)

    Karlitasari, L.; Suhartini, D.; Benny

    2017-01-01

The process of determining employee remuneration at PT Sepatu Mas Idaman currently still uses a Microsoft Excel-based spreadsheet in which the values of the criteria must be calculated for every employee. This can introduce doubt during the assessment process and causes the process to take much longer. Employee remuneration is determined by an assessment team based on a number of predetermined criteria: ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method can support decision making for a given case; the alternative with the greatest calculated value is chosen as the best. Besides SAW, the Composite Performance Index (CPI) method, a decision-making calculation based on a performance index, was also used. The SAW method was faster by 89-93% compared to the CPI method. It is therefore expected that this application can serve as evaluation material for employee training and development, so that employee performance becomes more optimal.
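The SAW calculation itself is straightforward: normalize each (benefit-type) criterion by its column maximum, then rank alternatives by their weighted sum. The employee scores and criterion weights below are illustrative, not the paper's data:

```python
# Minimal sketch of Simple Additive Weighting (SAW) for benefit-type criteria.
def saw_totals(matrix, weights):
    """matrix[i][j]: score of alternative i on (benefit) criterion j.
    Normalizes each column by its maximum, then returns weighted sums."""
    col_max = [max(row[j] for row in matrix) for j in range(len(weights))]
    return [sum(w * row[j] / col_max[j] for j, w in enumerate(weights))
            for row in matrix]

scores = [[80, 90, 70, 95],   # employee 0
          [85, 80, 90, 85],   # employee 1
          [70, 85, 80, 90]]   # employee 2
weights = [0.35, 0.25, 0.25, 0.15]
totals = saw_totals(scores, weights)
best = max(range(len(totals)), key=totals.__getitem__)
print(best)  # 1: employee 1 has the greatest weighted value
```

Cost-type criteria (such as absence, where lower is better) would instead be normalized as column minimum divided by score; the benefit-only form above is a simplifying assumption.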

  16. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    NASA Astrophysics Data System (ADS)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  17. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been base line corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
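The ratio-chromatogram idea can be sketched numerically: two baseline-corrected injections of the same analyte at concentrations c1 and c2 give a point-by-point ratio that is flat (equal to c2/c1) across the analyte's pure elution region. The Gaussian peak shape, noise level, and the particular variance weighting below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

# Two synthetic, baseline-corrected chromatograms of one analyte at a 1.5x
# concentration step, with additive detector noise.
t = np.linspace(0, 10, 500)
peak = np.exp(-0.5 * ((t - 5.0) / 0.4) ** 2)    # Gaussian peak shape
rng = np.random.default_rng(1)
chrom1 = 1.0 * peak + rng.normal(0, 0.002, t.size)
chrom2 = 1.5 * peak + rng.normal(0, 0.002, t.size)

# Point-by-point ratio over the pure-elution window, then a weighted average
# that emphasizes high signal-to-noise points (weights ~ signal squared).
region = (t > 4.2) & (t < 5.8)
ratio = chrom2[region] / chrom1[region]
weights = chrom1[region] ** 2
ratio_value = np.average(ratio, weights=weights)
print(round(ratio_value, 2))  # close to the true concentration ratio, 1.5
```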

  18. Modeling Future Fire danger over North America in a Changing Climate

    NASA Astrophysics Data System (ADS)

    Jain, P.; Paimazumder, D.; Done, J.; Flannigan, M.

    2016-12-01

    Fire danger ratings are used to determine wildfire potential due to weather and climate factors. The Fire Weather Index (FWI), part of the Canadian Forest Fire Danger Rating System (CFFDRS), incorporates temperature, relative humidity, wind speed and precipitation to give a daily fire danger rating that is used by wildfire management agencies in an operational context. Studies using GCM output have shown that future wildfire danger will increase in a warming climate. However, these studies are somewhat limited by the coarse spatial resolution (typically 100-400 km) and temporal resolution (typically 6-hourly to monthly) of the model output. Future wildfire potential over North America based on FWI is calculated using output from the Weather Research and Forecasting (WRF) model, which is used to downscale future climate scenarios from the bias-corrected Community Climate System Model (CCSM) under RCP8.5 scenarios at a spatial resolution of 36 km. We consider five eleven-year time slices: 1990-2000, 2020-2030, 2030-2040, 2050-2060 and 2080-2090. The dynamically downscaled simulation improves the determination of future extreme weather by improving both spatial and temporal resolution relative to most GCMs. To characterize extreme fire weather we calculate the annual number of spread days (days for which FWI > 19) and the annual 99th percentile of FWI. Additionally, an extreme value analysis based on the peaks-over-threshold method allows us to calculate return values for extreme FWI values.
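    The extreme-fire-weather statistics named above are straightforward to compute from a daily FWI series. The sketch below uses a synthetic gamma-distributed series as a stand-in for model output; the generalized Pareto fit of the exceedances is left out.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical daily FWI series for one year (illustrative, not WRF output)
fwi = rng.gamma(shape=2.0, scale=6.0, size=365)

spread_days = int(np.sum(fwi > 19))        # annual number of spread days
p99 = float(np.percentile(fwi, 99))        # annual 99th percentile of FWI

# peaks-over-threshold: exceedances above a high threshold, the raw
# material for fitting a generalized Pareto distribution in an EVA
threshold = float(np.percentile(fwi, 95))
exceedances = fwi[fwi > threshold] - threshold
print(spread_days, round(p99, 1), len(exceedances))
```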

  19. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when small values are chosen. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains, the performance of the proposed approach is analyzed.
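    The core idea, the complex-step derivative approximation, is easy to demonstrate on a scalar function: because the formula contains no subtraction of nearly equal numbers, the step size can be made arbitrarily small without round-off, unlike a forward difference. This toy example is not the paper's finite-element implementation.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """First derivative via the complex-step approximation.

    f'(x) ~= Im(f(x + i*h)) / h -- no subtractive cancellation, so h can
    be taken extremely small (here 1e-30) without round-off error.
    """
    return np.imag(f(x + 1j * h)) / h

def forward_difference(f, x, h):
    # classical scheme: suffers round-off when h is made very small
    return (f(x + h) - f(x)) / h

f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))

cs = complex_step_derivative(f, 1.0)
fd = forward_difference(f, 1.0, 1e-12)
print(abs(cs - exact), abs(fd - exact))  # complex step hits machine precision
```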

  20. Biologic variability of N-terminal pro-brain natriuretic peptide in healthy dogs and dogs with myxomatous mitral valve disease.

    PubMed

    Winter, Randolph L; Saunders, Ashley B; Gordon, Sonya G; Buch, Jesse S; Miller, Matthew W

    2017-04-01

    To determine the biologic variability of N-terminal pro-brain natriuretic peptide (NTproBNP) in healthy dogs and dogs with various stages of myxomatous mitral valve disease (MMVD). Thirty-eight privately owned dogs: 28 with MMVD and 10 healthy controls. Prospective clinical study with comprehensive evaluation used to group dogs as healthy or into three stages of MMVD based on current guidelines. NTproBNP was measured hourly, daily, and weekly. For each group, analytical (CVA), within-subject (CVI), and between-subject (CVG) coefficients of variability were calculated, in addition to the percent critical change value (CCV) and index of individuality (IoI). For healthy dogs, calculated NTproBNP values were: CVA = 4.2%; CVI = 25.2%; CVG = 49.3%; IoI = 0.52; and CCV = 70.8%. For dogs with MMVD, calculated NTproBNP values were: CVA = 6.2%; CVI = 20.0%; CVG = 61.3%; IoI = 0.34; and CCV = 58.2%. Biologic variability affects NTproBNP concentrations in healthy dogs and dogs with MMVD. Monitoring serial individual changes in NTproBNP may be clinically relevant in addition to using population-based reference ranges to determine changes in disease status. Copyright © 2016 Elsevier B.V. All rights reserved.
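    The IoI and CCV figures quoted above are reproducible from the reported coefficients of variability with the standard biologic-variation formulas (IoI = sqrt(CVA² + CVI²)/CVG; CCV = 2^½ · z · sqrt(CVA² + CVI²) with z = 1.96 for 95% confidence) — a sketch assuming those are the formulas the authors used:

```python
import math

def index_of_individuality(cv_a, cv_i, cv_g):
    # IoI = sqrt(CVA^2 + CVI^2) / CVG
    return math.sqrt(cv_a**2 + cv_i**2) / cv_g

def critical_change_value(cv_a, cv_i, z=1.96):
    # CCV (reference change value) = 2^0.5 * z * sqrt(CVA^2 + CVI^2)
    return math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

# healthy-dog CVs reported in the abstract
ioi = index_of_individuality(4.2, 25.2, 49.3)
ccv = critical_change_value(4.2, 25.2)
print(round(ioi, 2), round(ccv, 1))   # -> 0.52 70.8
```

    The results match the abstract's healthy-dog values, which supports (but does not prove) that these standard formulas were used.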

  1. Daily intake and hazard index of parabens based upon 24 h urine samples of the German Environmental Specimen Bank from 1995 to 2012.

    PubMed

    Moos, Rebecca K; Apel, Petra; Schröter-Kermani, Christa; Kolossa-Gehring, Marike; Brüning, Thomas; Koch, Holger M

    2017-11-01

    In recent years, exposure to parabens has become more of a concern because of evidence of ubiquitous exposure in the general population, combined with evidence of their potency as endocrine disruptors. New human metabolism data from oral exposure experiments enable us to back-calculate daily paraben intakes from urinary paraben levels. We report daily intakes (DIs) for six parabens based on 660 24 h urine samples from the German Environmental Specimen Bank collected between 1995 and 2012. Median DI values ranged between 1.1 μg/kg bw/day for iso-butyl paraben and 47.5 μg/kg bw/day for methyl paraben. The calculated DIs were compared with acceptable levels of exposure to derive hazard quotients (HQs); an HQ >1 indicates that acceptable exposure is exceeded. Approximately 5% of our study population exceeded this threshold for individual paraben exposure. The hazard index (HI), which accounts for the cumulative risk of adverse estrogenic effects, was 1.3 at the 95th percentile and 4.4 at maximum intakes, driven mainly by n-propyl paraben exposure. HI values >1 indicate some level of concern. However, we point out that we applied the most conservative assumptions in the HQ/HI calculations, and major exposure reduction measures were enacted in the European Union after 2012.
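    The HQ/HI arithmetic is simple: each hazard quotient is the daily intake divided by the acceptable level, and the hazard index is their sum. The sketch below uses illustrative acceptable levels and intakes (only the median methyl- and iso-butyl-paraben DIs come from the abstract); the real assessment uses substance-specific acceptable levels not given here.

```python
# hypothetical daily intakes (ug/kg bw/day) and acceptable levels; the
# paraben names follow the abstract, but the acceptable levels and the
# n-propyl intake are illustrative assumptions
daily_intake = {"methyl": 47.5, "n-propyl": 9.0, "iso-butyl": 1.1}
acceptable = {"methyl": 100.0, "n-propyl": 10.0, "iso-butyl": 10.0}

hq = {p: daily_intake[p] / acceptable[p] for p in daily_intake}
hi = sum(hq.values())   # hazard index: cumulative (additive) risk

for p, q in hq.items():
    print(f"HQ({p}) = {q:.2f}")   # >1 means acceptable exposure exceeded
print(f"HI = {hi:.2f}")
```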

  2. The Central Symmetry Analysis of Wrinkle Ridges in Lunar Mare Serenitatis

    NASA Astrophysics Data System (ADS)

    Yao, Meijuan; Chen, Jianping

    2018-03-01

    Wrinkle ridges are among the most common structures found in lunar mare basalts, and their formation is closely related to the lunar maria. In this paper, wrinkle ridges in Mare Serenitatis were identified and mapped via high-resolution data acquired from SELENE, and a quantitative method was introduced to analyze the degree of central symmetry of wrinkle ridges distributed in a concentric or radial pattern. Two methods were used to measure the lengths and orientations of the wrinkle ridges before calculating their central symmetry value. Based on the mapped wrinkle ridges, we used this method to calculate the central symmetry value of the wrinkle ridges for the whole of Mare Serenitatis as well as for the four circular ridge systems proposed by a previous study. We also analyzed the factors that would cause discrepancies when calculating the central symmetry value. The results indicate that the method can be used to quantitatively analyze the degree of central symmetry of concentrically or radially oriented linear features and can reflect the characteristics of the stress field.

  3. Applicability of ASHRAE clear-sky model based on solar-radiation measurements in Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Abouhashish, Mohamed

    2017-06-01

    The constants of the ASHRAE clear-sky model predict high values of the hourly beam radiation and very low values of the hourly diffuse radiation when used for locations in Saudi Arabia. Eight measurement stations in different locations are used to obtain new clearness factors for the model. The procedure depends on the comparison of monthly direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) between the measured and the calculated values. Two factors, CNb and CNd, are obtained for every month to adjust the calculated clear-sky radiation in order to account for the effects of local weather conditions. A simple and practical simulation model for solar geometry is designed using the Microsoft Visual Basic platform; the model simulates the solar angles and radiation components according to the ASHRAE model. The comparison of the calculated data with the first year of measurements indicates that the attenuation of site clearness varies across the locations and from month to month, showing the clearest skies in the north and northwestern parts of the Kingdom, especially during the summer months.

  4. Theoretical study on the spectroscopic properties of CO3(*-).nH2O clusters: extrapolation to bulk.

    PubMed

    Pathak, Arup K; Mukherjee, Tulsi; Maity, Dilip K

    2008-10-24

    Vertical detachment energies (VDE) and UV/Vis absorption spectra of hydrated carbonate radical anion clusters, CO(3)(*-).nH(2)O (n=1-8), are determined by means of ab initio electronic structure theory. The VDE values of the hydrated clusters are calculated with second-order Moller-Plesset perturbation (MP2) and coupled cluster theory using the 6-311++G(d,p) set of basis functions. The bulk VDE value of an aqueous carbonate radical anion solution is predicted to be 10.6 eV from the calculated weighted average VDE values of the CO(3)(*-).nH(2)O clusters. UV/Vis absorption spectra of the hydrated clusters are calculated by means of time-dependent density functional theory using the Becke three-parameter nonlocal exchange and the Lee-Yang-Parr nonlocal correlation functional (B3LYP). The simulated UV/Vis spectrum of the CO(3)(*-).8H(2)O cluster is in excellent agreement with the reported experimental spectrum for CO(3)(*-) (aq), obtained based on pulse radiolysis experiments.

  5. National Bureau Of Standards Data Base Of Photon Absorption Cross Sections From 10 eV To 100 GeV

    NASA Astrophysics Data System (ADS)

    Saloman, E. B.; Hubbell, J. H.; Berger, M. J.

    1988-07-01

    The National Bureau of Standards (NBS) has maintained a data base of experimental and theoretical photon absorption cross sections (attenuation coefficients) since 1950. Currently the measured data include more than 20,000 data points abstracted from more than 500 independent literature sources, including both published and unpublished reports and private communications. We have recently completed a systematic comparison, over the energy range 0.1-100 keV, of the measured cross sections in the NBS data base with cross sections obtained using the photoionization cross sections calculated by Scofield and the semi-empirical set of recommended photoionization cross section values of Henke et al. Cross sections for coherent and incoherent scattering were added to that of photoionization to obtain a value which could be compared to the experimental results. At energies above 1 keV, agreement between theory and experiment is rather good except for some special situations which prevent the accurate description of the measured samples as free atoms. These include molecular effects near absorption edges and solid state and crystal effects (such as for silicon). Below 1 keV the comparison indicates the range of atomic numbers and energies where the theory becomes inapplicable. The results obtained using Henke et al. agree well with the measured data when such data exist, but there are many elements for which data are not available over a wide range of energies. Comparisons with other theoretical data are in progress. This study also enabled us to show that a suggested renormalization procedure for the Scofield calculation (from Hartree-Slater to Hartree-Fock) worsened the agreement between theory and experiment. We have recently developed a PC-based computer program to generate theoretical cross section values based on Scofield's calculation. We have also completed a related program to enable a user to extract selected data from the measured data base.

  6. Study of materials used for the thermal protection of the intake system for internal combustion engines

    NASA Astrophysics Data System (ADS)

    Birtok-Băneasă, C.; Raţiu, S.; Puţan, V.; Josan, A.

    2018-01-01

    The present paper focuses on the calculation of thermal conductivity for new materials developed by the authors, using the heat flux plate method. This experimental method consists of placing a sample of the new material in a calorimetric chamber and heating it from the underside. Since the heat flux passing through the sample material is constant, and the temperatures on both sides of the sample are known, the thermal conductivity of the sample material can be determined. Six different materials were tested. Based on the experimental data, the values of the thermal conductivity as a function of material and average temperature were calculated and plotted.
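    Under steady-state, one-dimensional conditions the heat-flux-plate calculation reduces to Fourier's law, k = q·d/ΔT. A minimal sketch with illustrative numbers (not the authors' measurements):

```python
def thermal_conductivity(heat_flux, thickness, t_hot, t_cold):
    """Steady-state heat-flux-plate estimate: k = q * d / (T_hot - T_cold).

    heat_flux in W/m^2, thickness in m, temperatures in degrees C (or K);
    result in W/(m K).
    """
    return heat_flux * thickness / (t_hot - t_cold)

# illustrative sample: 10 mm slab, 500 W/m^2 through it, 20 degree drop
k = thermal_conductivity(heat_flux=500.0, thickness=0.010,
                         t_hot=60.0, t_cold=40.0)
print(k)   # -> 0.25
```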

  7. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
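    If the mission-maximum load follows a type I extreme value (Gumbel) distribution, as extreme-value theory often suggests for maxima, the design limit load is a chosen high quantile of that distribution. A sketch under that assumption, with illustrative location and scale parameters:

```python
import math

def gumbel_quantile(mu, beta, p):
    """p-quantile of a Gumbel (type I extreme value) distribution.

    x_p = mu - beta * ln(-ln(p)); the design limit load can be taken as
    a high quantile (e.g. p = 0.99) of the mission-maximum load.
    """
    return mu - beta * math.log(-math.log(p))

# illustrative location/scale, e.g. fitted from simulated mission maxima
mu, beta = 100.0, 8.0
design_limit_load = gumbel_quantile(mu, beta, 0.99)
print(round(design_limit_load, 1))
```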

  8. The argon nuclear quadrupole moments

    NASA Astrophysics Data System (ADS)

    Sundholm, Dage; Pyykkö, Pekka

    2018-07-01

    New standard values -116(2) mb and 76(3) mb are suggested for the nuclear quadrupole moments (Q) of the 39Ar and 37Ar nuclei, respectively. The Q values were obtained by combining optical measurements of the quadrupole coupling constant (B or eqQ/h) of the 3s23p54s[3/2]2 (3Po) and 3s23p54p[5/2]3 (3De) states of argon with large scale numerical complete active space self-consistent field and restricted active space self-consistent field calculations of the electric field gradient at the nucleus (q) using the LUCAS code, which is a finite-element based multiconfiguration Hartree-Fock program for atomic structure calculations.

  9. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.

  10. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy, and a prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Owing to the statistical nature of MC, our approach is based on a one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles; according to the central limit theorem, the sample, as the mean of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users are free to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
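    The batch-means stopping idea can be sketched generically: treat each batch mean as one approximately normal sample (by the CLT) and stop when the confidence half-width of the running mean falls below the error tolerance. This toy version uses a fixed normal critical value instead of the exact t-quantile and a Gaussian stand-in for per-particle dose, so it is an illustration of the statistical mechanism only, not the authors' proton MC.

```python
import math
import random
import statistics

def mc_dose_with_stopping(sample_dose, batch, err_tol, z=1.96,
                          max_batches=10000):
    """Run MC batches until the CI half-width of the mean dose < err_tol.

    Each 'sample' is the mean dose of one batch of source particles (so
    it is approximately normal by the CLT); stop when z * SEM < err_tol.
    """
    samples = []
    while len(samples) < max_batches:
        samples.append(statistics.fmean(sample_dose() for _ in range(batch)))
        if len(samples) >= 2:
            sem = statistics.stdev(samples) / math.sqrt(len(samples))
            if z * sem < err_tol:
                break
    return statistics.fmean(samples), len(samples)

random.seed(1)
# toy 'dose per particle' with true mean 1.0 (illustrative distribution)
dose, n_batches = mc_dose_with_stopping(lambda: random.gauss(1.0, 0.3),
                                        batch=100, err_tol=0.01)
print(round(dose, 2), n_batches)
```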

  11. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with {sup 90}Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. The data obtained from the prior simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: The liver mean dose is consistent with partition method calculations (accepted as a good standard). Tumor dose has not been evaluated because of its high dependence on contouring: small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes: export and import of images and other DICOM files; creating and calculating a dummy plan of external radiotherapy; the convolution calculation; and evaluation of the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to be carried out the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
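    The VSV method's core step, convolving a cumulated-activity map with a voxel-S-value kernel, can be sketched as follows. The naive loop is only practical for the tiny illustrative arrays used here (a real implementation would use an FFT-based convolution), and the kernel values are made up, not downloaded S values.

```python
import numpy as np

def vsv_dose(activity, kernel):
    """3-D convolution of a cumulated-activity map with a voxel-S-value
    kernel (naive loops; fine for the tiny illustrative arrays below)."""
    az, ay, ax = activity.shape
    kz, ky, kx = kernel.shape
    oz, oy, ox = kz // 2, ky // 2, kx // 2
    dose = np.zeros_like(activity, dtype=float)
    for (z, y, x), a in np.ndenumerate(activity):
        if a == 0:
            continue
        for (dz, dy, dx), s in np.ndenumerate(kernel):
            tz, ty, tx = z + dz - oz, y + dy - oy, x + dx - ox
            if 0 <= tz < az and 0 <= ty < ay and 0 <= tx < ax:
                dose[tz, ty, tx] += a * s
    return dose

# toy example: a point source and a 3x3x3 'S-value' kernel (illustrative)
activity = np.zeros((5, 5, 5)); activity[2, 2, 2] = 10.0
kernel = np.full((3, 3, 3), 0.01); kernel[1, 1, 1] = 1.0
dose = vsv_dose(activity, kernel)
print(dose[2, 2, 2], round(dose.sum(), 2))
```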

  12. Age and sex based reference values for incidental coronary artery and thoracic aorta calcifications on routine clinical chest CT: a powerful tool to appreciate available imaging findings.

    PubMed

    Jairam, Pushpa M; de Jong, Pim A; Mali, Willem P Th M; Gondrie, Martijn J A; Jacobs, Peter C A; van der Graaf, Yolanda

    2014-08-01

    To establish age- and gender-specific reference values for incidental coronary artery and thoracic aorta calcification scores on routine diagnostic CT scans. These reference values can aid in the structured reporting and interpretation of readily available imaging data by chest CT readers in routine practice. A random sample of 1572 subjects (57% male; median age 61 years) was taken from a study population of 12,063 subjects who underwent diagnostic chest CT for non-cardiovascular indications between January 2002 and December 2005. Coronary artery and thoracic aorta calcifications were graded using a validated ordinal score. The 25th, 50th and 75th percentile cut points were calculated for the coronary artery and thoracic aorta calcification scores within each age/gender stratum. The 75th percentile cut points for coronary artery calcification scores were higher for men than for women across all age groups, with the exception of the lowest age group. The 75th percentile cut points for thoracic aorta calcification scores were comparable for both genders across all age groups. Based on the obtained age and gender reference values, a calculation tool is provided that allows one to enter an individual's age, gender and calcification scores to obtain the corresponding estimated percentiles. The calculation tool provided in this study can be used in daily practice by CT readers to examine whether a subject has high calcification scores relative to others of the same age and gender. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
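    A stratified percentile lookup of this kind can be sketched in a few lines. The reference scores below are invented placeholders; the actual tool is built from the 1572-subject sample described in the abstract.

```python
import numpy as np

# hypothetical per-stratum calcification-score samples (illustrative only;
# real reference values come from the study's stratified sample)
reference = {("male", "50-59"): [0, 1, 1, 2, 3, 3, 4, 5, 6, 8]}

def estimated_percentile(sex, age_band, score):
    """Percent of same-age, same-sex subjects with a lower score."""
    scores = np.asarray(reference[(sex, age_band)])
    return 100.0 * float(np.mean(scores < score))

pct = estimated_percentile("male", "50-59", 4)
print(pct)   # -> 60.0
```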

  13. Expressing analytical performance from multi-sample evaluation in laboratory EQA.

    PubMed

    Thelen, Marc H M; Jansen, Rob T P; Weykamp, Cas W; Steigstra, Herman; Meijer, Ron; Cobbaert, Christa M

    2017-08-28

    To provide its participants with an external quality assessment system (EQAS) that can be used to check trueness, the Dutch EQAS organizer, the Organization for Quality Assessment of Laboratory Diagnostics (SKML), has innovated its general chemistry scheme over the last decade by introducing fresh frozen commutable samples whose values were assigned by Joint Committee for Traceability in Laboratory Medicine (JCTLM)-listed reference laboratories using reference methods where possible. Here we present some important innovations in our feedback reports that allow participants to judge whether their trueness and imprecision meet predefined analytical performance specifications. Sigma metrics are used to calculate performance indicators named 'sigma values'. Tolerance intervals are based on both Total Error allowable (TEa) according to biological variation data and state of the art (SA), in line with the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Milan consensus. The existing SKML feedback reports, which express trueness as the agreement between the regression line through the results of the last 12 months and the values obtained from reference laboratories, and calculate imprecision from the residuals of the regression line, are now enriched with sigma values calculated from the degree to which the combination of trueness and imprecision is within tolerance limits. The information, and its condensation into a simple two-point scoring system, is also represented graphically in addition to the existing difference plot. By adding sigma metrics-based performance evaluation relative to both TEa and SA tolerance intervals to its EQAS schemes, SKML provides its participants with a powerful and actionable check on accuracy.
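    A common way to combine trueness (bias) and imprecision (CV) against a total-error tolerance is the sigma metric, sigma = (TEa − |bias|)/CV. The sketch below assumes that standard formula; the abstract does not spell out SKML's exact calculation.

```python
def sigma_value(tea_pct, bias_pct, cv_pct):
    """Sigma metric: how many analytical SDs fit inside the tolerance.

    sigma = (TEa - |bias|) / CV, with all terms in percent of the target.
    """
    return (tea_pct - abs(bias_pct)) / cv_pct

# illustrative numbers: 10% allowable total error, 2% bias, 2% imprecision
s = sigma_value(tea_pct=10.0, bias_pct=2.0, cv_pct=2.0)
print(s)   # -> 4.0
```

    A sigma of 3 or more is often taken as acceptable performance, but the cut-off chosen for any scoring system is a policy decision.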

  14. Matrix operator theory of radiative transfer. I - Rayleigh scattering.

    NASA Technical Reports Server (NTRS)

    Plass, G. N.; Kattawar, G. W.; Catchings, F. E.

    1973-01-01

    An entirely rigorous method for the solution of the equations for radiative transfer based on the matrix operator theory is reviewed. The advantages of the present method are: (1) all orders of the reflection and transmission matrices are calculated at once; (2) layers of any thickness may be combined, so that a realistic model of the atmosphere can be developed from any arbitrary number of layers, each with different properties and thicknesses; (3) calculations can readily be made for large optical depths and with highly anisotropic phase functions; (4) results are obtained for any desired value of the surface albedo including the value unity and for a large number of polar and azimuthal angles; (5) all fundamental equations can be interpreted immediately in terms of the physical interactions appropriate to the problem; and (6) both upward and downward radiance can be calculated at interior points from relatively simple expressions.

  15. Probabilistic model of bridge vehicle loads in port area based on in-situ load testing

    NASA Astrophysics Data System (ADS)

    Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong

    2017-11-01

    Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper on the vehicle load data of the Tianjin Haibin highway in the port of Tianjin, China, collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and compared with the result calculated according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a distribution that is a weighted sum of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended to be 1.8 times the value calculated according to HL-93.

  16. Simplified approach to the mixed time-averaging semiclassical initial value representation for the calculation of dense vibrational spectra

    NASA Astrophysics Data System (ADS)

    Buchholz, Max; Grossmann, Frank; Ceotto, Michele

    2018-03-01

    We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.

  17. Ecological risk assessment: influence of texture on background concentration of microelements in soils of Russia.

    NASA Astrophysics Data System (ADS)

    Beketskaya, Olga

    2010-05-01

    In Russia, quality standards for contaminant levels in the environment comprise both ecological and sanitary regulation. Sanitary risk assessment is based on the potential risk that contaminants pose to human beings; the main purpose of ecological risk assessment is to protect ecosystems. In Russian sanitary risk assessment, maximum permissible concentrations (MPCs) are used to determine the negative influence on living organisms; these values reflect how substances affect different parts of the environment, biological activity and soil processes. Ecological risk assessment is based on comparing compound concentrations with background concentrations for particular territories. Given the wide range of microelement values in soils, we suggest using a statistical method to determine concentration levels of chemical elements in soils of Russia. The method is based on determining mean levels of element content under natural conditions. The upper limit of the mean chemical element concentration in soils is the value that exceeds the mean regional background level by three standard deviations; exceedance of this upper limit of natural concentration can be interpreted as anthropogenic impact. First, we studied changes in the mean content of microelements in soils of geographic regions in the European part of Russia on the basis of cartographic analysis. The cartographic analysis showed that soils of mountainous and mountain-surrounding regions are enriched in microelements. On the plains of the European part of Russia, the concentrations of most microelements in soils generally increase from north to south, and soil clay content rises in the same direction for the majority of soils. For all other territories, a clear connection was observed with the distribution of sandy sediments. A database was created from our own investigations and from data in the scientific literature. 
This database contains the following soil properties: texture, organic matter content, microelement concentrations and pH. From it, a data set for the Forest-steppe and Steppe regions was created and divided by texture. Statistics were computed for all data, and the maximum natural microelement content was calculated for soils of different texture (mean + 3σ). As a result of these statistical calculations we obtained the mean and the upper limit of background microelement concentrations in sandy and clay soils (conditional boundary: sandy loam) of the two regions. We showed that, both for the whole European part of Russia and for the Forest-steppe and Steppe regions separately, the mean content and the maximum natural microelement concentration (mean + 3σ) are higher in clay soils than in sandy soils. Data characterizing soils of similar texture in different regions differ less than data collected for sandy and clay soils of the same region. We conclude that the mean and upper limit of background microelement concentrations in soils, derived by this statistical method, can be used for ecological risk assessment. The proposed method allows the calculation of the upper limit of background concentration for sandy and clay soils of large geographic regions; exceeding this limit is evidence of anthropogenic contamination of soil.
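    The mean + 3σ background threshold is a one-line calculation. A sketch with invented concentration values (the study's actual data come from its soil database):

```python
import statistics

def background_upper_limit(concentrations):
    """Upper limit of natural background: mean + 3 * standard deviation.

    Values above this limit are interpreted as anthropogenic input.
    """
    m = statistics.fmean(concentrations)
    s = statistics.stdev(concentrations)
    return m + 3 * s

# illustrative Cu concentrations (mg/kg) in sandy soils of one region
cu = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
limit = background_upper_limit(cu)
print(round(limit, 1))
```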

  18. Development of an inpatient operational pharmacy productivity model.

    PubMed

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  19. An Algorithm for the Calculation of Exact Term Discrimination Values.

    ERIC Educational Resources Information Center

    Willett, Peter

    1985-01-01

    Reports algorithm for calculation of term discrimination values that is sufficiently fast in operation to permit use of exact values. Evidence is presented to show that relationship between term discrimination and term frequency is crucially dependent upon type of inter-document similarity measure used for calculation of discrimination values. (13…

  20. Effect of blood sampling schedule and method of calculating the area under the curve on validity and precision of glycaemic index values.

    PubMed

    Wolever, Thomas M S

    2004-02-01

    To evaluate the suitability for glycaemic index (GI) calculations of using blood sampling schedules and methods of calculating area under the curve (AUC) different from those recommended, the GI values of five foods were determined by recommended methods (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects, and different calculations were performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC) and iAUC including area only before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of less than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculating GI. The standard deviation of GI values calculated using iAUC (20.4) was lower than that of six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or less than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yield more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (i.e., within +/-5%) was AUCcut.
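The recommended iAUC method counts only area above the fasting baseline, truncating any segment where the glucose curve dips below it. A sketch of that calculation and of the GI ratio itself (trapezoidal rule; variable names are illustrative, not from the paper):

```python
def incremental_auc(times, glucose):
    """Incremental area under the curve (iAUC): trapezoidal area above the
    fasting (t=0) baseline, ignoring any area below it."""
    baseline = glucose[0]
    area = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times, glucose),
                                  zip(times[1:], glucose[1:])):
        h0, h1 = g0 - baseline, g1 - baseline
        dt = t1 - t0
        if h0 >= 0 and h1 >= 0:            # whole segment above baseline
            area += (h0 + h1) / 2 * dt
        elif h0 > 0 and h1 < 0:            # curve crosses baseline going down
            area += h0 ** 2 / (h0 - h1) / 2 * dt
        elif h0 < 0 and h1 > 0:            # curve crosses baseline going up
            area += h1 ** 2 / (h1 - h0) / 2 * dt
        # both endpoints below baseline: contributes nothing
    return area

def glycaemic_index(auc_food, auc_glucose):
    """GI: a food's iAUC as a percentage of the reference glucose iAUC."""
    return 100.0 * auc_food / auc_glucose
```

The crossing-segment terms are the triangle areas left after the trapezoid is cut at the baseline, which is what distinguishes iAUC from netAUC.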

  1. Adaptive imaging through far-field turbulence

    NASA Astrophysics Data System (ADS)

    Troxel, Steven E.; Welsh, Byron M.; Roggemann, Michael C.

    1993-11-01

    This paper presents a new method for calculating the field-angle-dependent average OTF of an adaptive optics system and compares this method to calculations based on geometric optics. Geometric optics calculations are shown to be inaccurate due to the diffraction effects created by far-field turbulence and the approximations made in the atmospheric parameters. Our analysis includes diffraction effects and properly accounts for the effect of the atmospheric turbulence scale sizes. We show that for any atmospheric C_n^2 profile, the actual OTF is always better than the OTF calculated using geometric optics. The magnitude of the difference between the calculation methods is shown to be dependent on the amount of far-field turbulence and the values of the outer scale dimension.

  2. A Novel Continuation Power Flow Method Based on Line Voltage Stability Index

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan

    2018-01-01

    A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterized lines, which are continually updated as the load changes. The calculation stages of the continuation power flow are determined by the angle changes of the direction vector of the prediction equation. An adaptive step-length control strategy is then used to compute the next prediction direction and value according to the calculation stage. The proposed method has a clear physical concept and high computing speed, and it captures the local characteristics of voltage instability, revealing the weak nodes and weak areas in a power system. Because the PV curves are calculated more completely, the proposed method offers advantages in analysing the voltage stability margin of large-scale power grids.

  3. Determination of representative dimension parameter values of Korean knee joints for knee joint implant design.

    PubMed

    Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu

    2012-05-01

    Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.

  4. 19 CFR 351.406 - Calculation of normal value if sales are made at less than cost of production.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value if sales are made at less than cost of production. 351.406 Section 351.406 Customs Duties INTERNATIONAL TRADE ADMINISTRATION... Price, Fair Value, and Normal Value § 351.406 Calculation of normal value if sales are made at less than...

  5. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, along with clinically relevant study of new dosimetric rules.

  6. Theoretical rate constants of super-exchange hole transfer and thermally induced hopping in DNA.

    PubMed

    Shimazaki, Tomomi; Asai, Yoshihiro; Yamashita, Koichi

    2005-01-27

    Recently, the electronic properties of DNA have been extensively studied, because its conductivity is important not only to the study of fundamental biological problems, but also in the development of molecular-sized electronics and biosensors. We have studied theoretically the reorganization energies, the activation energies, the electronic coupling matrix elements, and the rate constants of hole transfer in B-form double-helix DNA in water. To accommodate the effects of DNA nuclear motions, a subset of reaction coordinates for hole transfer was extracted from classical molecular dynamics (MD) trajectories of DNA in water and then used for ab initio quantum chemical calculations of electron coupling constants based on the generalized Mulliken-Hush model. A molecular mechanics (MM) method was used to determine the nuclear Franck-Condon factor. The rate constants for two mechanisms of hole transfer, thermally induced hopping (TIH) and super-exchange, were determined based on Marcus theory. We found that the calculated matrix elements are strongly dependent on the conformations of the nucleobase pairs of hole-transferable DNA and extend over a wide range of values for the "rise" base-step parameter but cluster around a particular value for the "twist" parameter. The calculated activation energies are in good agreement with experimental results. Whereas the rate constant for the TIH mechanism is not dependent on the number of A-T nucleobase pairs that act as a bridge, the rate constant for the super-exchange process rapidly decreases when the length of the bridge increases. These characteristic trends in the calculated rate constants effectively reproduce those in the experimental data of Giese et al. [Nature 2001, 412, 318]. The calculated rate constants were also compared with the experimental results of Lewis et al. [Nature 2000, 406, 51].
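The rate constants described here follow the standard non-adiabatic (high-temperature) Marcus expression, which depends on the electronic coupling, the reorganization energy, and the driving force. A generic implementation of that textbook formula, not the authors' code (all energies in eV):

```python
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(h_da, lam, dg, temp=298.0):
    """Non-adiabatic Marcus rate constant (s^-1):
        k = (2*pi/hbar) |H_DA|^2 (4*pi*lam*kB*T)^(-1/2)
            * exp(-(dG + lam)^2 / (4*lam*kB*T))
    h_da : electronic coupling |H_DA| in eV
    lam  : reorganization energy in eV
    dg   : driving force (free energy change) in eV
    """
    kbt = KB * temp
    prefactor = (2 * math.pi / HBAR) * h_da ** 2
    franck_condon = math.exp(-(dg + lam) ** 2 / (4 * lam * kbt)) \
        / math.sqrt(4 * math.pi * lam * kbt)
    return prefactor * franck_condon
```

The rate is maximal in the activationless case dg = -lam and falls off symmetrically on either side (the normal and inverted regimes).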

  7. Freshwater Mussel Shell δ13C Values as a Proxy for δ13CDIC in a Polluted, Temperate River

    NASA Astrophysics Data System (ADS)

    Graniero, L. E.; Gillikin, D. P.; Surge, D. M.

    2017-12-01

    Freshwater mussel shell δ13C values have been examined as an indicator of ambient δ13C composition of dissolved inorganic carbon (DIC) in temperate rivers. However, shell δ13C values may be obscured by the assimilation of respired, metabolic carbon (CM) derived from the organism's diet. Water δ18O and δ13CDIC values were collected fortnightly from August 2015 through July 2017 from three sites (one agricultural, one downstream of a wastewater treatment plant, one urban) in the Neuse River, NC to test the reliability of Elliptio complanata shell δ13C values as a proxy for δ13CDIC values. Muscle, mantle, gill, and stomach δ13C values were analyzed to approximate the %CM incorporated into the shell. All tissue δ13C values were within 2‰ of each other, which equates to a ±1% difference in calculated %CM. As such, muscle tissue δ13C values will be used for calculating the %CM, because they have the slowest turnover rate of the tissues sampled. Water temperature and δ18O values were used to calculate predicted aragonite shell δ18O values (δ18Oar) based on the aragonite-water fractionation relationship. To assign dates to each shell microsample, predicted δ18Oar values were compared to high-resolution serially sampled shell values. Consistent with previous studies, E. complanata cease growth in winter when temperatures are below about 12°C. Preliminary results indicate that during the growing season, shell δ13C values are lower than expected equilibrium values, reflecting the assimilation of 15% CM, on average. Shell δ13C values are not significantly different than δ13CDIC values, but do not capture the full range of δ13CDIC values during each growing season. Thus, δ13C values of E. complanata shells can be used to reliably reconstruct past δ13CDIC values within 2‰ of coeval values. Further research will investigate how differing land-use affects the relationship between shell δ13C, CM, and δ13CDIC values.
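The %CM estimate described above is conventionally obtained from a two-endmember mixing model between DIC-derived and metabolic (diet-derived) carbon. A sketch under that assumption; the function name and the aragonite-DIC enrichment factor used in the test are illustrative choices, not values taken from the abstract:

```python
def percent_metabolic_carbon(d13c_shell, d13c_dic, d13c_tissue, eps_ar=2.7):
    """Two-endmember mixing estimate of metabolic carbon in shell carbonate:
        d13c_shell = f * d13c_tissue + (1 - f) * d13c_dic + eps_ar
    where f is the metabolic fraction and eps_ar is the aragonite-DIC
    enrichment factor (the ~2.7 permil default is an assumption here).
    Returns f as a percentage."""
    return 100.0 * (d13c_shell - eps_ar - d13c_dic) / (d13c_tissue - d13c_dic)
```

Solving the mixing equation for f is the step that converts the measured shell, water, and tissue δ13C values into the reported ~15% CM.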

  8. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
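The linear extrapolation method challenged in this study fits chamber response against added wall thickness and extrapolates to zero wall. A generic least-squares sketch of that procedure (an illustration of the technique, not the authors' analysis code):

```python
def extrapolated_wall_correction(thicknesses, currents, wall_thickness):
    """Linear-extrapolation estimate of the wall correction: fit ionization
    current versus total wall thickness with ordinary least squares,
    extrapolate to zero thickness, and return K = I(0) / I(nominal wall)."""
    n = len(thicknesses)
    mx = sum(thicknesses) / n
    my = sum(currents) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(thicknesses, currents)) \
        / sum((x - mx) ** 2 for x in thicknesses)
    intercept = my - slope * mx                  # extrapolated I at zero wall
    return intercept / (intercept + slope * wall_thickness)
```

The abstract's point is that this linear extrapolation disagrees with Monte Carlo K(wall) values by up to 2%, so the sketch shows what the challenged method computes, not what should be used.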

  9. Calculation of water equivalent thickness of materials of arbitrary density, elemental composition and thickness in proton beam irradiation

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Newhauser, Wayne D.

    2009-03-01

    In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, the WET calculations required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and stopping power ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
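For a "radiologically thin" slab, WET is commonly approximated by scaling the physical thickness by the density ratio and the mass-stopping-power ratio evaluated at the incident energy. A sketch of that thin-target approximation; the numerical inputs in the usage below are placeholders, not evaluated stopping-power data:

```python
def wet_thin_target(thickness_cm, rho_m, rho_w, msp_m, msp_w):
    """Water-equivalent thickness (cm) of a radiologically thin slab:
        WET = t_m * (rho_m / rho_w) * [(S/rho)_m / (S/rho)_w]
    thickness_cm : physical thickness of the material slab
    rho_m, rho_w : densities of the material and of water (g/cm^3)
    msp_m, msp_w : mass stopping powers of material and water at the
                   incident proton energy (MeV cm^2/g)
    """
    return thickness_cm * (rho_m / rho_w) * (msp_m / msp_w)
```

For radiologically thick or high-Z targets the abstract notes this single-energy evaluation breaks down, and the stopping powers must be averaged over the energy lost in the slab.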

  10. A nephron-based model of the kidneys for macro-to-micro α-particle dosimetry

    NASA Astrophysics Data System (ADS)

    Hobbs, Robert F.; Song, Hong; Huso, David L.; Sundel, Margaret H.; Sgouros, George

    2012-07-01

    Targeted α-particle therapy is a promising treatment modality for cancer. Due to the short path-length of α-particles, the potential efficacy and toxicity of these agents is best evaluated by microscale dosimetry calculations instead of whole-organ, absorbed fraction-based dosimetry. Yet time-integrated activity (TIA), the necessary input for dosimetry, can still only be quantified reliably at the organ or macroscopic level. We describe a nephron- and cellular-based kidney dosimetry model for α-particle radiopharmaceutical therapy, more suited to the short range and high linear energy transfer of α-particle emitters, which takes as input kidney or cortex TIA and through a macro to micro model-based methodology assigns TIA to micro-level kidney substructures. We apply a geometrical model to provide nephron-level S-values for a range of isotopes allowing for pre-clinical and clinical applications according to the medical internal radiation dosimetry (MIRD) schema. We assume that the relationship between whole-organ TIA and TIA apportioned to microscale substructures as measured in an appropriate pre-clinical mammalian model also applies to the human. In both, the pre-clinical and the human model, microscale substructures are described as a collection of simple geometrical shapes akin to those used in the Cristy-Eckerman phantoms for normal organs. Anatomical parameters are taken from the literature for a human model, while murine parameters are measured ex vivo. The murine histological slides also provide the data for volume of occupancy of the different compartments of the nephron in the kidney: glomerulus versus proximal tubule versus distal tubule. Monte Carlo simulations are run with activity placed in the different nephron compartments for several α-particle emitters currently under investigation in radiopharmaceutical therapy. 
The S-values were calculated for the α-emitters and their descendants between the different nephron compartments for both the human and murine models. The renal cortex and medulla S-values were also calculated and the results compared to traditional absorbed fraction calculations. The nephron model enables a more optimal implementation of treatment and is a critical step in understanding toxicity for human translation of targeted α-particle therapy. The S-values established here will enable a MIRD-type application of α-particle dosimetry for α-emitters, i.e. measuring the TIA in the kidney (or renal cortex) will provide meaningful and accurate nephron-level dosimetry.

  11. An economic-research-based approach to calculate community health-staffing requirements in Xicheng District, Beijing.

    PubMed

    Yin, Delu; Yin, Tao; Yang, Huiming; Xin, Qianqian; Wang, Lihong; Li, Ninyan; Ding, Xiaoyan; Chen, Bowen

    2016-12-07

    A shortage of community health professionals has been a crucial issue hindering the development of community health services (CHS). Various methods have been established to calculate health workforce requirements. This study aimed to use an economic-research-based approach to calculate the number of community health professionals required to provide community health services in the Xicheng District of Beijing and then assess current staffing levels against this ideal. Using questionnaires, we collected relevant data from 14 community health centers in the Xicheng District, including resident population, number of different health services provided, and service volumes. Through 36 interviews with family doctors, nurses, and public health workers, and six focus groups, we were able to calculate the person-time (equivalent value) required for each community health service. Field observations were conducted to verify the duration. In the 14 community health centers in Xicheng District, 1752 health workers were found in our four categories, serving a population of 1.278 million. Total demand for community health services outstripped supply for doctors, nurses, and public health workers, but not other professionals. The method suggested that to properly serve the study population an additional 64 family doctors, 40 nurses, and 753 public health workers would be required. Our calculations indicate that significant numbers of new health professionals are required to deliver community health services. We established time standards in minutes (equivalent value) for each community health service activity, which could be applied elsewhere in China by government planners and civil society advocates.
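The staffing arithmetic implied by this approach divides the total annual person-minutes of demand (service volume times the time standard per service) by the productive minutes one worker supplies per year. A minimal sketch with made-up numbers; the service names and time standards are illustrative, not the study's values:

```python
def required_staff(service_volumes, minutes_per_service, minutes_per_worker_year):
    """Economic-research-based staffing estimate:
    total annual person-minutes of demand divided by the productive
    minutes one full-time worker supplies per year.
    service_volumes    : {service: annual count}
    minutes_per_service: {service: time standard in minutes}
    """
    total_minutes = sum(service_volumes[s] * minutes_per_service[s]
                        for s in service_volumes)
    return total_minutes / minutes_per_worker_year
```

Comparing this figure against current headcount per job category (doctor, nurse, public health worker) gives the surplus or shortfall reported in the abstract.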

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, S; Mazur, T; Li, H

    Purpose: The aim of this study was to demonstrate the feasibility and credibility of computing and verifying 3D fluences to assure IMRT and VMAT treatment deliveries, by correlating the passing rates of 3D fluence-based QA (P(α)) with the passing rates of 2D dose-measurement-based QA (P(Dm)). Methods: 3D volumetric primary fluences are calculated by forward-projecting the beam apertures, modulated by beam MU values at all gantry angles. We first introduce simulated machine parameter errors (MU, MLC positions, jaw, gantry and collimator) into the plan. Using passing rates of voxel intensity differences (P(Ir)) and 3D gamma analysis (P(γ)), calculated 3D fluences, calculated 3D delivered dose, and measured 2D planar dose in phantom from the original plan are then compared with those from the corresponding plans with errors. The correlations of these three groups of resultant passing rates, i.e. 3D fluence-based QA (P(α,Ir) and P(α,γ)), calculated 3D dose (P(Dc,Ir) and P(Dc,γ)), and 2D dose-measurement-based QA (P(Dm,Ir) and P(Dm,γ)), are investigated. Results: 20 treatment plans with 5 different types of errors were tested. Spearman's correlations were found between P(α,Ir) and P(Dc,Ir), and also between P(α,γ) and P(Dc,γ), with averaged p-values of 0.037 and 0.065 and averaged correlation coefficients (ρ) of 0.942 and 0.871, respectively. Using Matrixx QA for IMRT plans, Spearman's correlations were also obtained between P(α,Ir) and P(Dm,Ir), and between P(α,γ) and P(Dm,γ), with p-values of 0.048 and 0.071 and ρ-values of 0.897 and 0.779, respectively. Conclusion: The demonstrated correlations improve the credibility of using 3D fluence-based QA for assuring treatment deliveries of IMRT/VMAT plans. Together with the advantages of high detection sensitivity and better visualization of machine parameter errors, this study further demonstrates the accuracy and feasibility of 3D fluence-based QA in pre-treatment and daily QA.
Research reported in this study was supported by the Agency for Healthcare Research and Quality (AHRQ) under award 1R01HS0222888. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.

  13. Energy barriers and rates of tautomeric transitions in DNA bases: ab initio quantum chemical study.

    PubMed

    Basu, Soumalee; Majumdar, Rabi; Das, Gourab K; Bhattacharyya, Dhananjay

    2005-12-01

    Tautomeric transitions of DNA bases are proton transfer reactions, which are important in biology. These reactions are involved in spontaneous point mutations of the genetic material. In the present study, intrinsic reaction coordinate (IRC) analyses through ab initio quantum chemical calculations have been carried out for the individual DNA bases A, T, G, C and also the A:T and G:C base pairs to estimate the kinetic and thermodynamic barriers for tautomeric transitions using the MP2/6-31G** method. Relatively higher values of kinetic barriers (about 50-60 kcal/mol) have been observed for the single bases, indicating that tautomeric alterations of isolated single bases are quite unlikely. On the other hand, relatively lower values of the kinetic barriers (about 20-25 kcal/mol) for the DNA base pairs A:T and G:C clearly suggest that the tautomeric shifts are much more favorable in DNA base pairs than in isolated single bases. The unusual base pairing A':C, T':G, C':A or G':T in the daughter DNA molecule, resulting from a parent DNA molecule with tautomeric shifts, is found to be stable enough to result in a mutation. The transition rate constants for the single DNA bases in addition to the base pairs are also calculated by computing the free energy differences between the transition states and the reactants.

  14. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    PubMed

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. 
On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.

  15. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
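Method 1, the classical Monte Carlo p-value, can be sketched generically: generate many test statistics under the null and count how often they reach the observed value, with a +1 correction so the estimate is never exactly zero. This illustrates the general technique only, not the vxdbel implementation:

```python
import random

def mc_p_value(observed_stat, stat_fn, null_sampler, n_rep=2000, rng=None):
    """Classical Monte Carlo p-value estimate: the fraction of statistics
    computed on null-distributed resamples that are at least as extreme as
    the observed statistic, using the (hits + 1) / (n_rep + 1) correction.
    stat_fn      : maps one resampled data set to a scalar test statistic
    null_sampler : maps an RNG to one data set drawn under the null
    """
    rng = rng or random.Random(0)
    hits = sum(stat_fn(null_sampler(rng)) >= observed_stat
               for _ in range(n_rep))
    return (hits + 1) / (n_rep + 1)
```

The hybrid method described in the abstract would then combine such Monte Carlo draws with tabulated critical values treated as prior information.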

  16. The deoxyribonucleic acid of Micrococcus radiodurans

    PubMed Central

    Schein, Arnold H.

    1966-01-01

    The DNA of Micrococcus radiodurans was prepared by three methods. Although the recovery of DNA varied considerably, the percentage molar base ratios of the DNA from the three preparations were essentially the same: guanine, 33±2; adenine, 18±1; cytosine, 33±2; thymine, 17±1. Base compositions calculated from Tm values and from density in caesium chloride gradients also yielded guanine+cytosine contents of 66 and 68% of total bases respectively. No unusual bases were observed. The S20,w values were characteristic of high-molecular-weight DNA. Electron microscopy showed the purified DNA in long strands; occasionally these were coiled. PMID:16742439
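Base composition can be back-calculated from Tm and from buoyant density using the classical empirical relations (Marmur-Doty and Schildkraut-Lifson). The coefficients below are the commonly quoted ones for DNA in standard saline-citrate and CsCl respectively, not values stated in this paper:

```python
def gc_from_tm(tm_celsius):
    """Percent G+C from melting temperature via the Marmur-Doty relation
    (DNA in standard saline-citrate): Tm = 69.3 + 0.41 * %GC."""
    return (tm_celsius - 69.3) / 0.41

def gc_from_density(rho):
    """Percent G+C from CsCl buoyant density via the Schildkraut-Lifson
    relation: rho = 1.660 + 0.00098 * %GC  (g/cm^3)."""
    return (rho - 1.660) / 0.00098
```

Applying both relations to independent measurements, as done here, gives a consistency check on the chromatographically determined base ratios.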

  17. Study of the acid-base properties of mineral soil horizons using pK spectroscopy

    NASA Astrophysics Data System (ADS)

    Shamrikova, E. V.; Vanchikova, E. V.; Ryazanov, M. A.

    2007-11-01

    The presence of four to five groups participating in acid-base equilibria was revealed in samples from mineral horizons of the gley-podzolic soil of the Komi Republic using pK spectroscopy (the mathematical processing of potentiometric titration curves for plotting the distribution of acid groups according to their pK values). The specific quantity of acid-base sites in soil samples was calculated. The contribution of organic and mineral soil components to the groups of acid-base sites was estimated. The pK values of groups determining the potential, exchangeable, and unexchangeable acidities were found. The heterogeneity of acid components determining different types of soil acidity was revealed.

  18. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation.

    PubMed

    Yakimov, Eugene B

    2016-06-01

    An approach for predicting the output parameters of a 63Ni-based betavoltaic battery is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron-beam-induced current measurements, a calculation of the current induced in the semiconductor converter by beta radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.
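The induced-current step described here amounts to integrating the depth-resolved generation rate weighted by the collection probability. A generic trapezoidal sketch of that integral (profiles and units in the usage are illustrative, not simulation output):

```python
def induced_current(depths, generation_rate, collection_prob, q=1.602e-19):
    """Beta-induced current (A): elementary charge times the depth integral of
    the excess carrier generation rate (carriers per unit depth per second)
    weighted by the collection probability, via the trapezoidal rule.
    depths, generation_rate, collection_prob : equal-length samples in depth.
    """
    weighted = [g * cp for g, cp in zip(generation_rate, collection_prob)]
    return q * sum((weighted[i] + weighted[i + 1]) / 2
                   * (depths[i + 1] - depths[i])
                   for i in range(len(depths) - 1))
```

In the described approach, the generation profile comes from the multilayer Monte Carlo step and the collection probability from the EBIC measurements.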

  19. Fragment-based approach to calculate hydrophobicity of anionic and nonionic surfactants derived from chromatographic retention on a C18 stationary phase.

    PubMed

    Hammer, Jort; Haftka, Joris J-H; Scherpenisse, Peter; Hermens, Joop L M; de Voogt, Pim W P

    2017-02-01

    To predict the fate and potential effects of organic contaminants, information about their hydrophobicity is required. However, common parameters to describe the hydrophobicity of organic compounds (e.g., the octanol-water partition constant [KOW]) proved to be inadequate for ionic and nonionic surfactants because of their surface-active properties. As an alternative approach to determine their hydrophobicity, the aim of the present study was therefore to measure the retention of a wide range of surfactants on a C18 stationary phase. Capacity factors in pure water (k'0) increased linearly with increasing number of carbon atoms in the surfactant structure. Fragment contribution values were determined for each structural unit with multilinear regression, and the results were consistent with the expected influence of these fragments on the hydrophobicity of surfactants. Capacity factors of reference compounds and log KOW values from the literature were used to estimate log KOW values for surfactants (log KOW,HPLC). These log KOW,HPLC values were also compared to log KOW values calculated with 4 computational programs: KOWWIN, Marvin calculator, SPARC, and COSMOThermX. In conclusion, capacity factors from a C18 stationary phase are found to better reflect the hydrophobicity of surfactants than their KOW values. Environ Toxicol Chem 2017;36:329-336. © 2016 The Authors. Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc. on behalf of SETAC.
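Determining fragment contribution values with multilinear regression can be sketched as an ordinary least-squares fit of log k'0 against per-fragment counts. This is a generic illustration of the technique, not the authors' exact model or data:

```python
import numpy as np

def fragment_contributions(fragment_counts, log_k0):
    """Least-squares fragment contributions: solve  X b ~= log k'0,
    where row i of X holds an intercept column followed by the count of
    each structural fragment (e.g. CH2 units, ethoxylate units) in
    surfactant i. Returns b: b[0] is the intercept, b[1:] the per-fragment
    contributions to log k'0."""
    counts = np.asarray(fragment_counts, dtype=float)
    X = np.hstack([np.ones((counts.shape[0], 1)), counts])
    b, *_ = np.linalg.lstsq(X, np.asarray(log_k0, dtype=float), rcond=None)
    return b
```

With one fragment type and a perfectly linear series, the fit recovers the per-fragment increment exactly, mirroring the linear increase of k'0 with carbon number reported in the abstract.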

  20. Using physiologically based pharmacokinetic modeling and benchmark dose methods to derive an occupational exposure limit for N-methylpyrrolidone.

    PubMed

    Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R

    2016-04-01

    The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg·h/L) was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Calculation and affection of pH value of different desulfurization and dehydration rates in the filling station based on Aspen Plus

    NASA Astrophysics Data System (ADS)

    Lv, J. X.; Wang, B. F.; Nie, L. H.; Xu, R. R.; Zhou, J. Y.; Hao, Y. J.

    2018-01-01

    A simulation of the complete CNG filling station process was established using Aspen Plus V7.2. A separator block (Sep) was used to model the desulfurization and dehydration equipment in the gas station, and a flash module (Flash2) was used to model the gas storage well at the appropriate temperature and ambient pressure. Furthermore, the sensitivity module was used to analyse the effect of the dehydration and desulfurization rates; the residual pH value in the gas storage wells ranged between 2.2 and 3.3. The results indicated that water content has a stronger effect on pH than hydrogen sulphide in the gas storage well environment, and that the pH calculation procedure is feasible. Additionally, the simulation provides basic data for subsequent work on the anticorrosion mechanism of gas storage wells and has great potential for practical applications.

  2. A Monte Carlo study of the impact of the choice of rectum volume definition on estimates of equivalent uniform doses and the volume parameter

    NASA Astrophysics Data System (ADS)

    Kvinnsland, Yngve; Muren, Ludvig Paul; Dahl, Olav

    2004-08-01

    Calculations of normal tissue complication probability (NTCP) values for the rectum are difficult because it is a hollow, non-rigid organ. Finding the true cumulative dose distribution for a number of treatment fractions requires a CT scan before each treatment fraction. This is labour intensive, and several surrogate distributions have therefore been suggested, such as dose wall histograms, dose surface histograms and histograms for the solid rectum, with and without margins. In this study, a Monte Carlo method is used to investigate the relationships, in terms of equivalent uniform dose, between the cumulative dose distributions based on all treatment fractions and the above-mentioned histograms that are based on one CT scan only. Furthermore, the effect of a specific choice of histogram on estimates of the volume parameter of the probit NTCP model was investigated. It was found that the solid rectum and the rectum wall histograms (without margins) gave equivalent uniform doses with an expected value close to the values calculated from the cumulative dose distributions in the rectum wall. With the number of patients available in this study, the standard deviations of the estimates of the volume parameter were large, and it was not possible to decide which volume definition gave the best estimates of the volume parameter, although there were distinct differences in the mean values obtained.
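    The equivalent uniform dose used as the comparison metric above has a standard generalized form. As a rough illustration (a sketch, not the authors' code), it can be computed from a dose-volume histogram, where the exponent a is related to the volume parameter n of the NTCP model (a = 1/n):

```python
import numpy as np

def gEUD(doses, volumes, a):
    """Generalized equivalent uniform dose: (sum_i v_i * d_i**a)**(1/a).

    doses:   dose in each histogram bin (Gy)
    volumes: (relative or absolute) volume in each bin; normalized here
    a:       volume-effect exponent (a = 1/n of the NTCP model)
    """
    d = np.asarray(doses, dtype=float)
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                      # normalize to relative volumes
    return float(np.sum(v * d ** a) ** (1.0 / a))
```

For a uniform dose the gEUD equals that dose for any a; a large a weights hot spots heavily, as expected for a serial structure such as the rectum wall.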

  3. Delta13C values of grasses as a novel indicator of pollution by fossil-fuel-derived greenhouse gas CO2 in urban areas.

    PubMed

    Lichtfouse, Eric; Lichtfouse, Michel; Jaffrézic, Anne

    2003-01-01

    A novel fossil fuel pollution indicator based on the 13C/12C isotopic composition of plants has been designed. This bioindicator is a promising tool for future mapping of the sequestration of fossil fuel CO2 into urban vegetation. Theoretically, plants growing in fossil-fuel-CO2-contaminated areas, such as major cities, industrial centers, and highway borders, should assimilate a mixture of global atmospheric CO2, with a delta13C value of -8.02 per thousand, and fossil fuel CO2, with an average delta13C value of -27.28 per thousand. This isotopic difference should thus be recorded in plant carbon. Indeed, this study reveals that grasses growing near a major highway in Paris, France, have strikingly depleted delta13C values, averaging -35.08 per thousand, whereas rural grasses show an average delta13C value of -30.59 per thousand. A simple mixing model was used to calculate the contributions of fossil-fuel-derived CO2 to the plant tissue. Calculation based on contaminated and noncontaminated isotopic end members shows that urban grasses assimilate up to 29.1% fossil-fuel-CO2-derived carbon in their tissues. The 13C isotopic composition of grasses thus represents a promising new tool for studying the impact of fossil fuel CO2 in major cities.
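    A two-end-member mixing calculation of the kind described can be sketched as follows. The delta13C values are taken from the abstract, but assuming the same photosynthetic discrimination for both CO2 sources is our illustrative simplification; the paper's own end-member choices yield contributions of up to 29.1%.

```python
# End-member delta13C values from the abstract (per mil)
D13C_ATM    = -8.02    # global atmospheric CO2
D13C_FOSSIL = -27.28   # average fossil fuel CO2
D13C_RURAL  = -30.59   # rural grasses (clean-air reference)
D13C_URBAN  = -35.08   # urban grasses near the highway

# Photosynthetic discrimination inferred from rural grasses; applying
# the same offset to fossil CO2 is an assumption for illustration.
discrimination = D13C_RURAL - D13C_ATM               # about -22.57 per mil
d13c_fossil_plant = D13C_FOSSIL + discrimination     # fossil-derived plant C

# Linear mixing between the two plant-carbon end members
frac_fossil = (D13C_URBAN - D13C_RURAL) / (d13c_fossil_plant - D13C_RURAL)
```

With these particular end members the fossil-derived fraction comes out near 23%; the exact figure depends entirely on the end-member values chosen.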

  4. Determination of Watershed Lag Equation for Philippine Hydrology

    NASA Astrophysics Data System (ADS)

    Cipriano, F. R.; Lagmay, A. M. F. A.; Uichanco, C.; Mendoza, J.; Sabio, G.; Punay, K. N.; Oquindo, M. R.; Horritt, M.

    2014-12-01

    Widespread flooding is a major problem in the Philippines. The country experiences heavy rainfall throughout the year, and several areas are prone to flood hazards because of its unique topography. Human casualties and destruction of infrastructure are among the damages caused by flooding, and the country's government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. Producing these maps requires different types of data, part of which involves calculating hydrological components to arrive at an accurate output. This paper presents how an important parameter, the time-to-peak of the watershed (Tp), was calculated. Time-to-peak is defined as the time at which the largest discharge of the watershed occurs. It is computed using a lag time equation developed specifically for the Philippine setting. The equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S), and watershed slope (Y). This approach is based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has a set of meteorological and hydrological parameters similar to that of the Philippines. Data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the time-to-peak. These sensors were chosen through a screening process that considered the distance of the sensors from the sea, the availability of recorded data, and the catchment size. Values of Tp from the different sensors were generated from the general lag time equation in the Natural Resources Conservation Service handbook of the US Department of Agriculture. The calculated Tp values were plotted against the values obtained from the expression L^0.8(S+1)^0.7/Y^0.5. Regression analysis was used to obtain the final equation for calculating the time-to-peak specifically for rivers in the Philippine setting. The calculated values can then be used as a parameter for modeling different flood scenarios in the country.
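    The general lag relation that the regression starts from has a well-known US-customary form in the NRCS handbook; a minimal sketch is below. The 1900 constant is the classical US-customary coefficient, and the Philippine-specific coefficient derived by the paper's regression is not reproduced here.

```python
def nrcs_lag_hours(length_ft, curve_number, slope_pct):
    """Classical NRCS (SCS) watershed lag in hours.

    length_ft:     hydraulic length of the watershed, feet
    curve_number:  NRCS runoff curve number (dimensionless)
    slope_pct:     average watershed slope, percent
    """
    s = 1000.0 / curve_number - 10.0     # max potential retention S, inches
    return length_ft ** 0.8 * (s + 1.0) ** 0.7 / (1900.0 * slope_pct ** 0.5)
```

The time-to-peak then follows as Tp = D/2 + Tlag for a rainfall burst of duration D; refitting the leading coefficient against observed Tp values is the regression step described in the abstract.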

  5. Complexity metric based on fraction of penumbra dose - initial study

    NASA Astrophysics Data System (ADS)

    Bäck, A.; Nordström, F.; Gustafsson, M.; Götstedt, J.; Karlsson Hauer, A.

    2017-05-01

    Volumetric modulated arc therapy improves radiotherapy outcomes for many patients compared to conventional three-dimensional conformal radiotherapy, but requires a more extensive, most often measurement-based, quality assurance. Multileaf collimator (MLC) aperture-based complexity metrics have been suggested as a means to identify complex treatment plans unsuitable for treatment without time-consuming measurements. This study introduces a spatially resolved complexity score that correlates with the fraction of penumbra dose and gives information on the spatial distribution and the clinical relevance of the calculated complexity. The complexity metric is described, and an initial study on the correlation between the complexity score and the difference between measured and calculated dose for 30 MLC openings is presented. The complexity scores were found to correlate with the differences between measurements and calculations, with a Pearson's r value of 0.97.

  6. A theory for the fracture of thin plates subjected to bending and twisting moments

    NASA Technical Reports Server (NTRS)

    Hui, C. Y.; Zehnder, Alan T.

    1993-01-01

    Stress fields near the tip of a through crack in an elastic plate under bending and twisting moments are reviewed assuming both Kirchhoff and Reissner plate theories. The crack tip displacement and rotation fields based on the Reissner theory are calculated. These results are used to calculate the J-integral (energy release rate) for both Kirchhoff and Reissner plate theories. Invoking Simmonds and Duva's (1981) result that the value of the J-integral based on either theory is the same for thin plates, a universal relationship between the Kirchhoff theory stress intensity factors and the Reissner theory stress intensity factors is obtained for thin plates. Calculation of Kirchhoff theory stress intensity factors from finite elements based on energy release rate is illustrated. It is proposed that, for thin plates, fracture toughness and crack growth rates be correlated with the Kirchhoff theory stress intensity factors.

  7. Effect of wave function on the proton induced L XRP cross sections for {sub 62}Sm and {sub 74}W

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shehla,; Kaur, Rajnish; Kumar, Anil

    The Lk (k = l, α, β, γ) X-ray production cross sections have been calculated for 74W and 62Sm at incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections, based on the DHS model and on the ECPSSR theory, in order to assess the influence of the wave function on the XRP cross sections. The calculated cross sections have been compared with the measured cross sections reported in the recent compilation to check the reliability of the calculated values.

  8. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and control execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provide the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values is loaded into the register and the set of output values is unloaded from the register in parallel with processing of the current calculation set.

  9. Target virus log10 reduction values determined for two reclaimed wastewater irrigation scenarios in Japan based on tolerable annual disease burden.

    PubMed

    Ito, Toshihiro; Kitajima, Masaaki; Kato, Tsuyoshi; Ishii, Satoshi; Segawa, Takahiro; Okabe, Satoshi; Sano, Daisuke

    2017-11-15

    Multiple barriers are widely employed for managing microbial risks in water reuse, in which different types of wastewater treatment units (biological treatment, disinfection, etc.) and health protection measures (use of personal protective gear, vegetable washing, etc.) are combined to achieve a performance target value of log10 reduction (LR) of viruses. The virus LR target value needs to be calculated based on data obtained from monitoring the viruses of concern and on the water reuse scheme in the context of the countries/regions where water reuse is implemented. In this study, we calculated virus LR target values under two exposure scenarios for reclaimed wastewater irrigation in Japan, using the concentrations of indigenous viruses in untreated wastewater and a defined tolerable annual disease burden (10^-4 or 10^-6 disability-adjusted life years per person per year (DALYpppy)). Three genogroups of norovirus (genogroup I (NoV GI), genogroup II (NoV GII), and genogroup IV (NoV GIV)) in untreated wastewater were quantified as model viruses using reverse transcription-microfluidic quantitative PCR, and only NoV GII was present at quantifiable concentrations. The probabilistic distribution of the NoV GII concentration in untreated wastewater was then estimated from its concentration dataset and used to calculate the LR target values of NoV GII for wastewater treatment. When accidental ingestion of reclaimed wastewater by Japanese farmers was assumed, the NoV GII LR target values corresponding to the tolerable annual disease burden of 10^-6 DALYpppy were 3.2, 4.4, and 5.7 at the 95th, 99th, and 99.9th percentiles, respectively. These percentile values, defined as "reliability," represent the cumulative probability of the NoV GII concentration distribution in untreated wastewater below the corresponding tolerable annual disease burden after wastewater reclamation. An approximately 1-log10 difference in LR target values was observed between 10^-4 and 10^-6 DALYpppy. The LR target values were influenced mostly by changes in the logarithmic standard deviation (SD) of the NoV GII concentration in untreated wastewater and in the reliability values, which highlights the importance of accurately determining the probabilistic distribution of reference virus concentrations in source water for water reuse. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
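    The back-calculation from a tolerable annual burden to an LR target follows a standard QMRA chain, which can be sketched as below. Every parameter value in this sketch is an illustrative assumption, not one of the study's Japanese scenario inputs, and the study evaluates the source concentration as a fitted probability distribution rather than a single number.

```python
import math

# Illustrative, assumed parameters (NOT the study's values)
TOL_BURDEN      = 1e-6   # tolerable burden, DALY per person per year
DALY_PER_CASE   = 6e-4   # DALY per case of illness (assumed)
P_ILL_GIVEN_INF = 0.7    # probability of illness given infection (assumed)
R               = 0.1    # exponential dose-response parameter (assumed)
VOLUME_L        = 1e-3   # ingestion per exposure event, litres (assumed 1 mL)
EVENTS_PER_YEAR = 365    # exposure events per year (assumed)

def lr_target(c_source_per_litre):
    """log10 reduction needed so the annual burden stays below TOL_BURDEN."""
    # tolerable annual, then per-event, probability of infection
    p_inf_year = TOL_BURDEN / (DALY_PER_CASE * P_ILL_GIVEN_INF)
    p_inf_event = 1.0 - (1.0 - p_inf_year) ** (1.0 / EVENTS_PER_YEAR)
    # allowable dose from the exponential dose-response model
    dose_allowed = -math.log(1.0 - p_inf_event) / R
    c_allowed = dose_allowed / VOLUME_L          # allowable copies per litre
    return math.log10(c_source_per_litre / c_allowed)
```

In the paper the LR target is then evaluated at the 95th, 99th and 99.9th percentiles of the fitted NoV GII concentration distribution, which is what the "reliability" levels refer to.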

  10. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1, whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
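    The post-test probability read off a Fagan nomogram is simply Bayes' rule in odds form; a minimal sketch (the LR values themselves come from the paper's ROC-tangent method, not from this code):

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form, as read off a Fagan nomogram."""
    pre_odds = pretest_prob / (1.0 - pretest_prob)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio          # update by the LR
    return post_odds / (1.0 + post_odds)             # odds -> probability
```

For example, with a pretest glaucoma probability of 20% and an RNFL measurement carrying an LR of 5, the post-test probability rises to about 56%; an LR of 1 leaves the probability unchanged.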

  11. The Prevalence of Nocturia and Nocturnal Polyuria: Can New Cutoff Values Be Suggested According to Age and Sex?

    PubMed Central

    2016-01-01

    Purpose The aims of this study were to assess the prevalence of nocturia and nocturnal polyuria (NP) and to define new cutoff values according to age and sex for both conditions. Methods Data from a population-based prevalence survey conducted among a random sample of 2,128 adults were analyzed in this study. Participants were requested to fill out a questionnaire including the International Continence Society (ICS) definitions of lower urinary tract symptoms and the International Consultation on Incontinence Questionnaire - Short Form. Additionally, a 1-day bladder diary was given to each individual. The participants were divided into 5 age groups. The prevalence of nocturia was calculated based on definitions of nocturia as ≥1 voiding episodes, ≥2 episodes, and ≥3 episodes. NP was evaluated according to the ICS definition. The mean±standard errors and 95th percentile values were calculated in each group as new cutoff values for NP. Results The prevalence of nocturia was estimated as 28.4%, 17.6%, and 8.9% for ≥1, ≥2, and ≥3 voiding episodes each night, respectively. When nocturia was defined as 2 or more voiding episodes at night, the prevalence decreased significantly. The mean NP index was 29.4%±15.0% in men and 23.1%±11.8% in women. For the age groups of <50 years, 50–59 years, and ≥60 years, the new cutoff values for the diagnosis of NP were calculated as 48%, 69%, and 59% for men and 41%, 50%, and 42% for women, respectively. Conclusions We found that the definition of nocturia was still controversial and that waking up once for voiding might be within the normal spectrum of behavior. The definition of NP should be modified, and new cutoff values should be defined using the data presented in our study and in other forthcoming studies. PMID:28043108

  12. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and it shows clear advantages for non-stationary, volatile data sequences. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the observed values lying in each state, reflecting the degree of preference for each state in an objective way. In addition, background value optimization is applied in the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
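    The grey core of such a model is GM(1,1); a minimal sketch with an adjustable background-value weight is below (lam = 0.5 recovers the classical model, and tuning lam is one simple form of background value optimization; the paper's optimization scheme and the Markov/whitenization step are not reproduced here).

```python
import numpy as np

def gm11_forecast(x0, steps=1, lam=0.5):
    """GM(1,1) grey forecast. lam is the background-value weight
    (0.5 = classical). Returns the fitted series plus `steps` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated (1-AGO) series
    z1 = lam * x1[1:] + (1.0 - lam) * x1[:-1]       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # time-response function
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # restore by IAGO
```

In the combined model, the residuals of this fit would then be classified into states (here, via the central-point triangular whitenization weight functions) and corrected with the Markov transition probabilities.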

  13. A proposed selection index for feedlot profitability based on estimated breeding values.

    PubMed

    van der Westhuizen, R R; van der Westhuizen, J

    2009-04-22

    It is generally accepted that feed intake and growth (gain) are the most important economic components when calculating profitability in a growth test or feedlot. We developed a single post-weaning growth (feedlot) index based on the economic values of different components. Variance components, heritabilities and genetic correlations for and between initial weight (IW), final weight (FW), feed intake (FI), and shoulder height (SHD) were estimated by multitrait restricted maximum likelihood procedures. The estimated breeding values (EBVs) and the economic values for IW, FW and FI were used in a selection index to estimate a post-weaning or feedlot profitability value. Heritabilities for IW, FW, FI, and SHD were 0.41, 0.40, 0.33, and 0.51, respectively. The highest genetic correlations were 0.78 (between IW and FW) and 0.70 (between FI and FW). EBVs were used in a selection index to calculate a single economic value for each animal. This economic value is an indication of the gross profitability value or gross test value (GTV) of the animal in a post-weaning growth test. GTVs varied between -R192.17 and R231.38, with an average of R9.31 and a standard deviation of R39.96. The Pearson correlations between EBVs (for production and efficiency traits) and GTV ranged from -0.51 to 0.68. The lowest correlation (closest to zero) was 0.26, between the Kleiber ratio and GTV. Correlations of 0.68 and -0.51 were estimated between average daily gain and GTV and between feed conversion ratio and GTV, respectively. These results showed that it is possible to select for GTV. The selection index can benefit feedlots in selecting offspring of bulls with high GTVs to maximize profitability.
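    A selection index of this kind is a weighted sum of EBVs; a toy sketch is below. The economic weights and EBVs are hypothetical numbers for illustration only; the study derives its economic values from feed and market prices, which are not given in the abstract.

```python
# Hypothetical economic values (Rand per unit of EBV) and EBVs for one
# animal; the signs reflect that extra feed intake (FI) costs money while
# extra final weight (FW) earns it. All numbers are assumed.
econ_values = {"IW": -1.2, "FW": 1.5, "FI": -0.9}
ebvs        = {"IW":  4.0, "FW": 12.0, "FI": 6.0}

# Gross test value: the index value for this animal
gtv = sum(econ_values[trait] * ebvs[trait] for trait in econ_values)
```

Ranking animals by such an index, rather than by any single trait, is what allows simultaneous selection on intake and gain.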

  14. Full value documentation in the Czech Food Composition Database.

    PubMed

    Machackova, M; Holasova, M; Maskova, E

    2010-11-01

    The aim of this project was to launch a new Food Composition Database (FCDB) Programme in the Czech Republic; to implement a methodology for food description and value documentation according to the standards designed by the European Food Information Resource (EuroFIR) Network of Excellence; and to start the compilation of a pilot FCDB. Foods for the initial data set were selected from the list of foods included in the Czech Food Consumption Basket. Selection of 24 priority components was based on the range of components used in former Czech tables. The priority list was extended with components for which original Czech analytical data or calculated data were available. Values that were input into the compiled database were documented according to the EuroFIR standards within the entities FOOD, COMPONENT, VALUE and REFERENCE using Excel sheets. Foods were described using the LanguaL Thesaurus. A template for documentation of data according to the EuroFIR standards was designed. The initial data set comprised documented data for 162 foods. Values were based on original Czech analytical data (available for traditional and fast foods, milk and milk products, wheat flour types), data derived from literature (for example, fruits, vegetables, nuts, legumes, eggs) and calculated data. The Czech FCDB programme has been successfully relaunched. Inclusion of the Czech data set into the EuroFIR eSearch facility confirmed compliance of the database format with the EuroFIR standards. Excel spreadsheets are applicable for full value documentation in the FCDB.

  15. [Colorimetric characterization of LCD based on wavelength partition spectral model].

    PubMed

    Liu, Hao-Xue; Cui, Gui-Hua; Huang, Min; Wu, Bing; Xu, Yan-Fang; Luo, Ming

    2013-10-01

    To establish a colorimetric characterization model of LCDs, an experiment with EIZO CG19, IBM 19, DELL 19 and HP 19 LCDs was designed and carried out to test the interaction between RGB channels, and then to test the spectral additivity property of LCDs. The RGB digital values of single channels and pairs of channels were given and the corresponding tristimulus values were measured; a chart was then plotted and calculations were made to test the independence of the RGB channels. The results showed that the interaction between channels was reasonably weak and that the spectral additivity property held well. We also found that the relations between radiance and digital values varied with wavelength, that is, they were functions of wavelength. A new calculation method based on a piecewise spectral model, in which the relation between radiance and digital value is fitted by a cubic polynomial in each wavelength band from measured spectral radiance curves, was proposed and tested. The spectral radiance curves of the RGB primaries at any digital values can be obtained with only a few measurements and the fitted cubic polynomials, and any displayed color can then be generated from the spectral additivity of the primaries at the given digital values. The algorithm of this method is discussed in detail in this paper. The computations showed that the proposed method is simple and that the number of measurements needed is reduced greatly while keeping a very high computation precision. This method can be used as a colorimetric characterization model.
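    The per-band cubic relation between digital value and radiance can be sketched with a polynomial fit; the sample points below are invented for illustration, and in practice one such fit is made per wavelength band and per primary.

```python
import numpy as np

# Hypothetical radiance of one primary in one wavelength band,
# measured at four digital values (made-up numbers)
digital  = np.array([0.0, 85.0, 170.0, 255.0])
radiance = np.array([0.0, 1.8, 7.9, 21.5])

x = digital / 255.0                  # normalize for a well-conditioned fit
coeffs = np.polyfit(x, radiance, 3)  # cubic, one coefficient set per band

def radiance_at(d):
    """Predicted radiance in this band for any digital value d (0-255)."""
    return float(np.polyval(coeffs, d / 255.0))
```

Repeating the fit for each band gives the spectral radiance curve of a primary at any digital value; the spectrum of an arbitrary displayed colour then follows by summing the three primaries, which is the additivity property the experiment verified.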

  16. Using Clinical Data Standards to Measure Quality: A New Approach.

    PubMed

    D'Amore, John D; Li, Chun; McCrary, Laura; Niloff, Jonathan M; Sittig, Dean F; McCoy, Allison B; Wright, Adam

    2018-04-01

    Value-based payment for care requires the consistent, objective calculation of care quality. Previous initiatives to calculate ambulatory quality measures have relied on billing data or individual electronic health records (EHRs) to calculate and report performance. New methods for quality measure calculation promoted by federal regulations allow qualified clinical data registries to report quality outcomes based on data aggregated across facilities and EHRs using interoperability standards. This research evaluates the use of clinical document interchange standards as the basis for quality measurement. Using data on 1,100 patients from 11 ambulatory care facilities and 5 different EHRs, challenges to quality measurement are identified and addressed for 17 certified quality measures. Iterative solutions were identified for 14 measures that improved patient inclusion and measure calculation accuracy. Findings validate this approach to improving measure accuracy while maintaining measure certification. Organizations that report care quality should be aware of how the identified issues affect quality measure selection and calculation. Quality measure authors should consider increasing real-world validation and the consistency of measure logic with respect to the issues identified in this research. Schattauer GmbH Stuttgart.

  17. Measurement of J-integral in CAD/CAM dental ceramics and composite resin by digital image correlation.

    PubMed

    Jiang, Yanxia; Akkus, Anna; Roperto, Renato; Akkus, Ozan; Li, Bo; Lang, Lisa; Teich, Sorin

    2016-09-01

    Ceramic and composite resin blocks for CAD/CAM machining of dental restorations are becoming more common. The sample sizes affordable by these blocks are smaller than ideal for stress intensity factor (SIF) based tests. The J-integral measurement calls for full-field strain measurement, making it challenging to conduct. Accordingly, the J-integral values of dental restoration materials used in CAD/CAM restorations have not been reported to date. Digital image correlation (DIC) provides full-field strain maps, making it possible to calculate the J-integral value. The aim of this study was to measure the J-integral value for CAD/CAM restorative materials. Four types of materials (sintered IPS E-MAX CAD, non-sintered IPS E-MAX CAD, Vita Mark II and Paradigm MZ100) were used to prepare beam samples for three-point bending tests. J-integrals were calculated for different integral path sizes and locations with respect to the crack tip. The J-integral at path 1 for each material was 1.26±0.31×10^-4 MPa·m for MZ100, 0.59±0.28×10^-4 MPa·m for sintered E-MAX, 0.19±0.07×10^-4 MPa·m for VM II, and 0.21±0.05×10^-4 MPa·m for non-sintered E-MAX. There were no significant differences between different integral path sizes, except for the non-sintered E-MAX group. J-integral paths of non-sintered E-MAX located within 42% of the height of the sample provided consistent values, whereas paths outside this range resulted in lower J-integral values. Moreover, no significant difference was found among different integral path locations. The critical SIF was calculated from the J-integral (KJ) along with geometry-derived SIF values (KI). KI values were comparable with KJ and with geometry-based SIF values obtained from the literature. Therefore, the DIC-derived J-integral is a reliable way to assess the fracture toughness of small-sized specimens of dental CAD/CAM restorative materials, with caution applied to the selection of the J-integral path. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Digital movie-based automatic titrations.

    PubMed

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "digital movie-based automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the hue (H) values for each frame. The Pearson correlation coefficient (r) is calculated between the H values of the initial frame and those of each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was demonstrated by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test at the 95% confidence level was applied. The proposed method is able to process about 117 and 128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%. Copyright © 2015 Elsevier B.V. All rights reserved.
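    The frame-comparison step can be sketched as follows. This is a simplified Python sketch of the idea (the original runs in C in real time), the helper names are ours, and locating the end point at a zero crossing of the second derivative is one common reading of "the second derivative method".

```python
import colorsys
import numpy as np

def hue_map(frame):
    """Per-pixel hue (0-1) of an (h, w, 3) RGB array with 0-255 values."""
    flat = frame.reshape(-1, 3) / 255.0
    return np.array([colorsys.rgb_to_hsv(*px)[0] for px in flat])

def titration_curve(frames):
    """Pearson r between the first frame's hue map and each frame's."""
    h0 = hue_map(frames[0])
    return np.array([np.corrcoef(h0, hue_map(f))[0, 1] for f in frames])

def endpoint_time(times, r_values):
    """End point as the zero crossing of the second derivative of r(t)."""
    d2 = np.gradient(np.gradient(r_values, times), times)
    k = int(np.argmax(d2[:-1] * d2[1:] < 0))   # first sign change of d2
    return 0.5 * (times[k] + times[k + 1])
```

Plotting the r values against the titrant-valve opening time reproduces the sigmoidal titration curve, and the colour transition of the indicator shows up as the drop in r.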

  19. 19 CFR 351.408 - Calculation of normal value of merchandise from nonmarket economy countries.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Calculation of normal value of merchandise from nonmarket economy countries. 351.408 Section 351.408 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE ANTIDUMPING AND COUNTERVAILING DUTIES Calculation of Export Price, Constructed Export Price, Fair Value, and Normal Value §...

  20. Approximate relations and charts for low-speed stability derivatives of swept wings

    NASA Technical Reports Server (NTRS)

    Toll, Thomas A; Queijo, M J

    1948-01-01

    Contains derivations, based on a simplified theory, of approximate relations for the low-speed stability derivatives of swept wings. The method accounts for the effects of sweep and, in most cases, taper ratio. Charts, based on the derived relations, are presented for the stability derivatives of untapered swept wings. Calculated values of the derivatives are compared with experimental results.
