Sample records for calculated results based

  1. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in the case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine the results: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%/3 mm for over 98% of the volume in the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
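    The three-step combination described in this abstract can be sketched in a few lines. This is a toy illustration with hypothetical 1D dose arrays, not the authors' implementation:

    ```python
    import numpy as np

    def commission_delta(measured, model_calc):
        """Tabulate measurement-minus-model differences that drive the
        measurement-based ΔDRT term (commissioning step)."""
        return measured - model_calc

    def hybrid_dose(d_model, d_delta_drt):
        """Step 3 of the abstract: D = D_model + D_ΔDRT."""
        return d_model + d_delta_drt

    # Hypothetical water-phantom depth doses and CCCS results, same setup:
    measured = np.array([1.00, 0.95, 0.90])
    cccs = np.array([0.98, 0.96, 0.88])
    delta = commission_delta(measured, cccs)
    dose = hybrid_dose(cccs, delta)  # equals the measurement in this trivial case
    ```

    In practice the ΔDRT term would itself be a ray-tracing calculation commissioned from the tabulated differences, so the correction generalizes beyond the measured setups.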

  2. One-electron oxidation of individual DNA bases and DNA base stacks.

    PubMed

    Close, David M

    2010-02-04

    In calculations performed with DFT there is a tendency of the purine cation to be delocalized over several bases in the stack. Attempts have been made to see if methods other than DFT can be used to calculate localized cations in stacks of purines, and to relate the calculated hyperfine couplings with known experimental results. To calculate reliable hyperfine couplings it is necessary to have an adequate description of spin polarization which means that electron correlation must be treated properly. UMP2 theory has been shown to be unreliable in estimating spin densities due to overestimates of the doubles correction. Therefore attempts have been made to use quadratic configuration interaction (UQCISD) methods to treat electron correlation. Calculations on the individual DNA bases are presented to show that with UQCISD methods it is possible to calculate hyperfine couplings in good agreement with the experimental results. However these UQCISD calculations are far more time-consuming than DFT calculations. Calculations are then extended to two stacked guanine bases. Preliminary calculations with UMP2 or UQCISD theory on two stacked guanines lead to a cation localized on a single guanine base.

  3. Detection and quantification system for monitoring instruments

    DOEpatents

    Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA]

    2008-08-12

    A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
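    The claimed steps map directly onto a short routine. The sketch below assumes a simple mean/standard-deviation choice for the baseline and variation measures, a detail the abstract leaves open:

    ```python
    import statistics

    def alarm_bounds(recent_signals, threshold_factor=3.0):
        """Baseline plus or minus the allowable deviation, per the claimed steps."""
        baseline = statistics.mean(recent_signals)     # expected baseline value
        sample_dev = statistics.stdev(recent_signals)  # measure of noise/variation
        allowable = threshold_factor * sample_dev      # allowable deviation
        return baseline - allowable, baseline + allowable

    def is_real_event(signal, recent_signals, threshold_factor=3.0):
        """Flag the signal if it falls outside the alarm thresholds."""
        low, high = alarm_bounds(recent_signals, threshold_factor)
        return signal < low or signal > high

    recent = [10.0, 11.0, 9.0, 10.0, 10.0]  # hypothetical recent signal results
    ```

    For this data, `is_real_event(15.0, recent)` is True while `is_real_event(10.5, recent)` is False, since the thresholds sit roughly 2.1 units either side of the baseline of 10.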

  4. SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y

    Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run an MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute the plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.

  5. Calculations of a wideband metamaterial absorber using equivalent medium theory

    NASA Astrophysics Data System (ADS)

    Huang, Xiaojun; Yang, Helin; Wang, Danqi; Yu, Shengqing; Lou, Yanchao; Guo, Ling

    2016-08-01

    Metamaterial absorbers (MMAs) have drawn increasing attention in many areas due to the fact that they can absorb electromagnetic (EM) waves with near-unity absorptivity. We demonstrate the design, simulation, experiment and calculation of a wideband MMA based on a double-square-loop (DSL) array loaded with chip resistors. For a normally incident EM wave, the simulated results show that the full width at half maximum of the absorption band is about 9.1 GHz, corresponding to a relative bandwidth of 87.1%. Experimental results are in agreement with the simulations. More importantly, equivalent medium theory (EMT) is utilized to calculate the absorption of the DSL MMA, and the calculated absorption based on EMT agrees with the simulated and measured results. The EMT-based method provides a new way to analyze the mechanism of MMAs.

  6. Variation Among Internet Based Calculators in Predicting Spontaneous Resolution of Vesicoureteral Reflux

    PubMed Central

    Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.

    2010-01-01

    Purpose An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550

  7. Effect of BrU on the transition between wobble Gua-Thy and tautomeric Gua-Thy base-pairs: ab initio molecular orbital calculations

    NASA Astrophysics Data System (ADS)

    Nomura, Kazuya; Hoshino, Ryota; Hoshiba, Yasuhiro; Danilov, Victor I.; Kurita, Noriyuki

    2013-04-01

    We investigated transition states (TS) between the wobble Guanine-Thymine (wG-T) and tautomeric G-T base-pairs, as well as Br-containing base-pairs, by MP2 and density functional theory (DFT) calculations. The TS obtained between wG-T and G*-T (the asterisk denotes an enol form of the base) differs from the TS obtained by the previous DFT calculation. The activation energy (17.9 kcal/mol) evaluated by our calculation is significantly smaller than that (39.21 kcal/mol) obtained by the previous calculation, indicating that our TS is preferable. In contrast, the obtained TS and activation energy between wG-T and G-T* are similar to those obtained by the previous DFT calculation. We furthermore found that the activation energy between wG-BrU and tautomeric G-BrU is smaller than that between wG-T and tautomeric G-T. This result indicates that replacing the CH3 group of T with Br increases the probability of the transition reaction producing the enol-form G* and T* bases. Because G* prefers to bind to T rather than to C, and T* prefers G rather than A, our calculated results suggest that the spontaneous mutation from C to T or from A to G is accelerated by the introduction of the wG-BrU base-pair.

  8. Simulation and analysis of main steam control system based on heat transfer calculation

    NASA Astrophysics Data System (ADS)

    Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai

    2018-05-01

    In this paper, a 300 MW boiler of a thermal power plant was studied. MATLAB was used to write a calculation program for the heat transfer process between the main steam and the boiler flue gas, and the amount of water required to keep the main steam temperature at the target value was calculated. The heat transfer calculation program was then introduced into the Simulink simulation platform to build a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis characteristics of the main steam temperature, but also adapts to changes in boiler load.

  9. [Interactions of DNA bases with individual water molecules. Molecular mechanics and quantum mechanics computation results vs. experimental data].

    PubMed

    Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I

    2013-01-01

    To elucidate details of DNA-water interactions, we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one DNA base and one or two water molecules. The results of calculations using two molecular mechanics (MM) force fields and the correlated ab initio quantum mechanics (QM) method MP2/6-31G(d,p) have been compared with one another and with experimental data. The calculations demonstrated qualitative agreement between the geometry characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base, forming hydrogen bonds with both. Nevertheless, the relative depth of some minima and the peculiarities of the mutual water-base positions in these minima depend on the method used. The analysis revealed that some of the differences between the methods are insignificant for the description of DNA hydration, while others are important. The MM calculations enable us to reproduce quantitatively all the experimental data on the enthalpies of complex formation of a single water molecule with the set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in crystals of DNA duplex fragments, while some of these data cannot be rationalized by the QM calculations.

  10. Determination of water pH using absorption-based optical sensors: evaluation of different calculation methods

    NASA Astrophysics Data System (ADS)

    Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin

    2017-02-01

    Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
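    As a concrete illustration of the dual-wavelength absorbance ratio idea, a common ratiometric form relates the ratio R = A(λ1)/A(λ2) to pH through a Henderson-Hasselbalch-type calibration. The constants below are hypothetical placeholders that would be fitted for the actual phenol red film, and this simplified form is not necessarily the exact method used in the paper:

    ```python
    import math

    def ph_from_absorbance_ratio(R, pKa=7.7, R_acid=0.10, R_base=2.50):
        """Ratiometric pH readout: pH = pKa + log10((R - R_acid) / (R_base - R)).
        R is the absorbance ratio A(lambda1)/A(lambda2); pKa, R_acid and R_base
        are hypothetical calibration constants for the indicator film."""
        return pKa + math.log10((R - R_acid) / (R_base - R))

    # At the midpoint ratio the readout equals the indicator's apparent pKa:
    mid_ratio = (0.10 + 2.50) / 2
    ```

    Because the ratio of two absorbances cancels common-mode drifts in source intensity and film thickness, this kind of readout tends to be more stable in long-term measurement than a single-wavelength calibration, consistent with the abstract's conclusion.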

  11. A hypersonic aeroheating calculation method based on inviscid outer edge of boundary layer parameters

    NASA Astrophysics Data System (ADS)

    Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin

    2016-12-01

    This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. The main innovation is combining the accuracy of the numerical method with the efficiency of the engineering method, which makes aeroheating simulation both more precise and faster. Based on Prandtl boundary layer theory, the entire flow field is divided at the outer edge of the boundary layer into inviscid and viscous regions. The parameters at the outer edge of the boundary layer are calculated numerically by assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived and the heat flux is then obtained. The results for the double cone show that at 0° and 10° angles of attack, the aeroheating calculation method based on inviscid outer-edge-of-boundary-layer parameters reproduces the experimental data better than the engineering method. The proposed method's results for the flight vehicle also reproduce the viscous numerical results well. Hence, this method provides a promising way to avoid the high cost of full numerical calculation while improving precision.

  12. Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.

    PubMed

    Gapsys, Vytautas; de Groot, Bert L

    2017-12-12

    Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .

  13. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    NASA Astrophysics Data System (ADS)

    Joachimiak, Damian; Krzyślak, Piotr

    2015-06-01

    This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different wear levels. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the computational domain, the size of the mesh, characterized by the parameter y+, has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from measurement and calculated numerically in a model seal segment at different levels of wear.

  14. Theoretical Evaluation of Electromagnetic Emissions from GSM900 Mobile Telephony Base Stations in the West Bank and Gaza Strip-Palestine.

    PubMed

    Lahham, Adnan; Alkbash, Jehad Abu; ALMasri, Hussien

    2017-04-20

    Theoretical assessments of power density under far-field conditions were used to evaluate the levels of environmental electromagnetic fields from selected GSM900 macrocell base stations in the West Bank and Gaza Strip. Assessments were based on calculating the power densities using commercially available software (RF-Map from Telstra Research Laboratories, Australia). Calculations were carried out for single base stations with multiantenna systems and also for multiple base stations with multiantenna systems at 1.7 m above ground level. More than 100 power density levels were calculated at different locations around the investigated base stations. These locations include areas accessible to the general public (schools, parks, residential areas, streets and areas around kindergartens). The maximum calculated electromagnetic emission level resulting from a single site was 0.413 μW cm-2, found at Hizma town near Jerusalem. The average maximum power density from all single sites was 0.16 μW cm-2. The power density levels calculated at 100 locations distributed over the West Bank and Gaza were nearly normally distributed, with a peak value of ~0.01% of the International Commission on Non-Ionizing Radiation Protection's limit recommended for the general public. Comparison between the calculated and experimentally measured values of maximum power density from a base station showed that the calculations overestimate the actual measured power density by ~27%. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
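    Far-field assessments of this kind typically reduce to the free-space relation S = P·G/(4πd²). A minimal sketch follows, with made-up transmitter numbers; it ignores the antenna pattern, ground reflections and feeder losses that a tool like RF-Map would account for:

    ```python
    import math

    def power_density_uw_cm2(tx_power_w, antenna_gain_dbi, distance_m):
        """Free-space far-field power density S = P*G / (4*pi*d^2),
        converted to uW/cm^2. Antenna pattern, reflections and losses
        are deliberately ignored in this sketch."""
        gain_linear = 10 ** (antenna_gain_dbi / 10)
        s_w_per_m2 = tx_power_w * gain_linear / (4 * math.pi * distance_m ** 2)
        return s_w_per_m2 * 100.0  # 1 W/m^2 = 100 uW/cm^2

    # Hypothetical: 20 W into an isotropic (0 dBi) antenna, observed at 10 m.
    s = power_density_uw_cm2(20.0, 0.0, 10.0)
    ```

    For these numbers S is about 1.6 μW cm-2; real sector antennas concentrate power in a narrow beam, so ground-level values near a mast are usually far lower, as the sub-microwatt results above reflect.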

  15. Environment-based pin-power reconstruction method for homogeneous core calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-07-01

    Core calculation schemes are usually based on a classical two-step approach comprising assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)

  16. Model Comparisons For Space Solar Cell End-Of-Life Calculations

    NASA Astrophysics Data System (ADS)

    Messenger, Scott; Jackson, Eric; Warner, Jeffrey; Walters, Robert; Evans, Hugh; Heynderickx, Daniel

    2011-10-01

    Space solar cell end-of-life (EOL) calculations are performed over a wide range of space radiation environments for GaAs-based single and multijunction solar cell technologies. Two general semi-empirical approaches were used to generate these EOL calculation results: 1) the JPL equivalent fluence (EQFLUX) and 2) the NRL displacement damage dose (SCREAM). This paper also includes the first results using the Monte Carlo-based version of SCREAM, called MC-SCREAM, which is now freely available online as part of the SPENVIS suite of programs.

  17. GIAO-DFT calculation of 15N NMR chemical shifts of Schiff bases: Accuracy factors and protonation effects.

    PubMed

    Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B

    2018-02-09

    15N NMR chemical shifts in a representative series of Schiff bases, together with their protonated forms, have been calculated at the density functional theory level and compared with available experiment. A number of functionals and basis sets have been tested in terms of better agreement with experiment. Complementary to the gas-phase results, two solvation models were examined: a classical Tomasi polarizable continuum model (PCM), and that model combined with explicit inclusion of one solvent molecule in the calculation space to form a 1:1 supermolecule (SM + PCM). The best results are achieved with the PCM and SM + PCM models, giving mean absolute errors of the calculated 15N NMR chemical shifts over the whole series of neutral and protonated Schiff bases of 5.2 and 5.8 ppm, respectively, as compared with 15.2 ppm in the gas phase, for a shift range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.

  18. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy between the out-of-field dose profiles calculated by AAA and by MC depends on depth and is generally less than 1% for the water-phantom comparisons and for the CT-based patient dose calculations with static fields and IMRT. For VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact of the error on the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose.

  19. Analysis of the bond-valence method for calculating 29Si and 31P magnetic shielding in covalent network solids.

    PubMed

    Holmes, Sean T; Alkan, Fahri; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil

    2016-07-05

    29Si and 31P magnetic-shielding tensors in covalent network solids have been evaluated using periodic and cluster-based calculations. The cluster-based computational methodology employs pseudoatoms to reduce the net charge (resulting from missing co-ordination on the terminal atoms) through valence modification of terminal atoms using bond-valence theory (VMTA/BV). The magnetic-shielding tensors computed with the VMTA/BV method are compared to magnetic-shielding tensors determined with the periodic GIPAW approach. The cluster-based all-electron calculations agree with experiment better than the GIPAW calculations, particularly for predicting absolute magnetic shielding and for predicting chemical shifts. The performance of the DFT functionals CA-PZ, PW91, PBE, rPBE, PBEsol, WC, and PBE0 is assessed for the prediction of 29Si and 31P magnetic-shielding constants. Calculations using the hybrid functional PBE0, in combination with the VMTA/BV approach, result in excellent agreement with experiment. © 2016 Wiley Periodicals, Inc.

  20. Fast 3D dosimetric verifications based on an electronic portal imaging device using a GPU calculation engine.

    PubMed

    Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei

    2015-04-11

    To use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and in the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans, based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in mean dose were less than 1.8%, and the differences in maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of the 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
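    The gamma evaluation mentioned here combines a dose-difference criterion with a distance-to-agreement criterion. A brute-force 1D version, far simpler than the GPU implementation the abstract describes, can be sketched as:

    ```python
    import numpy as np

    def gamma_index_1d(ref_dose, eval_dose, dx_cm, dose_tol=0.03, dta_cm=0.3):
        """Brute-force 1D gamma index with global dose normalization.
        dose_tol is a fraction of the reference maximum (e.g. 3%);
        dta_cm is the distance-to-agreement criterion (e.g. 3 mm).
        Illustrative only, not the GPU algorithm itself."""
        d_max = ref_dose.max()
        x = np.arange(ref_dose.size) * dx_cm
        gamma = np.empty(ref_dose.size)
        for i in range(ref_dose.size):
            dose_term = (eval_dose - ref_dose[i]) / (dose_tol * d_max)
            dist_term = (x - x[i]) / dta_cm
            # gamma = minimum over all comparison points of the combined metric
            gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
        return gamma

    ref = np.array([1.0, 0.8, 0.5, 0.2])          # hypothetical profile
    g = gamma_index_1d(ref, ref.copy(), dx_cm=0.1)  # identical fields
    ```

    A point passes the test when its gamma value is at most 1; for identical distributions every point gives gamma = 0. The GPU speed-up in the paper comes from evaluating this search in parallel over a full 3D grid.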

  1. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy.

    PubMed

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

    2011-11-21

    We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC was 12.30-16.00 times faster than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.

  2. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and to optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly with the lattice code HELIOS; these calculations are conducted in 2D at the fuel assembly level. It is also possible to calculate these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in the nuclear data. It is therefore useful to compare the results of full-core calculations based on two sets of diffusion data obtained by Serpent calculations with the ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based both on the assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core; the level of difference that results exclusively from the nuclear data selection can help in understanding the inherent uncertainties of such full-core calculations.

  3. The influence of chemical mechanisms on PDF calculations of non-premixed turbulent flames

    NASA Astrophysics Data System (ADS)

    Pope, Stephen B.

    2005-11-01

    A series of calculations of the Barlow & Frank non-premixed piloted jet flames D, E and F is reported, with the aim of determining the level of description of the chemistry necessary to account accurately for the turbulence-chemistry interactions observed in these flames. The calculations are based on the modeled transport equation for the joint probability density function of velocity, turbulence frequency and composition (enthalpy and species mass fractions). Seven chemical mechanisms for methane are investigated, ranging from a five-step reduced mechanism to the 53-species GRI 3.0 mechanism. The results show that, for C-H-O species, accurate results are obtained with the GRI 2.11 and GRI 3.0 mechanisms, as well as with 12- and 15-step reduced mechanisms based on GRI 2.11. But significantly inaccurate calculations result from use of the 5-step reduced mechanism (based on GRI 2.11), and from two different 16-species skeletal mechanisms. As has previously been observed, GRI 3.0 over-predicts NO by up to a factor of two, whereas NO is calculated reasonably accurately by GRI 2.11 and the 15-step reduced mechanism.

  4. Calculation of conductivities and currents in the ionosphere

    NASA Technical Reports Server (NTRS)

    Kirchhoff, V. W. J. H.; Carpenter, L. A.

    1975-01-01

    Formulas and procedures to calculate ionospheric conductivities are summarized. Ionospheric currents are calculated using a semidiurnal E-region neutral wind model and electric fields from measurements at Millstone Hill. The results agree well with ground based magnetogram records for magnetic quiet days.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Liu, B; Liang, B

    Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields made possible by the multi-leaf collimator system recently introduced on the CyberKnife M6. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this work is to develop a GPU-based fast collapsed-cone convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in the beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The difference between measured and calculated TMR is less than 1% for all collimators except in the build-up regions. The calculated profiles also show good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; compared against the Monte Carlo method for heterogeneous cases, it showed better dose calculation accuracy than the Ray-tracing algorithm. The dose calculation takes a few seconds per beam, depending on collimator size and dose calculation grid.
    Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system; it proved efficient and accurate for clinical purposes and can be easily implemented in a TPS.
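    As a rough illustration of the poly-energetic TERMA step described above, the following sketch accumulates TERMA along a single ray. The two-bin spectrum, attenuation coefficients, and function name are hypothetical placeholders, not values from the abstract:

```python
import numpy as np

def terma_along_ray(depths_cm, spectrum, psi0=1.0):
    """Poly-energetic TERMA along a ray:
    T(d) = sum_E E * (mu/rho)_E * Psi_E(d), with exponentially
    attenuated fluence Psi_E(d) = psi0 * w_E * exp(-mu_E * d).
    `spectrum` maps energy (MeV) -> (weight, mu [1/cm], mu/rho [cm^2/g])."""
    t = np.zeros_like(depths_cm, dtype=float)
    for energy, (w, mu, mu_rho) in spectrum.items():
        fluence = psi0 * w * np.exp(-mu * depths_cm)  # attenuated fluence
        t += energy * mu_rho * fluence                # energy released per unit mass
    return t

# Hypothetical two-bin spectrum (weights sum to 1); coefficients are illustrative.
spectrum = {2.0: (0.6, 0.049, 0.049), 6.0: (0.4, 0.028, 0.028)}
depths = np.linspace(0.0, 20.0, 5)
print(terma_along_ray(depths, spectrum))
```

    The real engine would trace such rays through a CT density grid and then superpose the pre-calculated EDKs along the 192 collapsed-cone directions.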

  6. The development of android - based children's nutritional status monitoring system

    NASA Astrophysics Data System (ADS)

    Suryanto, Agus; Paramita, Octavianti; Pribadi, Feddy Setio

    2017-03-01

    The calculation of BMI (Body Mass Index) is one of the methods used to assess a person's nutritional status, but it is not yet widely understood by the public. In addition, parents should be able to follow the monthly progress of their child's nutritional development. Therefore, an Android-based application to determine the nutritional status of children was developed in this study, restricted to children aged 0-60 months. Because smartphones and tablet PCs running the Android operating system are developing rapidly and are widely owned and used, the application targets such devices. The aim of this study was to produce an Android app to calculate the nutritional status of children. The study followed a Research and Development (R&D) design with an experimental-study approach. The steps included analyzing the Body Mass Index (BMI) formula and developing the initial application, including the design and construction of the user interface using the Eclipse software. The study resulted in an Android application that calculates the nutritional status of children aged 0-60 months. The mean squared error (MSE) of the app's calculations against the BMI formula was 0, and the mean absolute percentage error (MAPE) was 0%, showing that the application introduces no calculation error; smaller MSE and MAPE values indicate higher accuracy.
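    The BMI formula at the core of the app is straightforward; a minimal sketch (the example values are illustrative, and mapping a BMI to a nutritional status category would additionally require age- and sex-specific growth reference tables, which the abstract does not detail):

```python
def bmi(weight_kg, height_cm):
    """Body Mass Index: weight (kg) divided by the square of height (m)."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

# A hypothetical 24-month-old weighing 12 kg at 87 cm:
print(round(bmi(12.0, 87.0), 2))  # → 15.85
```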

  7. On the binding of indeno[1,2-c]isoquinolines in the DNA-topoisomerase I cleavage complex.

    PubMed

    Xiao, Xiangshu; Antony, Smitha; Pommier, Yves; Cushman, Mark

    2005-05-05

    An ab initio quantum mechanics calculation is reported which predicts the orientation of indenoisoquinoline 4 in the ternary cleavage complex formed from DNA and topoisomerase I (top1). The results of this calculation are consistent with the hypothetical structures previously proposed for the indenoisoquinoline-DNA-top1 ternary complexes based on molecular modeling, the crystal structure of a recently reported ternary complex, and the biological results obtained with a pair of diaminoalkyl-substituted indenoisoquinoline enantiomers. The results of these studies indicate that the pi-pi stacking interactions between the indenoisoquinolines and the neighboring DNA base pairs play a major role in determining binding orientation. The calculation of the electrostatic potential surface maps of the indenoisoquinolines and the adjacent DNA base pairs shows electrostatic complementarity in the observed binding orientation, leading to the conclusion that electrostatic attraction between the intercalators and the base pairs in the cleavage complex plays a major stabilizing role. On the other hand, the calculation of LUMO and HOMO energies of indenoisoquinoline 13b and neighboring DNA base pairs in conjunction with NBO analysis indicates that charge transfer complex formation plays a relatively minor role in stabilizing the ternary complexes derived from indenoisoquinolines, DNA, and top1. The results of these studies are important in understanding the existing structure-activity relationships for the indenoisoquinolines as top1 inhibitors and as anticancer agents, and they will be important in the future design of indenoisoquinoline-based top1 inhibitors.

  8. Cost estimation using ministerial regulation of public work no. 11/2013 in construction projects

    NASA Astrophysics Data System (ADS)

    Arumsari, Putri; Juliastuti; Khalifah Al'farisi, Muhammad

    2017-12-01

    One of the first tasks in starting a construction project is to estimate the total cost of building the project. In Indonesia there are several standards used to calculate a project's cost estimate; one of them is based on the Ministerial Regulation of Public Work No. 11/2013. In practice, however, contractors often prepare their own cost estimates based on their own calculations. This research compared construction project total costs calculated according to the Ministerial Regulation of Public Work No. 11/2013 against the contractors' calculations. Two projects, built by two different contractors, were used as case studies: a 4-storey building located in the Pantai Indah Kapuk area (West Jakarta) and a warehouse located in Sentul (West Java). The cost estimates from both contractors' calculations were compared to those based on the Ministerial Regulation of Public Work No. 11/2013. The two calculations were found to differ by about 1.80%-3.03% in total cost, with the estimate based on the Ministerial Regulation higher than the contractors' calculations.

  9. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks can be reduced effectively by optimizing the setting calculation sequence. A new method for sequencing the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), is proposed to solve problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships among relays, which can lead to more iterative setting-calculation workload. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples are calculated, with results indicating that the method effectively reduces the number of forcibly broken edges in protection setting calculation for multi-loop networks.

  10. Ab-initio study on the absorption spectrum of color change sapphire based on first-principles calculations with considering lattice relaxation-effect

    NASA Astrophysics Data System (ADS)

    Novita, Mega; Nagoshi, Hikari; Sudo, Akiho; Ogasawara, Kazuyoshi

    2018-01-01

    In this study, we investigated the α-Al2O3:V3+ material, the so-called color change sapphire, based on first-principles calculations without reference to any experimental parameter. The molecular orbital (MO) structure was estimated by one-electron MO calculations using the discrete variational-Xα (DV-Xα) method. Next, the absorption spectra were estimated by many-electron calculations using the discrete variational multi-electron (DVME) method. The effect of lattice relaxation on the crystal structures was estimated from first-principles band structure calculations: we performed geometry optimizations on pure α-Al2O3 and on α-Al2O3 with the V3+ impurity using the Cambridge Serial Total Energy Package (CASTEP) code. The effect of energy corrections, such as the configuration dependence correction and the correlation correction, was also investigated in detail. The results revealed that the structural change in α-Al2O3:V3+ resulting from the geometry optimization improved the calculated absorption spectra, and that combining the lattice relaxation effect with the energy corrections further improves agreement with experiment.

  11. Application of Neural Network Technologies for Price Forecasting in the Liberalized Electricity Market

    NASA Astrophysics Data System (ADS)

    Gerikh, Valentin; Kolosok, Irina; Kurbatsky, Victor; Tomin, Nikita

    2009-01-01

    The paper presents the results of experimental studies on the calculation of electricity prices in different price zones in Russia and Europe. The calculations are based on the intelligent software "ANAPRO", which implements approaches based on modern methods of data analysis and artificial intelligence technologies.

  12. Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases

    NASA Astrophysics Data System (ADS)

    Morifuji, Masato

    2018-01-01

    We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane wave basis functions, a large number of plane waves is often required to obtain precise results, and even with state-of-the-art techniques the Hamiltonian matrix can become very large. The computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases, and demonstrate numerically that a significant speedup of eigenstate evaluation is achieved without loss of accuracy.
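    The abstract does not give the reduction formula, but folding high-energy bases into an effective low-energy block is commonly done by Löwdin partitioning, H_eff(E) = H_LL + H_LH (E − H_HH)⁻¹ H_HL. A minimal numerical sketch under that assumption (the toy matrix is illustrative, not from the paper):

```python
import numpy as np

def downfold(h, n_low, energy):
    """Löwdin partitioning: fold the high-energy block of a Hermitian
    Hamiltonian into an energy-dependent effective low-energy block,
    H_eff(E) = H_LL + H_LH (E*I - H_HH)^(-1) H_HL."""
    hll = h[:n_low, :n_low]
    hlh = h[:n_low, n_low:]
    hhl = h[n_low:, :n_low]
    hhh = h[n_low:, n_low:]
    g = np.linalg.inv(energy * np.eye(hhh.shape[0]) - hhh)  # resolvent of high block
    return hll + hlh @ g @ hhl

# Toy 4x4 Hamiltonian: two low-energy and two high-energy plane-wave bases.
h = np.array([[1.0, 0.2, 0.1, 0.0],
              [0.2, 1.5, 0.0, 0.1],
              [0.1, 0.0, 9.0, 0.3],
              [0.0, 0.1, 0.3, 10.0]])
heff = downfold(h, n_low=2, energy=1.2)
print(heff.shape)  # (2, 2)
```

    Diagonalizing the small `heff` instead of the full matrix is what produces the speedup the paper reports.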

  13. [Raman, FTIR spectra and normal mode analysis of acetanilide].

    PubMed

    Liang, Hui-Qin; Tao, Ya-Ping; Han, Li-Gang; Han, Yun-Xia; Mo, Yu-Jun

    2012-10-01

    The Raman and FTIR spectra of acetanilide (ACN) were measured experimentally in the regions of 3 500-50 and 3 500-600 cm(-1), respectively. The equilibrium geometry and vibrational frequencies of ACN were calculated with the density functional theory (DFT) method (B3LYP/6-311G(d, p)). The calculated molecular structure parameters are in good agreement with previous reports and better than those calculated with the 6-31G(d) basis set, and the calculated frequencies agree well with the experimental ones. The potential energy distribution of each frequency was worked out by normal mode analysis, and on this basis a detailed and accurate assignment of the vibrational frequencies of ACN was obtained.

  14. Group additivity calculations of the thermodynamic properties of unfolded proteins in aqueous solution: a critical comparison of peptide-based and HKF models.

    PubMed

    Hakin, A W; Hedwig, G R

    2001-02-15

    A recent paper in this journal [Amend and Helgeson, Biophys. Chem. 84 (2000) 105] presented a new group additivity model to calculate various thermodynamic properties of unfolded proteins in aqueous solution. The parameters given for the revised Helgeson-Kirkham-Flowers (HKF) equations of state for all the constituent groups of unfolded proteins can be used, in principle, to calculate the partial molar heat capacity, C°p,2, and volume, V°2, at infinite dilution of any polypeptide. Values of C°p,2 and V°2 for several polypeptides have been calculated to test the predictive utility of the HKF group additivity model. The results obtained are in very poor agreement with experimental data, and also with results calculated using a peptide-based group additivity model. A critical assessment of these two additivity models is presented.

  15. Neutron-induced reactions on AlF3 studied using the optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Lv, Cui-Juan; Zhang, Guo-Qiang; Wang, Hong-Wei; Zuo, Jia-Xu

    2015-08-01

    Neutron-induced reactions on 27Al and 19F nuclei are investigated using the optical model implemented in the TALYS 1.4 toolkit. Incident neutron energies in a wide range from 0.1 keV to 30 MeV are calculated. The cross sections for the main channels (n, np), (n, p), (n, α), (n, 2n), and (n, γ) and the total reaction cross section (n, tot) of the reactions are obtained. When the default parameters in TALYS 1.4 are adopted, the calculated results agree with the measured results. Based on the calculated results for the n + 27Al and n + 19F reactions, the results of the n + 27Al19F reactions are predicted. These results are useful both for the design of thorium-based molten salt reactors and for neutron activation analysis techniques.

  16. The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation

    PubMed Central

    Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt

    2010-01-01

    Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. The model is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-scientific methods, and a spline-based interpolation method efficiently incorporates data from corneal topographic measurements. Geometrical optical properties, such as the wavefront aberration, are simulated with real ray-tracing using Snell's law, and optical components can be calculated using numerical optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL, the lower the residual wavefront error. Spherical IOLs are only able to correct defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into a device. Conclusions The individual virtual eye allows simulations and calculations in geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as shown here by calculating customized aspheric IOLs.
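    The "real ray-tracing using Snell's law" mentioned in Methods can be sketched in vector form. This is a generic refraction routine, not the authors' implementation; the cornea-like refractive index and 30° incidence are illustrative assumptions:

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a ray at a surface using the vector form of Snell's law.
    `direction` points into the surface, `normal` points back toward the
    incoming medium. Returns None on total internal reflection."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Ray entering a cornea-like medium (n = 1.376) from air at 30 degrees:
d = np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0))])
out = refract(d, np.array([0.0, 1.0]), 1.0, 1.376)
print(out)
```

    Tracing many such rays through each optical surface and accumulating optical path lengths is what yields the wavefront aberration used to optimize the IOL geometry.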

  17. Computer programming for nucleic acid studies. II. Total chemical shifts calculation of all protons of double-stranded helices.

    PubMed

    Giessner-Prettre, C; Ribas Prado, F; Pullman, B; Kan, L; Kast, J R; Ts'o, P O

    1981-01-01

    A FORTRAN computer program called SHIFTS is described. With SHIFTS, one can calculate the NMR chemical shifts of the proton resonances of single- and double-stranded nucleic acids of known sequence and predetermined conformation. The program can handle RNA and DNA with an arbitrary sequence drawn from four of the six base types A, U, G, C, I and T. Data files for the geometrical parameters are available for the A-, A'-, B-, D- and S-conformations. The positions of all the atoms are calculated using a modified version of the SEQ program [1]. Then, based on this defined geometry, three chemical shift effects exerted by the atoms of the neighboring nucleotides on the protons of each monomeric unit are calculated separately: the ring current shielding effect; the local atomic magnetic susceptibility effect (including both diamagnetic and paramagnetic terms); and the polarization or electric field effect. Results of the program are compared with experimental results for an (ApApGpCpUpU)2 helical duplex and with calculated results on this same helix based on model building of the A'-form and B-form and on a graphical procedure for evaluating the ring current effects.

  18. Tree value system: description and assumptions.

    Treesearch

    D.G. Briggs

    1989-01-01

    TREEVAL is a microcomputer model that calculates tree or stand values and volumes based on product prices, manufacturing costs, and predicted product recovery. It was designed as an aid in evaluating management regimes. TREEVAL calculates values in either of two ways, one based on optimized tree bucking using dynamic programming and one simulating the results of user-...

  19. The Lα (λ = 121.6 nm) solar plage contrasts calculations.

    NASA Astrophysics Data System (ADS)

    Bruevich, E. A.

    1991-06-01

    The results of calculations of Lα plage contrasts based on experimental data are presented. A three-component model of the Lα solar flux is applied, using "Prognoz-10" and SME daily smoothed values of the Lα solar flux. The resulting contrast values are discussed and compared with experimental values based on "Skylab" data.

  20. 39 CFR 3010.21 - Calculation of annual limitation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
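    The three-step procedure in this regulation (a recent 12-month simple average of CPI-U, a base average over the preceding 12 months, then their ratio minus 1) can be sketched directly; the CPI-U series below is illustrative, not from the regulation:

```python
def annual_limitation(cpi_u):
    """39 CFR 3010.21-style annual limitation: the simple average of the
    most recent 12 monthly CPI-U values (Recent Average) divided by the
    average of the 12 months before that (Base Average), minus 1."""
    if len(cpi_u) < 24:
        raise ValueError("need at least 24 monthly CPI-U values")
    recent = sum(cpi_u[-12:]) / 12.0
    base = sum(cpi_u[-24:-12]) / 12.0
    return recent / base - 1.0

# Illustrative CPI-U series rising 0.2 index points per month:
series = [230.0 + 0.2 * i for i in range(24)]
print(round(annual_limitation(series), 4))  # → 0.0104
```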

  1. 39 CFR 3010.21 - Calculation of annual limitation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...

  2. 39 CFR 3010.21 - Calculation of annual limitation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...

  3. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  4. 42 CFR 102.81 - Calculation of benefits for lost employment income.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...

  5. 42 CFR 102.81 - Calculation of benefits for lost employment income.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...

  6. 42 CFR 102.81 - Calculation of benefits for lost employment income.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...

  7. 42 CFR 102.81 - Calculation of benefits for lost employment income.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...

  8. 42 CFR 102.81 - Calculation of benefits for lost employment income.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... VACCINES SMALLPOX COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.81 Calculation of benefits... of work lost as a result of a covered injury or its health complications if the smallpox vaccine... based on the smallpox vaccine recipient or vaccinia contact's gross employment income, which includes...

  9. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, construction object geometry, and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method, using Case-Based Reasoning, are presented as well. The article presents local and global similarity measurements and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result.

  10. Electron transport in all-Heusler Co2CrSi/Cu2CrAl/Co2CrSi device, based on ab-initio NEGF calculations

    NASA Astrophysics Data System (ADS)

    Mikaeilzadeh, L.; Pirgholi, M.; Tavana, A.

    2018-05-01

    Using the ab initio non-equilibrium Green's function (NEGF) formalism based on density functional theory (DFT), we have studied electron transport in the all-Heusler device Co2CrSi/Cu2CrAl/Co2CrSi. Results show that the calculated transmission spectra are very sensitive to the structural parameters and the interface. We also obtain a range of spacer-layer thicknesses for which the MR effect is optimal. The calculations further show a perfect GMR effect in this device.

  11. Heats of Segregation of BCC Binaries from ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2004-01-01

    We compare dilute-limit heats of segregation for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent LMTO-based parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation, while the ab initio calculations are performed without relaxation. Results are discussed within the context of a segregation model driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  12. Theoretical and experimental NMR study of protopine hydrochloride isomers.

    PubMed

    Tousek, Jaromír; Malináková, Katerina; Dostál, Jirí; Marek, Radek

    2005-07-01

    The 1H and 13C NMR chemical shifts of cis- and trans-protopinium salts were measured and calculated. The calculations of the chemical shifts consisted of conformational analysis, geometry optimization (RHF/6-31G** method) and shielding constant calculations (B3LYP/6-31G** method). Based on the results of the quantum chemical calculations, the two sets of experimental chemical shifts were assigned to the particular isomers. According to the experimental results, the trans-isomer is more stable and its population is approximately 68%.

  13. A modeling approach to account for toxicokinetic interactions in the calculation of biological hazard index for chemical mixtures.

    PubMed

    Haddad, S; Tardif, R; Viau, C; Krishnan, K

    1999-09-05

    The biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture and is calculated by an equation similar to the conventional hazard index. At present, the BHI calculation is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consists of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model that accounts for the mechanism of interaction among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals: toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a BHI significantly exceeding 1 can be determined by iterative simulation.
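    Once the PBTK model has produced the biomarker concentrations, the BHI itself is a simple sum of concentration-to-BEI ratios, analogous to the conventional hazard index. The concentrations and BEI values below are hypothetical placeholders, not data from the study:

```python
def biological_hazard_index(simulated, bei):
    """BHI = sum over mixture components of
    (simulated biomarker concentration / biological exposure index).
    `simulated` would come from a PBTK model that accounts for
    interactions such as competitive metabolic inhibition."""
    return sum(simulated[c] / bei[c] for c in simulated)

# Hypothetical venous-blood concentrations and BEI values (same units):
sim = {"toluene": 0.4, "m-xylene": 0.9, "ethylbenzene": 0.6}
bei = {"toluene": 1.0, "m-xylene": 1.5, "ethylbenzene": 1.5}
print(round(biological_hazard_index(sim, bei), 2))  # → 1.4
```

    A BHI above 1 flags the combined biological burden as exceeding the tolerable level, which is why the iterative search targets mixtures with BHI ≤ 1.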

  14. On the validity of microscopic calculations of double-quantum-dot spin qubits based on Fock-Darwin states

    NASA Astrophysics Data System (ADS)

    Chan, GuoXuan; Wang, Xin

    2018-04-01

    We consider two typical approximations used in microscopic calculations of double-quantum-dot spin qubits, namely the Heitler-London (HL) and Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under a double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation is exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the exchange interaction, the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling in Fock-Darwin states, which are accurate only for a single potential well. Our results suggest that the accuracy of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation can only worsen in dimensions higher than one.

  15. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    NASA Astrophysics Data System (ADS)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has long been the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and the tortuosity fractal dimension of porous media is derived from a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with predictions from the analytical expression. In addition, the proposed fractal dimension method is tested on micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results also indicate a self-similar fractal range in sandstone when smaller pores are excluded.
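
    The general idea of extracting a fractal dimension from a pore size distribution can be sketched as follows; the power-law form and the synthetic data are illustrative stand-ins, not the paper's derivation.

```python
import numpy as np

# Sketch (not the paper's expression): in a fractal capillary model the
# cumulative number of pores with radius >= r scales as N(r) ~ r^(-Df),
# so Df follows from the slope of log N against log r.
def fractal_dimension_from_psd(radii, counts):
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return -slope

# A synthetic distribution generated with Df = 2.5 recovers the input value.
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
N = r ** -2.5
print(round(fractal_dimension_from_psd(r, N), 3))  # 2.5
```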

  16. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H; Barbee, D; Wang, W

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of the planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability of belonging to each tissue class, then assigning a CT HU as the probability-weighted sum of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT, using the centroids from classification of individual patient CBCT/CT data; and s2-CT, using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from the planning CT within 3%, while doses calculated with heterogeneity correction off, or on raw CBCT, show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations comparable to the CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
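
    The probability-weighted HU assignment can be sketched as below; the fuzzy-membership form and the five class centroids are illustrative assumptions, not the commissioned values from the study.

```python
import numpy as np

# Sketch of the probability-weighted HU assignment: each CBCT voxel gets
# membership probabilities for k tissue classes (here the standard fuzzy
# c-means membership form, standing in for the trained classifier) and
# its synthetic-CT HU is the probability-weighted sum of the class CT
# centroids. Centroid values are illustrative.
def synthetic_hu(cbct_value, cbct_centroids, ct_centroids, m=2.0):
    d = np.abs(cbct_value - cbct_centroids) + 1e-9
    w = d ** (-2.0 / (m - 1.0))        # fuzzy c-means membership weights
    p = w / w.sum()                    # normalized class probabilities
    return float(np.dot(p, ct_centroids))

cbct_c = np.array([-950.0, -700.0, -100.0, 50.0, 800.0])   # CBCT class centroids
ct_c = np.array([-1000.0, -750.0, -120.0, 40.0, 900.0])    # matched CT centroids
print(round(synthetic_hu(45.0, cbct_c, ct_c), 1))
```

    A voxel near a class centroid inherits essentially that class's CT HU, while intermediate voxels blend neighboring classes.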

  17. Calculation of thermal expansion coefficient of glasses based on topological constraint theory

    NASA Astrophysics Data System (ADS)

    Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi

    2016-10-01

    In this work, the thermal expansion behavior and structural configuration evolution of glasses were studied. The degree of freedom based on topological constraint theory is correlated with configuration evolution; considering the chemical composition and the configuration change, an analytical equation for calculating the thermal expansion coefficient of glasses from the degree of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) with this approach. The results showed that the approach is well suited to glass materials and reveals the underlying physics from the viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses, which lack periodic order.

  18. Fast calculation of the line-spread-function by transversal directions decoupling

    NASA Astrophysics Data System (ADS)

    Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra

    2016-07-01

    We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with line-shaped illumination (the ‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations than the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods through computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods show very good mutual agreement.
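
    The decoupling idea, treating one transverse direction independently with 1-D Fourier optics, might be sketched like this; the grid size and the rectangular pupil are illustrative.

```python
import numpy as np

# Sketch: with line-shaped illumination the 2-D problem separates, and
# the line-spread-function along the transverse direction is the squared
# magnitude of the 1-D Fourier transform of the pupil along that axis
# (Fourier-optics far field). Grid and pupil width are illustrative.
n = 1024
x = np.arange(n) - n // 2
pupil = (np.abs(x) < 64).astype(float)   # 1-D cut through a square pupil
field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
lsf = np.abs(field) ** 2
lsf /= lsf.max()
# The peak sits at the centre of the grid, as expected for an unaberrated pupil.
print(int(np.argmax(lsf)) == n // 2)  # True
```

    A single 1-D FFT per direction replaces the 2-D Bessel-function quadrature, which is where the reported speed-up comes from.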

  19. Errors in the Calculation of 27Al Nuclear Magnetic Resonance Chemical Shifts

    PubMed Central

    Wang, Xianlong; Wang, Chengfei; Zhao, Hui

    2012-01-01

    Computational chemistry is an important tool for signal assignment of 27Al nuclear magnetic resonance spectra in order to elucidate the species of aluminum(III) in aqueous solutions. The accuracy of the popular theoretical models for computing the 27Al chemical shifts was evaluated by comparing the calculated and experimental chemical shifts in more than one hundred aluminum(III) complexes. In order to differentiate the error due to the chemical shielding tensor calculation from that due to the inadequacy of the molecular geometry prediction, single-crystal X-ray diffraction determined structures were used to build the isolated molecule models for calculating the chemical shifts. The results were compared with those obtained using the calculated geometries at the B3LYP/6-31G(d) level. The isotropic chemical shielding constants computed at different levels have strong linear correlations even though the absolute values differ by tens of ppm. The root-mean-square difference between the experimental chemical shifts and the calculated values is approximately 5 ppm for the calculations based on the X-ray structures, but more than 10 ppm for the calculations based on the computed geometries. The results indicate that the popular theoretical models are adequate for calculating the chemical shifts, while an accurate molecular geometry is more critical. PMID:23203134

  20. Density functional theory calculations of III-N based semiconductors with mBJLDA

    NASA Astrophysics Data System (ADS)

    Gürel, Hikmet Hakan; Akıncı, Özden; Ünlü, Hilmi

    2017-02-01

    In this work, we present first-principles calculations based on the full-potential linearized augmented plane-wave method (FP-LAPW) of the structural and electronic properties of III-N semiconductors such as GaN, AlN, and InN in the zinc-blende cubic structure. First-principles calculations using the local density approximation (LDA) and the generalized gradient approximation (GGA) underestimate the band gap. We adopt the modified Becke-Johnson local density approximation (mBJLDA), which combines the modified Becke-Johnson exchange potential with the LDA correlation potential, to obtain band gaps in better agreement with experiment. We compared various exchange-correlation potentials (LSDA, GGA, HSE, and mBJLDA) in determining the band gaps and structural properties of these semiconductors, and show that the mBJLDA potential gives better agreement with experimental band gaps for III-N based semiconductors.

  1. Symmetric Resonance Charge Exchange Cross Section Based on Impact Parameter Treatment

    NASA Technical Reports Server (NTRS)

    Omidvar, Kazem; Murphy, Kendrah; Atlas, Robert (Technical Monitor)

    2002-01-01

    Using a two-state impact parameter approximation, a calculation has been carried out to obtain symmetric resonance charge transfer cross sections between nine ions and their parent atoms or molecules. Calculation is based on a two-dimensional numerical integration. The method is mostly suited for hydrogenic and some closed shell atoms. Good agreement has been obtained with the results of laboratory measurements for the ion-atom pairs H+-H, He+-He, and Ar+-Ar. Several approximations in a similar published calculation have been eliminated.

  2. Properties of solid and gaseous hydrogen, based upon anisotropic pair interactions

    NASA Technical Reports Server (NTRS)

    Etters, R. D.; Danilowicz, R.; England, W.

    1975-01-01

    Properties of H2 are studied on the basis of an analytic anisotropic potential deduced from atomic orbital and perturbation calculations. The low-pressure solid results are based on a spherical average of the anisotropic potential. The ground state energy and the pressure-volume relation are calculated. The metal-insulator phase transition pressure is predicted. Second virial coefficients are calculated for H2 and D2, as is the difference in second virial coefficients between ortho and para H2 and D2.
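
    A classical second-virial-coefficient integral of the kind used for such comparisons can be sketched as follows; the Lennard-Jones form is an illustrative stand-in for the paper's spherically averaged anisotropic H2 potential, and all quantities are in reduced units.

```python
import math

# Sketch of the classical second virial coefficient,
# B(T) = -2*pi * Int_0^inf (exp(-u(r)/kT) - 1) r^2 dr  (per molecule,
# reduced Lennard-Jones units), evaluated by simple quadrature.
def b2_reduced(t_star, rmax=10.0, n=20000):
    h = rmax / n
    s = 0.0
    for i in range(1, n + 1):
        r = i * h
        u_over_kt = 4.0 * (r ** -12 - r ** -6) / t_star   # LJ pair potential / kT
        s += (math.exp(-u_over_kt) - 1.0) * r * r
    return -2.0 * math.pi * s * h

# Below the Boyle temperature B is negative (attraction dominates);
# well above it B is positive (repulsion dominates).
print(b2_reduced(1.0) < 0.0 < b2_reduced(10.0))  # True
```

    The ortho/para differences computed in the paper would enter through different effective pair potentials in u(r).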

  3. Comparative evaluation of hemodynamic and respiratory parameters during mechanical ventilation with two tidal volumes calculated by demi-span based height and measured height in normal lungs

    PubMed Central

    Seresht, L. Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad

    2014-01-01

    Background: Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters under two conditions, receiving VTs calculated from body weight (BW) estimated either from measured height (HBW) or from demi-span-based height (DBW). Materials and Methods: This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) physical status I and II patients aged 18-65 years. Standing height and weight were measured, and height was then calculated with the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome network (ARDSNet) formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min each. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Results: Forty-nine patients were studied. Demi-span-based body weight, and thus VT (DTV), was lower than height-based body weight and VT (HTV) (P = 0.028), notably in male patients (P = 0.005). Differences were observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Conclusions: Estimated VT based on measured height is higher than that based on demi-span (a difference reported only in females), and this higher VT results in higher airway pressures during mechanical ventilation. PMID:24627845
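
    The two tidal-volume routes compared above can be sketched with the ARDSNet predicted-body-weight formula; the demi-span-to-height coefficients below are illustrative, since published coefficients vary by source.

```python
# Sketch of the two VT routes. The predicted-body-weight formula is the
# ARDSNet one; the demi-span-to-height coefficients are illustrative
# assumptions, not the study's equations.
def pbw(height_cm, male):
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)        # ARDSNet PBW, kg

def height_from_demispan(demispan_cm, male):
    a, b = (1.40, 57.8) if male else (1.35, 60.1)   # illustrative coefficients
    return a * demispan_cm + b

vt_ml_per_kg = 8.0
h_measured = 175.0                                   # cm, hypothetical patient
h_demispan = height_from_demispan(80.0, male=True)   # 169.8 cm
print(round(vt_ml_per_kg * pbw(h_measured, True)),
      round(vt_ml_per_kg * pbw(h_demispan, True)))   # 565 527
```

    Even a few centimetres of height discrepancy propagates into tens of millilitres of tidal volume, which is the clinical point of the comparison.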

  4. Theory study on the bandgap of antimonide-based multi-element alloys

    NASA Astrophysics Data System (ADS)

    An, Ning; Liu, Cheng-Zhi; Fan, Cun-Bo; Dong, Xue; Song, Qing-Li

    2017-05-01

    To meet the design requirements of high-performance antimonide-based optoelectronic devices, a spin-orbit splitting correction method for the bandgaps of Sb-based multi-element alloys is proposed. Based on an analysis of the band structure, a correction factor is introduced into the InxGa1-xAsySb1-y bandgap calculation to take spin-orbit coupling fully into account. In addition, InxGa1-xAsySb1-y films with different compositions were grown on GaSb substrates by molecular beam epitaxy (MBE), and the corresponding bandgaps were obtained by photoluminescence (PL) to test the accuracy and reliability of the new method. The results show that the calculated values agree well with the experimental results. To further verify the method, the bandgaps of a series of previously reported experimental samples were calculated. The error analysis reveals that the error rate α of the spin-orbit splitting correction method decreases to 2%, almost an order of magnitude smaller than that of the common method. This means the new method calculates antimonide multi-element alloys more accurately and is widely applicable. This work gives a reasonable interpretation of the reported results and is beneficial for tailoring antimonide properties and optoelectronic devices.

  5. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    PubMed

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in accuracy, latency, and refresh rate, its algorithms do not support base station expansion and handle occlusion of moving targets poorly; that is, they are unable to calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases involving occlusion. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors in total detect a signal, no matter from which base station. To verify the algorithm, official HTC base stations and independently developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning when only a few sensors detect the signal.
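
    The unified-dataset idea, pooling bearing rays from all base stations and solving jointly, can be illustrated with a simple least-squares triangulation; this is a sketch of the position part only, with illustrative geometry, not the paper's full pose solver.

```python
import numpy as np

# Sketch: each base station contributes rays toward sensors it can see;
# a point position is recovered by least squares over all rays jointly,
# so three rays suffice even if no single station sees three sensors.
def triangulate(origins, dirs):
    # Solve sum_i (I - d_i d_i^T)(p - o_i) = 0 for p (closest point to all rays).
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

target = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 0.0])]     # three stations, one ray each
dirs = [target - o for o in origins]      # noise-free bearings to the target
p = triangulate(origins, dirs)
print(np.allclose(p, target))  # True
```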

  6. An Improved Method of Pose Estimation for Lighthouse Base Station Extension

    PubMed Central

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-01-01

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in accuracy, latency, and refresh rate, its algorithms do not support base station expansion and handle occlusion of moving targets poorly; that is, they are unable to calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases involving occlusion. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors in total detect a signal, no matter from which base station. To verify the algorithm, official HTC base stations and independently developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning when only a few sensors detect the signal. PMID:29065509

  7. Development of a quantum mechanics-based free-energy perturbation method: use in the calculation of relative solvation free energies.

    PubMed

    Reddy, M Rami; Singh, U C; Erion, Mark D

    2004-05-26

    Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
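
    The free-energy-difference machinery underlying FEP can be illustrated with the Zwanzig relation on a toy one-dimensional harmonic system whose exact answer is known analytically; this is not the paper's QM/MM protocol, and all parameters are illustrative.

```python
import math
import random

# Zwanzig relation: dG = -kT * ln < exp(-(U1 - U0)/kT) >_0, sampled in
# state 0. Toy system: two harmonic wells with spring constants k0, k1,
# for which dG = (kT/2) * ln(k1/k0) exactly.
random.seed(0)
kT = 1.0
k0, k1 = 1.0, 2.0
# Sample x from state 0 (Gaussian with variance kT/k0) and average the
# Boltzmann factor of the energy difference.
samples = [random.gauss(0.0, math.sqrt(kT / k0)) for _ in range(200000)]
acc = sum(math.exp(-(0.5 * k1 * x * x - 0.5 * k0 * x * x) / kT) for x in samples)
dG = -kT * math.log(acc / len(samples))
exact = 0.5 * kT * math.log(k1 / k0)
print(abs(dG - exact) < 0.02)  # True
```

    Conventional and QM/MM FEP differ in how U0 and U1 are evaluated (MM force field versus a QM solute in an MM environment), not in this averaging step.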

  8. Calculations of dose distributions using a neural network model

    NASA Astrophysics Data System (ADS)

    Mathieu, R.; Martin, E.; Gschwind, R.; Makovicka, L.; Contassot-Vivier, S.; Bahi, J.

    2005-03-01

    The main goal of external beam radiotherapy is the treatment of tumours, while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle all of these problems is to use a new approach in dosimetric calculation by employing neural networks. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of those various approaches while avoiding their main inconveniences, i.e., time-consumption calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results and quite low errors (less than 2%) for a two-dimensional dosimetric map.

  9. Calculations of dose distributions using a neural network model.

    PubMed

    Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J

    2005-03-07

    The main goal of external beam radiotherapy is the treatment of tumours, while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle all of these problems is to use a new approach in dosimetric calculation by employing neural networks. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of those various approaches while avoiding their main inconveniences, i.e., time-consumption calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journees Scientifiques Francophones, SFRP) provides almost instant results and quite low errors (less than 2%) for a two-dimensional dosimetric map.

  10. Comparison of the Young-Laplace law and finite element based calculation of ventricular wall stress: implications for postinfarct and surgical ventricular remodeling.

    PubMed

    Zhang, Zhihong; Tendulkar, Amod; Sun, Kay; Saloner, David A; Wallace, Arthur W; Ge, Liang; Guccione, Julius M; Ratcliffe, Mark B

    2011-01-01

    Both the Young-Laplace law and finite element (FE) based methods have been used to calculate left ventricular wall stress. We tested the hypothesis that the Young-Laplace law is able to reproduce results obtained with the FE method. Magnetic resonance imaging scans with noninvasive tags were used to calculate three-dimensional myocardial strain in 5 sheep 16 weeks after anteroapical myocardial infarction, and in 1 of those sheep 6 weeks after a Dor procedure. Animal-specific FE models were created for the 5 animals from magnetic resonance images obtained at early diastolic filling. The FE-based stress in the fiber, cross-fiber, and circumferential directions was calculated and compared to stress calculated with the assumption that wall thickness is very much less than the radius of curvature (Young-Laplace law), and without that assumption (modified Laplace). First, circumferential stress calculated with the modified Laplace law is closer to results obtained with the FE method than stress calculated with the Young-Laplace law. However, there are pronounced regional differences, with the largest difference between modified Laplace and FE occurring in the inner and outer layers of the infarct borderzone. Also, stress calculated with the modified Laplace law is very different from stress in the fiber and cross-fiber directions calculated with FE. As a consequence, the modified Laplace law is inaccurate when used to calculate the effect of the Dor procedure on regional ventricular stress. The FE method is necessary to determine stress in the left ventricle with postinfarct and surgical ventricular remodeling. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
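
    The thin-wall versus finite-wall contrast can be sketched for a sphere, where static force balance gives an exact thick-wall mean hoop stress that reduces to the Young-Laplace law when h << r; the pressure and dimensions below are illustrative, not the sheep-specific FE values, and the paper's modified-Laplace form may differ.

```python
# Young-Laplace thin-wall stress versus a finite-thickness force-balance
# form for a pressurized sphere. Numbers are illustrative (mmHg, mm).
def laplace_stress(p, r, h):
    """Thin-wall assumption h << r: sigma = p*r / (2*h)."""
    return p * r / (2.0 * h)

def thick_wall_mean_stress(p, r, h):
    """Force balance on a thick-walled sphere of inner radius r:
    sigma = p*r / (2*h*(1 + h/(2*r))), recovering Young-Laplace as h/r -> 0."""
    return p * r / (2.0 * h * (1.0 + h / (2.0 * r)))

p, r, h = 12.0, 25.0, 10.0   # pressure, inner radius, wall thickness
print(round(laplace_stress(p, r, h), 2),
      round(thick_wall_mean_stress(p, r, h), 2))  # 15.0 12.5
```

    With a ventricular wall this thick relative to its radius, the thin-wall formula overestimates the mean stress noticeably, which is one reason the FE comparison matters.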

  11. Dosimetric comparison of helical tomotherapy treatment plans for total marrow irradiation created using GPU and CPU dose calculation engines.

    PubMed

    Nalichowski, Adrian; Burmeister, Jay

    2013-07-01

    To compare optimization characteristics, plan quality, and treatment delivery efficiency between total marrow irradiation (TMI) plans created with the new TomoTherapy graphics processing unit (GPU) based dose engine and the CPU/cluster based dose engine. Five TMI plans created on an anthropomorphic phantom were optimized and calculated with both dose engines. The planning treatment volume (PTV) included all bones from the head to mid-femur except the upper extremities. Evaluated organs at risk (OAR) consisted of lung, liver, heart, kidneys, and brain. The following treatment parameters were used to generate the TMI plans: field widths of 2.5 and 5 cm, modulation factors of 2 and 2.5, and pitch of either 0.287 or 0.43. The optimization parameters were chosen based on the PTV and OAR priorities, and the plans were optimized with a fixed number of iterations. The PTV constraint ensured that at least 95% of the PTV received the prescription dose. The plans were evaluated based on D80 and D50 (dose to 80% and 50% of the OAR volume, respectively) and hotspot volumes within the PTVs. Gamma indices (Γ) were also used to compare planar dose distributions between the two modalities. The optimization and dose calculation times, as well as the treatment delivery times, were compared between the two systems. The results showed very good dosimetric agreement between the GPU and CPU calculated plans for all evaluated planning parameters, indicating that both systems converge on nearly identical plans. All D80 and D50 parameters varied by less than 3% of the prescription dose, with an average difference of 0.8%. A gamma analysis with a Γ(3%, 3 mm) criterion showed over 90% of calculated voxels in the GPU plans satisfying Γ < 1 relative to the baseline CPU plans; the average fraction of voxels meeting the Γ < 1 criterion across all plans was 97%. In terms of dose optimization/calculation efficiency, there was a 20-fold reduction in planning time with the new GPU system: the average optimization/dose calculation time was 579 min with the traditional CPU/cluster based system vs 26.8 min with the GPU based system. There was no difference in the calculated treatment delivery time per fraction. Beam-on time varied with field width and pitch and ranged between 15 and 28 min. The TomoTherapy GPU based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated with the traditional CPU/cluster based system, while significantly reducing the time required for optimization and dose calculation.
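
    A one-dimensional version of the gamma analysis used above can be sketched as follows, with a 3%/3 mm criterion and illustrative dose profiles.

```python
import numpy as np

# 1-D gamma analysis sketch: for each reference point, gamma is the
# minimum over evaluated points of
#   sqrt((dose diff / 3% of max)^2 + (distance / 3 mm)^2),
# and the point passes if gamma <= 1. Profiles are illustrative.
def gamma_pass_rate(x, ref, ev, dose_tol=0.03, dist_tol=3.0):
    norm = ref.max()
    passed = 0
    for xi, di in zip(x, ref):
        g2 = ((ev - di) / (dose_tol * norm)) ** 2 + ((x - xi) / dist_tol) ** 2
        if np.sqrt(g2.min()) <= 1.0:
            passed += 1
    return passed / len(x)

x = np.linspace(0.0, 100.0, 101)            # positions, mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)     # reference profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)      # evaluated profile, shifted 0.5 mm
print(gamma_pass_rate(x, ref, ev))  # 1.0
```

    A half-millimetre shift passes everywhere under 3%/3 mm, which is why gamma is preferred over point-by-point dose difference for comparing dose engines.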

  12. Accurate anharmonic zero-point energies for some combustion-related species from diffusion Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).
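
    The diffusion Monte Carlo idea behind these zero-point energies can be illustrated on a one-dimensional harmonic oscillator, whose exact zero-point energy is 0.5 in natural units; the walker count, time step, and population-control gain below are illustrative choices, not the papers' settings.

```python
import math
import random

# Minimal diffusion Monte Carlo sketch for V = x^2/2 (hbar = m = 1).
# Walkers diffuse and branch; the reference energy (mean potential plus
# a population-control term) averages to the ground-state energy E0 = 0.5.
random.seed(1)
dt, target_n = 0.01, 300
walkers = [0.0] * target_n
e_ref = 0.0
samples = []
for step in range(3000):
    new = []
    for x in walkers:
        x += random.gauss(0.0, math.sqrt(dt))        # free diffusion
        w = math.exp(-(0.5 * x * x - e_ref) * dt)    # branching weight
        copies = int(w + random.random())            # stochastic rounding
        new.extend([x] * min(copies, 3))             # birth/death, capped
    walkers = new or [0.0]
    mean_v = sum(0.5 * x * x for x in walkers) / len(walkers)
    e_ref = mean_v + 0.1 * math.log(target_n / len(walkers))
    if step >= 1000:                                 # discard equilibration
        samples.append(e_ref)
zpe = sum(samples) / len(samples)
print(round(zpe, 2))
```

    The molecular calculations replace x^2/2 with the 48 fitted CCSD(T) surfaces, but the walker machinery is the same.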

  13. Accurate Anharmonic Zero-Point Energies for Some Combustion-Related Species from Diffusion Monte Carlo.

    PubMed

    Harding, Lawrence B; Georgievskii, Yuri; Klippenstein, Stephen J

    2017-06-08

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. The resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).

  14. Accurate anharmonic zero-point energies for some combustion-related species from diffusion Monte Carlo

    DOE PAGES

    Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.

    2017-05-17

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).

  15. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

    To date, theoretical calculation of the half-maximal inhibitory concentration (IC50) of a compound has been based on various Quantitative Structure-Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this is computationally very expensive, as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute IC50 based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. Using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ), and reactivity descriptor (ω) of an inhibitor. We apply this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare them with experimental results and other available QSAR-based empirical results. Values calculated with our method are in very good agreement with the experimental values, compared to values calculated with other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
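
    The Cheng-Prusoff route mentioned above can be sketched for a competitive inhibitor, with Ki obtained from a hypothetical binding free energy; all values are illustrative.

```python
import math

# Cheng-Prusoff relation for a competitive inhibitor:
#   IC50 = Ki * (1 + [S]/Km),
# with Ki derived from a binding free energy via Ki = exp(dG / RT).
# All numbers below are illustrative.
R = 1.987e-3            # gas constant, kcal/(mol K)
T = 298.15              # temperature, K
dG = -9.0               # hypothetical binding free energy, kcal/mol
Ki = math.exp(dG / (R * T))     # dissociation constant, mol/L
S, Km = 50e-6, 25e-6            # substrate concentration and Km, mol/L
ic50 = Ki * (1.0 + S / Km)
print(ic50 > Ki)  # competing substrate raises IC50 above Ki: True
```

    The expense the abstract refers to sits in obtaining dG; the descriptor-based method replaces that step with properties of the inhibitor alone.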

  16. 77 FR 21529 - Freshwater Crawfish Tail Meat From the People's Republic of China: Final Results of Antidumping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... question, including when that rate is zero or de minimis.\\5\\ In this case, there is only one non-selected... calculations for one company. Therefore, the final results differ from the preliminary results. The final... not to calculate an all-others rate using any zero or de minimis margins or any margins based entirely...

  17. [Prenatal risk calculation: comparison between Fast Screen pre I plus software and ViewPoint software. Evaluation of the risk calculation algorithms].

    PubMed

    Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk

    2013-01-01

    The Fetal Medicine Foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) for Down syndrome screening based on free hCGβ, PAPP-A, and nuchal translucency. The peculiarity of this algorithm is its use of the degree of extremeness (DoE) instead of the multiple of the median (MoM). Biologists measuring maternal serum markers on Kryptor™ analyzers (Thermo Fisher Scientific) use the Fast Screen pre I plus software for prenatal risk calculation; this software implements the PRC algorithm. Our study evaluates the data of 2,092 patient files, of which 19 show a fetal abnormality. These files were first evaluated with the ViewPoint software, which is based on MoM. The link between DoE and MoM was analyzed and the different calculated risks compared. The study shows that the Fast Screen pre I plus software gives the same risk results as the ViewPoint software while yielding significantly fewer false-positive results.
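
    The MoM normalisation that ViewPoint-style software applies before risk calculation can be sketched as follows; the medians and measurements are illustrative, and the DoE alternative is not reproduced here.

```python
# Multiple-of-the-median (MoM) sketch: each measured marker is divided
# by its gestational-age-specific median, so 1.0 means "typical for this
# gestational age". Medians and measurements below are illustrative.
def to_mom(measured, ga_median):
    return measured / ga_median

# Hypothetical first-trimester medians for free hCG-beta and PAPP-A.
hcgb_mom = to_mom(90.0, 45.0)
papp_a_mom = to_mom(1.2, 2.4)
print(hcgb_mom, papp_a_mom)  # 2.0 0.5
```

    A trisomy-21 risk engine then combines such normalised markers (MoM or DoE) with the maternal-age prior through likelihood ratios.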

  18. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
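
    The replacement of double interpolation by an orthogonal-polynomial least-squares surface fit can be sketched with NumPy's Legendre-basis helpers; the smooth test surface below is an illustrative stand-in for a tabulated equation of state.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Fit z(x, y) on a grid in a 2-D Legendre basis, then evaluate anywhere
# without repeated table interpolation. The surface is illustrative.
x = np.linspace(-1, 1, 21)
y = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, y)
Z = 1.0 + 2.0 * X + 3.0 * Y * Y     # smooth stand-in for tabulated data

# Degree-(2,2) Legendre design matrix, solved in the least-squares sense.
V = L.legvander2d(X.ravel(), Y.ravel(), [2, 2])
coef, *_ = np.linalg.lstsq(V, Z.ravel(), rcond=None)
fit = L.legval2d(0.3, -0.5, coef.reshape(3, 3))
print(abs(fit - (1.0 + 2.0 * 0.3 + 3.0 * 0.25)) < 1e-8)  # True
```

    Once the coefficients are stored, each evaluation is a short polynomial sum, which is the speed advantage the report exploits over double interpolation.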

  19. The modeler's influence on calculated solubilities for performance assessments at the Äspö Hard-rock Laboratory

    USGS Publications Warehouse

    Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.

    1999-01-01

    Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes of this variability. In the exercise, modelers were supplied with the composition, pH, and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic database was provided to all modelers. Each modeler was encouraged to use other databases in addition to the standard one and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic databases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.

  20. A medical image-based graphical platform -- features, applications and relevance for brachytherapy.

    PubMed

    Fonseca, Gabriel P; Reniers, Brigitte; Landry, Guillaume; White, Shane; Bellezzo, Murillo; Antunes, Paula C G; de Sales, Camila P; Welteman, Eduardo; Yoriyaz, Hélio; Verhaegen, Frank

    2014-01-01

    Brachytherapy dose calculation is commonly performed using the Task Group-No 43 Report-Updated protocol (TG-43U1) formalism. Recently, a more accurate approach has been proposed that can handle tissue composition, tissue density, body shape, applicator geometry, and dose reporting either in media or water. Some model-based dose calculation algorithms are based on Monte Carlo (MC) simulations. This work presents a software platform capable of processing medical images and treatment plans, and preparing the required input data for MC simulations. The A Medical Image-based Graphical platfOrm-Brachytherapy module (AMIGOBrachy) is a user interface, coupled to the MCNP6 MC code, for absorbed dose calculations. The AMIGOBrachy was first validated in water for a high-dose-rate (192)Ir source. Next, dose distributions were validated in uniform phantoms consisting of different materials. Finally, dose distributions were obtained in patient geometries. Results were compared against a treatment planning system including a linear Boltzmann transport equation (LBTE) solver capable of handling nonwater heterogeneities. The TG-43U1 source parameters are in good agreement with literature with more than 90% of anisotropy values within 1%. No significant dependence on the tissue composition was observed comparing MC results against an LBTE solver. Clinical cases showed differences up to 25%, when comparing MC results against TG-43U1. About 92% of the voxels exhibited dose differences lower than 2% when comparing MC results against an LBTE solver. The AMIGOBrachy can improve the accuracy of the TG-43U1 dose calculation by using a more accurate MC dose calculation algorithm. The AMIGOBrachy can be incorporated in clinical practice via a user-friendly graphical interface. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  1. Estimation of PV energy production based on satellite data

    NASA Astrophysics Data System (ADS)

    Mazurek, G.

    2015-09-01

Photovoltaic (PV) technology is an attractive source of power for systems without a connection to the power grid. Because of seasonal variations in solar radiation, the design of such a power system requires careful analysis in order to provide the required reliability. In this paper we present the results of three-year measurements of an experimental PV system located in Poland and based on a polycrystalline silicon module. Irradiation values calculated from ground measurements have been compared with data from solar radiation databases that derive their values from satellite observations. Good agreement between the two data sources has been shown, especially during summer. When satellite data from the same time period are available, yearly and monthly PV energy production can be calculated with 2% and 5% accuracy, respectively. However, monthly production during winter seems to be overestimated, especially in January. The results of this work may be helpful in forecasting the performance of similar PV systems in Central Europe and allow more precise forecasts of PV system performance than those based only on tables of long-term averaged values.
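A production estimate of the kind described can be sketched with the standard irradiation-times-performance-ratio formula; the performance ratio used below is an assumed typical value, not a figure from the paper.

```python
def pv_energy_kwh(irradiation_kwh_m2, p_stc_kw, performance_ratio=0.8):
    """Estimate PV energy yield (kWh) from plane-of-array irradiation.

    Uses E = H_poa * (P_stc / G_stc) * PR with G_stc = 1 kW/m^2.
    The performance ratio PR (temperature, soiling, inverter losses)
    is an assumed typical value, not a figure from the paper.
    """
    g_stc = 1.0  # kW/m^2, irradiance at standard test conditions
    return irradiation_kwh_m2 * (p_stc_kw / g_stc) * performance_ratio

# e.g. a 0.25 kWp module receiving 100 kWh/m^2 in a summer month
monthly_kwh = pv_energy_kwh(100.0, 0.25)
```

With satellite-derived monthly irradiation in place of ground data, the same formula yields the monthly production figures whose accuracy the paper assesses.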

  2. Introduction of a method for presenting health-based impacts of the emission from products, based on emission measurements of materials used in manufacturing of the products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jørgensen, Rikke Bramming, E-mail: rikke.jorgensen@iot.ntnu.no

A method for presenting the health impact of emissions from furniture is introduced, which could be used in the context of environmental product declarations. The health impact is described by the negative indoor air quality potential, the carcinogenic potential, the mutagenic and reprotoxic potential, the allergenic potential, and the toxicological potential. An experimental study of emissions from four pieces of furniture is performed by testing both the materials used for production of the furniture and the complete piece of furniture, in order to compare the results gained by adding the emissions of the materials with the results gained from testing the finished piece of furniture. Calculating the emission from a product based on the emission from the materials used in its manufacture is a new idea. The relation between calculated results and measured results from the same products differs among the four pieces of furniture tested. Large differences between measured and calculated values are seen for leather products. More knowledge is needed to understand why these differences arise. Testing materials allows us to compare different suppliers of the same material. Four different foams and three different timber materials are tested, and the results vary between materials of the same type. If the manufacturer possesses this type of knowledge of the materials from the subcontractors, it could be used as a selection criterion for the production of low-emission products. Highlights: • A method for presenting the health impact of emissions is introduced. • An experimental study of emissions from four pieces of furniture is performed. • Health impact is calculated based on the sum of contributions from the materials used. • Calculated health impact is compared to the health impact of the manufactured product. • The results show that health impact could be useful in product development and for presentation in EPDs.

  3. Analytical torque calculation and experimental verification of synchronous permanent magnet couplings with Halbach arrays

    NASA Astrophysics Data System (ADS)

    Seo, Sung-Won; Kim, Young-Hyun; Lee, Jung-Ho; Choi, Jang-Young

    2018-05-01

    This paper presents analytical torque calculation and experimental verification of synchronous permanent magnet couplings (SPMCs) with Halbach arrays. A Halbach array is composed of various numbers of segments per pole; we calculate and compare the magnetic torques for 2, 3, and 4 segments. Firstly, based on the magnetic vector potential, and using a 2D polar coordinate system, we obtain analytical solutions for the magnetic field. Next, through a series of processes, we perform magnetic torque calculations using the derived solutions and a Maxwell stress tensor. Finally, the analytical results are verified by comparison with the results of 2D and 3D finite element analysis and the results of an experiment.

  4. Dose equivalent rate constants and barrier transmission data for nuclear medicine facility dose calculations and shielding design.

    PubMed

    Kusano, Maggie; Caldwell, Curtis B

    2014-07-01

    A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. 
The data provided by this paper should assist in improving the accuracy and tractability of dose and shielding calculations for nuclear medicine facility design.
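The three-parameter Archer model mentioned above has a standard closed form. The sketch below evaluates it alongside a single-TVL approximation to show how the two can diverge; the parameter values are illustrative only, not the paper's fitted values.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission factor B(x) at barrier thickness x using the
    three-parameter Archer model:
        B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)
    By construction B(0) = 1, and alpha sets the asymptotic attenuation slope.
    """
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def tvl_transmission(x, tvl):
    """Single-TVL approximation: one decade of attenuation per TVL."""
    return 10.0 ** (-x / tvl)

# Illustrative lead-like parameters (1/cm, 1/cm, dimensionless) -- assumptions,
# not values fitted in the paper.
alpha, beta, gamma = 1.5, 0.4, 0.6
for x_cm in (0.0, 0.5, 1.0, 2.0):
    print(x_cm, archer_transmission(x_cm, alpha, beta, gamma),
          tvl_transmission(x_cm, 1.5))
```

Fitting alpha, beta, and gamma to spectrum-based transmission data (e.g. with a least-squares routine) gives the curve-fit TFs the paper compares against TVL-based values.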

  5. Supporting calculations and assumptions for use in WESF safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hey, B.E.

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  6. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    PubMed

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  7. Synthesis, tautomeric stability, spectroscopy and computational study of a potential molecular switch of (Z)-4-(phenylamino)pent-3-en-2-one

    NASA Astrophysics Data System (ADS)

    Fahid, Farzaneh; Kanaani, Ayoub; Pourmousavi, Seied Ali; Ajloo, Davood

    2017-04-01

The (Z)-4-(phenylamino)pent-3-en-2-one (PAPO) was synthesised using a carbon-based solid acid and characterised by experimental techniques. Calculated results reveal that its keto-amine form is more stable than its enol-imine form. A relaxed potential energy surface scan has been carried out based on the optimised geometry of the NH tautomeric form to depict the potential energy barrier related to intramolecular proton transfer. The spectroscopic results and theoretical calculations demonstrate that the intramolecular hydrogen bonding of PAPO is stronger than that in 4-amino-3-penten-2-one (APO). In addition, the molecular electrostatic potential, the total and partial density of states (TDOS, PDOS), and the non-linear optical properties of the compound were studied using the same theoretical calculations. Our calculations show that the title molecule has the potential to be used as a molecular switch.

  8. Theoretical prediction of the band offsets at the ZnO/anatase TiO2 and GaN/ZnO heterojunctions using the self-consistent ab initio DFT/GGA-1/2 method.

    PubMed

    Fang, D Q; Zhang, S L

    2016-01-07

    The band offsets of the ZnO/anatase TiO2 and GaN/ZnO heterojunctions are calculated using the density functional theory/generalized gradient approximation (DFT/GGA)-1/2 method, which takes into account the self-energy corrections and can give an approximate description to the quasiparticle characteristics of the electronic structure of semiconductors. We present the results of the ionization potential (IP)-based and interfacial offset-based band alignments. In the interfacial offset-based band alignment, to get the natural band offset, we use the surface calculations to estimate the change of reference level due to the interfacial strain. Based on the interface models and GGA-1/2 calculations, we find that the valence band maximum and conduction band minimum of ZnO, respectively, lie 0.64 eV and 0.57 eV above those of anatase TiO2, while lie 0.84 eV and 1.09 eV below those of GaN, which agree well with the experimental data. However, a large discrepancy exists between the IP-based band offset and the calculated natural band offset, the mechanism of which is discussed. Our results clarify band alignment of the ZnO/anatase TiO2 heterojunction and show good agreement with the GW calculations for the GaN/ZnO heterojunction.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Jing-Jy; Flood, Paul E.; LePoire, David

In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by redesigning the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. First, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. Results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.

  10. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source.

    PubMed

    White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte

    2014-06-01

The recently updated guidelines for dosimetry in brachytherapy in TG-186 have recommended the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings of the water-based approach in TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast is a heterogeneous tissue in terms of density and composition. Dosimetric calculations of seven APBI patients treated with Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods, and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models, with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be modeled in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries. These techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43-based dose distributions and evaluated using dose ratio maps and DVH metrics. Changes in skin and PTV dose were highlighted. All simulated heterogeneous models showed a reduced dose in the DVH metrics that is dependent on the method of dose reporting and patient geometry. Based on a prescription dose of 34 Gy, the average D90 to the PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result.
Peak skin dose is also reduced by 10%-15% due to the absence of backscatter not accounted for in TG-43. The balloon applicator also contributed to the reduced dose. Other ROIs showed a difference depending on the method of dose reporting. TG-186-based calculations produce results that are different from TG-43 for the Axxent source. The differences depend strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. Tissue heterogeneities, applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low energy brachytherapy sources.

  11. An Adaptive Nonlinear Basal-Bolus Calculator for Patients With Type 1 Diabetes

    PubMed Central

    Boiroux, Dimitri; Aradóttir, Tinna Björk; Nørgaard, Kirsten; Poulsen, Niels Kjølstad; Madsen, Henrik; Jørgensen, John Bagterp

    2016-01-01

Background: Bolus calculators help patients with type 1 diabetes to mitigate the effect of meals on their blood glucose by administering a large amount of insulin at mealtime. Intraindividual changes in patients' physiology and nonlinearity in insulin-glucose dynamics pose a challenge to the accuracy of such calculators. Method: We propose a method based on a continuous-discrete unscented Kalman filter to continuously track the postprandial glucose dynamics and the insulin sensitivity. We augment the Medtronic Virtual Patient (MVP) model to simulate noise-corrupted data from a continuous glucose monitor (CGM). The basal rate is determined by calculating the steady state of the model and is adjusted once a day before breakfast. The bolus size is determined by optimizing the postprandial glucose values based on an estimate of the insulin sensitivity and states, as well as the announced meal size. Following meal announcements, the meal compartment and the meal time constant are estimated; otherwise, insulin sensitivity is estimated. Results: We compare the performance of a conventional linear bolus calculator with the proposed bolus calculator. The proposed basal-bolus calculator significantly improves the time spent in the glucose target range (P < .01) compared to the conventional bolus calculator. Conclusion: An adaptive nonlinear basal-bolus calculator can efficiently compensate for physiological changes. Further clinical studies will be needed to validate the results. PMID:27613658
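For context, the conventional linear bolus calculator used as the baseline in this comparison generally follows the textbook carbohydrate-ratio/correction-factor form sketched below; the parameter names and values are illustrative assumptions, not taken from the study.

```python
def linear_bolus(carbs_g, bg, target_bg, icr, isf, iob=0.0):
    """Conventional linear bolus calculation.

    carbs_g:   announced meal size in grams of carbohydrate
    bg:        current blood glucose (mg/dL)
    target_bg: glucose target (mg/dL)
    icr:       insulin-to-carb ratio (g covered per U of insulin)
    isf:       insulin sensitivity factor (mg/dL drop per U)
    iob:       insulin on board (U), subtracted to avoid stacking
    """
    meal_term = carbs_g / icr                 # insulin to cover the meal
    correction_term = (bg - target_bg) / isf  # insulin to correct toward target
    return max(0.0, meal_term + correction_term - iob)

# e.g. a 60 g meal at 180 mg/dL, targeting 120 mg/dL, ICR 10 g/U, ISF 50 mg/dL/U
bolus_u = linear_bolus(60.0, 180.0, 120.0, 10.0, 50.0)
```

The adaptive method in the paper effectively replaces the fixed ICR and ISF with state and sensitivity estimates updated continuously from CGM data.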

  12. Development and testing of a European Union-wide farm-level carbon calculator

    PubMed Central

    Tuomisto, Hanna L; De Camillis, Camillo; Leip, Adrian; Nisini, Luigi; Pelletier, Nathan; Haastrup, Palle

    2015-01-01

Direct greenhouse gas (GHG) emissions from agriculture accounted for approximately 10% of total European Union (EU) emissions in 2010. To reduce farming-related GHG emissions, appropriate policy measures and supporting tools for promoting low-C farming practices may be efficacious. This article presents the methodology and testing results of a new EU-wide, farm-level C footprint calculator. The Carbon Calculator quantifies GHG emissions based on international standards and technical specifications on Life Cycle Assessment (LCA) and C footprinting. The tool delivers its results both at the farm level and as allocated to up to 5 main products of the farm. In addition to the quantification of GHG emissions, the calculator proposes mitigation options and sequestration actions that may be suitable for individual farms. The results obtained during a survey of 54 farms from 8 EU Member States are presented. These farms were selected with a view to representing the diversity of farm types across different environmental zones in the EU. The results of the C footprint of products in the data set show a wide range of variation between minimum and maximum values. The results of the mitigation actions showed that the tool can help identify practices that can lead to substantial emission reductions. To avoid burden-shifting from climate change to other environmental issues, future improvements of the tool should include the incorporation of other environmental impact categories rather than solely focusing on GHG emissions. Integr Environ Assess Manag 2015;11:404–416. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of SETAC. Key Points: The methodology and testing results of a new European Union-wide, farm-level carbon calculator are presented. The Carbon Calculator reports life cycle assessment-based greenhouse gas emissions at farm and product levels and recommends farm-specific mitigation actions.
Based on the results obtained from testing the tool in 54 farms in 8 European countries, it was found that the product-level carbon footprint results are comparable with those of other studies focusing on similar products. The results of the mitigation actions showed that the tool can help identify practices that can lead to substantial emission reductions. PMID:25655187

  13. The segmentation of Thangka damaged regions based on the local distinction

    NASA Astrophysics Data System (ADS)

    Xuehui, Bi; Huaming, Liu; Xiuyou, Wang; Weilan, Wang; Yashuai, Yang

    2017-01-01

Damaged regions must be segmented before digitally repairing Thangka cultural relics. A new segmentation algorithm based on local distinction is proposed for segmenting damaged regions. It takes into account the transition zone feature of some damaged areas, as well as the difference between the damaged regions and their surrounding regions, combining the local gray value, local complexity, and local definition-complexity (LDC). First, calculate the local complexity and normalize it; second, calculate the local definition-complexity and normalize it; third, calculate the local distinction; finally, set a threshold to segment the local distinction image, remove over-segmentation, and obtain the final segmentation result. The experimental results show that our algorithm is effective, and it can segment damaged frescoes, natural images, etc.
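The four steps above can be sketched as follows. The abstract does not give the exact definitions of local complexity, definition-complexity, or the combination rule, so the statistics used here (neighborhood standard deviation, mean absolute gradient, and a simple product) are placeholder assumptions illustrating the pipeline only.

```python
import numpy as np

def local_stat(img, k, fn):
    """Apply fn over each k x k neighborhood (plain loop; fine for a sketch)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = fn(p[i:i + k, j:j + k])
    return out

def normalize(a):
    rng = a.max() - a.min()
    return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)

def damage_mask(img, k=5, thresh=0.5):
    gray = normalize(local_stat(img, k, np.mean))        # local gray value
    complexity = normalize(local_stat(img, k, np.std))   # placeholder: std dev
    # placeholder "definition-complexity": mean absolute neighborhood gradient
    defc = normalize(local_stat(img, k, lambda win:
                     np.abs(np.diff(win, axis=0)).mean()
                     + np.abs(np.diff(win, axis=1)).mean()))
    # assumed combination: bright, flat, low-gradient regions score high
    distinction = normalize(gray * (1.0 - complexity) * (1.0 - defc))
    return distinction > thresh   # threshold the local distinction image
```

A post-processing pass (e.g. removing small connected components) would correspond to the paper's over-segmentation removal step.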

  14. VHDL-AMS modelling and simulation of a planar electrostatic micromotor

    NASA Astrophysics Data System (ADS)

    Endemaño, A.; Fourniols, J. Y.; Camon, H.; Marchese, A.; Muratet, S.; Bony, F.; Dunnigan, M.; Desmulliez, M. P. Y.; Overton, G.

    2003-09-01

System level simulation results of a planar electrostatic micromotor, based on analytical models of the static and dynamic torque behaviours, are presented. A planar variable capacitance (VC) electrostatic micromotor designed, fabricated and tested at LAAS (Toulouse) in 1995 is simulated using the high level language VHDL-AMS (VHSIC (very high speed integrated circuits) hardware description language-analog mixed signal). The analytical torque model is obtained by first calculating the overlaps and capacitances between different electrodes based on a conformal mapping transformation. Capacitance values of the order of 10^-16 F and torque values of the order of 10^-11 N m have been calculated, in agreement with previous measurements and simulations of this type of motor. A dynamic model has been developed for the motor by calculating the inertia coefficient and estimating the friction coefficient from values calculated previously for other similar devices. Starting voltage results obtained from experimental measurement are in good agreement with our proposed simulation model. Simulation results of starting voltage values, step response, switching response and continuous operation of the micromotor, based on the dynamic model of the torque, are also presented. Four VHDL-AMS blocks were created, validated and simulated for power supply, excitation control, micromotor torque creation and micromotor dynamics. These blocks can be considered as the initial phase towards the creation of intellectual property (IP) blocks for microsystems in general and electrostatic micromotors in particular.

  15. Theoretical prediction of the electronic transport properties of the Al-Cu alloys based on the first-principle calculation and Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Choi, Garam; Lee, Won Bo

Metal alloys, especially Al-based ones, are commonly used materials for various industrial applications. In this paper, Al-Cu alloys with varying Al-Cu ratios were investigated based on first-principles calculations using density functional theory, and the electronic transport properties of the alloys were obtained using Boltzmann transport theory. The results show that the transport properties decrease with increasing Cu content at moderate to high temperatures, but nonlinearly; this behaviour is attributed to various scattering effects, as inferred from the calculation results within the relaxation time approximation. For the Al-Cu alloy system, where reliable experimental data for the various alloys are hard to find, this work supports the understanding and prediction of thermal and electrical properties from theory.

  16. A theory for the fracture of thin plates subjected to bending and twisting moments

    NASA Technical Reports Server (NTRS)

    Hui, C. Y.; Zehnder, Alan T.

    1993-01-01

    Stress fields near the tip of a through crack in an elastic plate under bending and twisting moments are reviewed assuming both Kirchhoff and Reissner plate theories. The crack tip displacement and rotation fields based on the Reissner theory are calculated. These results are used to calculate the J-integral (energy release rate) for both Kirchhoff and Reissner plate theories. Invoking Simmonds and Duva's (1981) result that the value of the J-integral based on either theory is the same for thin plates, a universal relationship between the Kirchhoff theory stress intensity factors and the Reissner theory stress intensity factors is obtained for thin plates. Calculation of Kirchhoff theory stress intensity factors from finite elements based on energy release rate is illustrated. It is proposed that, for thin plates, fracture toughness and crack growth rates be correlated with the Kirchhoff theory stress intensity factors.

  17. An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks

    NASA Astrophysics Data System (ADS)

    Zhao, Peng-yuan; Huang, Xiao-ping

    2018-03-01

Errors arise in calculating the fatigue damage of details in liquid cargo tanks using the traditional spectral analysis method, which is based on a linear system, because of the nonlinear relationship between the dynamic stress and the ship acceleration. An improved spectral analysis method for the assessment of fatigue damage in a detail of a liquid cargo tank is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves of different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of each harmonic component with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated based on the PSD of stress. Taking an independent tank in an LNG carrier as an example, the accuracy of the improved spectral analysis method is shown to be much better than that of the traditional spectral analysis method, by comparing the calculated damage results with results calculated by the time domain method. The proposed spectral analysis method is more accurate for calculating fatigue damage in details of ship liquid cargo tanks.
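The amplitude-summing idea (add harmonic amplitudes at the same frequency before squaring, rather than adding the per-wave PSDs) can be sketched as follows; the stress `response` function and all numeric values are illustrative assumptions, not the paper's tank model.

```python
import numpy as np

def stress_psd(freqs, amps, response, T=200.0, n=4096):
    """Assemble a one-sided stress PSD from per-wave stress processes.

    Each sinusoidal wave component (frequency f_i, amplitude a_i) is passed
    through a possibly nonlinear stress `response` function, expanded into
    its harmonics by FFT, and the complex harmonic amplitudes at the same
    frequency are summed before squaring -- the step that retains nonlinear
    cross terms which a linear transfer-function approach would miss.
    """
    t = np.linspace(0.0, T, n, endpoint=False)
    df = 1.0 / T
    total = np.zeros(n, dtype=complex)
    for f, a in zip(freqs, amps):
        s = response(a * np.sin(2.0 * np.pi * f * t))  # stress from this wave
        total += np.fft.fft(s) / n                     # add harmonic amplitudes
    psd = 2.0 * np.abs(total[: n // 2]) ** 2 / df
    fgrid = np.arange(n // 2) * df
    return fgrid, psd

# illustrative nonlinear stress response (the quadratic term is an assumption)
resp = lambda x: 80.0 * x + 15.0 * x ** 2
fgrid, psd = stress_psd([0.10, 0.15], [1.0, 0.8], resp)
```

The quadratic term produces second-harmonic and sum/difference-frequency peaks (e.g. at 0.20 and 0.25 Hz here) that would be absent from a purely linear spectral analysis.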

  18. Monte Carlo dose calculations for high-dose-rate brachytherapy using GPU-accelerated processing.

    PubMed

    Tian, Z; Zhang, M; Hrycushko, B; Albuquerque, K; Jiang, S B; Jia, X

    2016-01-01

Current clinical brachytherapy dose calculations are typically based on the American Association of Physicists in Medicine Task Group report 43 (TG-43) guidelines, which approximate patient geometry as an infinitely large water phantom. This ignores patient and applicator geometries and heterogeneities, causing dosimetric errors. Although Monte Carlo (MC) dose calculation is commonly recognized as the most accurate method, its long computational time is a major bottleneck for routine clinical applications. This article presents our recent development of a fast MC dose calculation package for high-dose-rate (HDR) brachytherapy, gBMC, built on a graphics processing unit (GPU) platform. gBMC simulates photon transport in voxelized geometry, with the physics relevant to the (192)Ir HDR brachytherapy energy range considered. A phase-space file was used as a source model. GPU-based parallel computation was used to simultaneously transport multiple photons, one per GPU thread. We validated gBMC by comparing dose calculation results in water with those computed with TG-43. We also studied heterogeneous phantom cases and a patient case and compared gBMC results with Acuros BV results. The radial dose function in water calculated by gBMC showed <0.6% relative difference from the TG-43 data. The difference in the anisotropy function was <1%. In two heterogeneous slab phantoms and one shielded cylinder applicator case, the average dose discrepancy between gBMC and Acuros BV was <0.87%. For a tandem and ovoid patient case, good agreement between gBMC and Acuros BV results was observed in both isodose lines and dose-volume histograms. In terms of efficiency, it took ∼47.5 seconds for gBMC to reach 0.15% statistical uncertainty within the 5% isodose line for the patient case. The accuracy and efficiency of the new GPU-based MC dose calculation package, gBMC, for HDR brachytherapy make it attractive for clinical applications. Copyright © 2016 American Brachytherapy Society. 
Published by Elsevier Inc. All rights reserved.

  19. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to the PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced, and the grid current is decomposed into a sum of simple functions. By calculating the harmonics of these simple functions with the piecewise method, the harmonics of the PF converter under different operation modes are obtained. To examine the validity of the method, a simulation model was established in Matlab/Simulink and a corresponding experiment was carried out on the ITER PF integration test platform. Comparative results are given: the calculated results are consistent with both simulation and experiment, showing the piecewise method to be correct and valid for calculating the system harmonics.
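
    The segment-wise idea can be sketched numerically: if the grid current is decomposed into piecewise-constant segments, each segment's contribution to the n-th Fourier harmonic has a closed form, and summing the contributions yields the harmonic. A minimal sketch, using an idealized six-pulse line current as the waveform (an illustrative stand-in, not the actual ITER PF waveform):

```python
import math

def harmonic(segments, n, T):
    """n-th Fourier harmonic coefficients (a_n, b_n) of a
    piecewise-constant waveform given as [(t_start, t_end, value), ...]
    over one period T, using closed-form segment integrals."""
    w = 2.0 * math.pi * n / T
    a = b = 0.0
    for t0, t1, i in segments:
        # integral of i*cos(w t) and i*sin(w t) over [t0, t1]
        a += i * (math.sin(w * t1) - math.sin(w * t0)) / w
        b += -i * (math.cos(w * t1) - math.cos(w * t0)) / w
    return 2.0 * a / T, 2.0 * b / T

# idealized six-pulse converter line current: +Id for 120 deg, -Id for 120 deg
T = 1.0 / 50.0
Id = 1.0
seg = [(T / 12, 5 * T / 12, Id), (7 * T / 12, 11 * T / 12, -Id)]
for n in (1, 5, 7, 11, 13):
    a, b = harmonic(seg, n, T)
    print(n, round(math.hypot(a, b), 4))
```

For this idealized waveform the characteristic harmonics appear at n = 5, 7, 11, 13, ... with amplitude falling as 1/n, and the triplen harmonics vanish, which is the kind of spectrum the piecewise decomposition reproduces.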

  20. Two- and three-photon ionization of hydrogen and lithium

    NASA Technical Reports Server (NTRS)

    Chang, T. N.; Poe, R. T.

    1977-01-01

    We present the detailed result of a calculation on two- and three-photon ionization of hydrogen and lithium based on a recently proposed calculational method. Our calculation has demonstrated that this method is capable of retaining the numerical advantages enjoyed by most of the existing calculational methods and, at the same time, circumventing their limitations. In particular, we have concentrated our discussion on the relative contribution from the resonant and nonresonant intermediate states.

  1. Three-Dimensional Electron Beam Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Shiu, Almon Sowchee

    The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to lie primarily in assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport, redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made.
From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site specific treatment planning problems.

  2. Models construction for acetone-butanol-ethanol fermentations with acetate/butyrate consecutively feeding by graph theory.

    PubMed

    Li, Zhigang; Shi, Zhongping; Li, Xin

    2014-05-01

    Several fermentations with consecutive feeding of acetate/butyrate were conducted in a 7 L fermentor, and the results indicated that exogenous acetate/butyrate enhanced solvent productivities by 47.1% and 39.2% respectively and greatly changed the butyrate/acetate ratios. The extracellular butyrate/acetate ratios were then utilized to calculate acid formation rates, and the results revealed that the acetate and butyrate formation pathways were almost blocked by feeding of the corresponding acids. In addition, models for the acetate/butyrate feeding fermentations were constructed by graph theory, based on the calculation results and relevant reports. Solvent concentrations and butanol/acetone ratios of these fermentations were also calculated, and the model calculations matched the fermentation data accurately, demonstrating that the models were constructed in a reasonable way. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Calculation of the surface tension of liquid Ga-based alloys

    NASA Astrophysics Data System (ADS)

    Dogan, Ali; Arslan, Hüseyin

    2018-05-01

    As is known, Eyring and his collaborators applied the structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi into Ga leads to a significant decrease in the surface tension of the three Ga-based alloy systems, especially for the Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviation from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and the corresponding literature data indicates good agreement.

  4. WE-AB-BRA-06: 4DCT-Ventilation: A Novel Imaging Modality for Thoracic Surgical Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Y; Jackson, M; Schubert, L

    Purpose: The current standard-of-care imaging used to evaluate lung cancer patients for surgical resection is nuclear-medicine ventilation. Surgeons use nuclear-medicine images along with pulmonary function tests (PFTs) to calculate percent predicted postoperative (%PPO) PFT values by estimating the amount of functioning lung that would be lost with surgery. 4DCT-ventilation is an emerging imaging modality developed in radiation oncology that uses 4DCT data to calculate lung ventilation maps. We performed the first retrospective study to assess the use of 4DCT-ventilation for pre-operative surgical evaluation. The purpose of this work was to compare %PPO-PFT values calculated with 4DCT-ventilation and nuclear-medicine imaging. Methods: 16 retrospectively reviewed lung cancer patients had undergone 4DCT and nuclear-medicine imaging and had Forced Expiratory Volume in 1 second (FEV1) acquired as part of a standard PFT. For each patient, 4DCT data sets, spatial registration, and a density-change-based model were used to compute 4DCT-ventilation maps. Both 4DCT and nuclear-medicine images were used to calculate %PPO-FEV1 using %PPO-FEV1 = pre-operative FEV1 * (1 - fraction of total ventilation of resected lung). The fraction of ventilation resected was calculated assuming lobectomy and pneumonectomy. The %PPO-FEV1 values were compared between the 4DCT-ventilation-based and the nuclear-medicine-based calculations using correlation coefficients and average differences. Results: The correlations between %PPO-FEV1 values calculated with 4DCT-ventilation and nuclear-medicine were 0.81 (p<0.01) and 0.99 (p<0.01) for pneumonectomy and lobectomy respectively. The average differences between the 4DCT-ventilation-based and the nuclear-medicine-based %PPO-FEV1 values were small, 4.1±8.5% and 2.9±3.0% for pneumonectomy and lobectomy respectively.
    Conclusion: The high correlation results provide a strong rationale for a clinical trial translating 4DCT-ventilation to the surgical domain. Compared to nuclear medicine, 4DCT-ventilation is cheaper, does not require a radioactive contrast agent, provides a faster imaging procedure, and has improved spatial resolution. 4DCT-ventilation can reduce the cost and imaging time for patients while providing improved spatial accuracy and quantitative results for surgeons. YV discloses a grant from the State of Colorado.
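
    The %PPO-FEV1 relation quoted in the Methods is a one-line calculation; a minimal sketch (the patient numbers are hypothetical, for illustration only):

```python
def ppo_fev1(pre_op_fev1, fraction_resected):
    """Percent predicted postoperative FEV1, using the relation from the
    abstract: %PPO-FEV1 = pre-op FEV1 * (1 - fraction of total
    ventilation contributed by the resected lung)."""
    if not 0.0 <= fraction_resected <= 1.0:
        raise ValueError("fraction_resected must be in [0, 1]")
    return pre_op_fev1 * (1.0 - fraction_resected)

# hypothetical patient: pre-op FEV1 of 2.4 L, with the resected lobe
# contributing 18% of total ventilation on the ventilation map
print(ppo_fev1(2.4, 0.18))  # lobectomy scenario, in litres
```

The fraction of ventilation resected is the only quantity that differs between the 4DCT-ventilation-based and nuclear-medicine-based calculations, which is why the comparison reduces to comparing that fraction.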

  5. Consideration of the respiratory cycle asymmetry in the numerical modeling of the submicron particles deposition in the human nasal cavity

    NASA Astrophysics Data System (ADS)

    Ganimedov, V. L.; Muchnaya, M. I.

    2017-10-01

    A detailed study was conducted of the behavior of the U-shaped curve that describes the deposition efficiency of inhaled particles in the human nasal cavity. Particles in the range from 1 nm to 20 µm are considered. Calculations of air flow and particle deposition were carried out for symmetrical (idealized) and asymmetrical (real) breathing cycles at the same volume of inhaled air, corresponding to calm breathing. The calculations were performed on the basis of a mathematical model of the nasal cavity of a healthy person using the software package ANSYS (FLUENT 12). The results of these calculations were compared with each other and with the results obtained from a quasi-stationary formulation of the problem for several values of the flow rate. A comparison of the quasi-stationary results with available calculated and experimental data (in vivo and in vitro) was performed previously, and good agreement was obtained. It is shown that the real distribution of deposition efficiency as a function of particle size can be obtained via a certain combination of the results of quasi-stationary calculations, without the use of laborious and time-consuming non-stationary calculations.
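
    One plausible form of such a combination is a flow-weighted time average of quasi-stationary results over the breathing cycle. The sketch below illustrates that idea only; the deposition-efficiency curve and the flow waveform are illustrative stand-ins, not the paper's model or data:

```python
import math

def cycle_averaged_deposition(dep_at_flow, flow_waveform, times):
    """Approximate deposition efficiency over a (possibly asymmetric)
    breathing cycle by time-averaging quasi-stationary results,
    weighting each sampled instant by its inhaled volume (flow * dt).
    dep_at_flow maps a flow rate to a quasi-stationary efficiency."""
    num = den = 0.0
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        q = flow_waveform(times[k])
        if q <= 0.0:  # only the inspiration phase deposits inhaled particles
            continue
        num += dep_at_flow(q) * q * dt
        den += q * dt
    return num / den

# illustrative stand-ins: efficiency rising with flow; sinusoidal inhale
dep = lambda q: 1.0 - math.exp(-0.05 * q)          # efficiency vs flow
flow = lambda t: 30.0 * math.sin(math.pi * t / 2)  # L/min over a 2 s inhale
ts = [i * 0.01 for i in range(201)]
print(round(cycle_averaged_deposition(dep, flow, ts), 4))
```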

  6. Modeling of Pressure Drop During Refrigerant Condensation in Pipe Minichannels

    NASA Astrophysics Data System (ADS)

    Sikora, Małgorzata; Bohdal, Tadeusz

    2017-12-01

    Investigation of refrigerant condensation in pipe minichannels is a challenging and complicated issue. Because of the multitude of influencing factors, mathematical and computer modeling is very important: it allows calculations to be performed for many different refrigerants under different flow conditions. The large number of experimental results published in the literature allows experimental verification of the correctness of such models. In this work a mathematical model is presented for the calculation of flow resistance during condensation of refrigerants in pipe minichannels. The model was developed on the basis of conservation equations. The calculation results were verified against the authors' own experimental results.

  7. Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.

    PubMed

    Hung, Chung-Wen; Hsu, Wen-Ting

    2018-05-23

    Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. The Advanced Encryption Standard (AES) is a block cipher algorithm commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirement for different payload lengths and AES encryption types are analyzed. These types include software-based AES-CB, hardware-based AES-ECB (Electronic Codebook Mode), and hardware-based AES-CCM (Counter with CBC-MAC Mode). The calculation requirement and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that the hardware-based AES performs better than the software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be a better choice if the IoT device must consider security, encryption calculation requirements, and low power consumption at the same time. However, if the IoT device is pursuing lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
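
    The payload-length trade-off can be made concrete by counting AES core invocations per message. The CCM accounting below is a simplified sketch (it ignores associated data and nonce formatting) and is an assumption for illustration, not the paper's measurement model:

```python
import math

BLOCK = 16  # AES block size in bytes

def ecb_blocks(payload_len):
    """AES core invocations for ECB: one per 16-byte block (padded),
    so a payload of 16 bytes or less costs a single invocation."""
    return max(1, math.ceil(payload_len / BLOCK))

def ccm_blocks(payload_len):
    """Rough AES core invocation count for CCM: a CBC-MAC pass over a
    header block plus the payload, and a CTR pass over the payload plus
    one block to encrypt the MAC tag. Simplified; real CCM also
    processes associated data."""
    mac_pass = 1 + math.ceil(payload_len / BLOCK)  # B0 + payload blocks
    ctr_pass = math.ceil(payload_len / BLOCK) + 1  # keystream + MAC tag
    return mac_pass + ctr_pass

for n in (8, 16, 32, 64):
    print(n, ecb_blocks(n), ccm_blocks(n))
```

Under this rough accounting a sub-16-byte payload costs one block-cipher call in ECB but several in CCM, which is consistent with the abstract's suggestion that ECB can be considered for very short payloads when power is the priority (at the cost of CCM's authentication and semantic security).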

  8. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  9. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    Aimed at improving the accuracy of calculating the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found, and we recognized that they result from the dividing and summing used when calculating the integral. Based on the results of former research, we propose a further subdividing-and-summing method. For β particles with energies spanning the entire spectrum, the energy data are set to integral multiples of keV and the subdivision number is varied from 1 to 30, yielding collections of energy deposition calculation errors. Searching for the minimum error in these collections gives the corresponding energy and subdivision number pairs, as well as the optimal subdivision number. The method was applied to energy deposition calculations in four solid materials: Al, Si, Ni and Au. The results show that the calculation error is reduced by one order of magnitude with the improved algorithm.
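
    The search itself can be sketched generically: evaluate the integral at every subdivision number from 1 to 30 and keep the one whose result is closest to a trusted reference. The integrand below is an illustrative stand-in for the energy-deposition profile, not the paper's actual model:

```python
import math

def midpoint_integral(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b]
    using n equal subdivisions."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def optimal_subdivision(f, a, b, reference, n_max=30):
    """Search n = 1..n_max for the subdivision count whose result is
    closest to a trusted reference value, mirroring the
    error-minimizing search described in the abstract."""
    errors = {n: abs(midpoint_integral(f, a, b, n) - reference)
              for n in range(1, n_max + 1)}
    return min(errors, key=errors.get), errors

# illustrative integrand standing in for an energy-deposition profile;
# the exact integral of x*exp(-x) over [0, 5] is 1 - 6*exp(-5)
f = lambda x: x * math.exp(-x)
best_n, errs = optimal_subdivision(f, 0.0, 5.0, 1.0 - 6.0 * math.exp(-5.0))
print(best_n, errs[best_n])
```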

  10. TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, J; Park, J; Kim, L

    2016-06-15

    Purpose: The newly published medical physics practice guideline (MPPG 5.a.) sets the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to the tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validation. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable from the difference curve. Subtle discrepancies revealed the limitations of the measurement, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (>98% pass rate on a 3%/3 mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality of the beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.

  11. Internal twisting motion dependent conductance of an aperiodic DNA molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiliyanti, Vandan, E-mail: vandan.wiliyanti@ui.ac.id; Yudiarsah, Efta

    The influence of the internal twisting motion of base pairs on the conductance of an aperiodic DNA molecule has been studied. A double-stranded DNA molecule with the sequence GCTAGTACGTGACGTAGCTAGGATATGCCTGA on one chain and its complement on the other chain is used. The molecule is modeled with a tight-binding Hamiltonian in which the effect of the twisting motion on the base onsite energies and the base-to-base electron hopping constants is taken into account. The semi-empirical Slater-Koster theory is employed to bring the twisting-motion effect into the hopping constants. In addition to hopping from one base to another, an electron can also hop from a base to the sugar-phosphate backbone and vice versa. The current flowing through the DNA molecule is calculated with the Landauer–Büttiker formula from the transmission probability, which is calculated using the transfer matrix technique and the scattering matrix method simultaneously. The differential conductance is then calculated from the I-V curve. The calculation results show that in some voltage regions the conductance increases as the twisting frequency increases, while in other regions it decreases with frequency.

  12. Project Echo: System Calculations

    NASA Technical Reports Server (NTRS)

    Ruthroff, Clyde L.; Jakes, William C., Jr.

    1961-01-01

    The primary experimental objective of Project Echo was the transmission of radio communications between points on the earth by reflection from the balloon satellite. This paper describes system calculations made in preparation for the experiment and their adaptation to the problem of interpreting the results. The calculations include path loss computations, expected audio signal-to-noise ratios, and received signal strength based on orbital parameters.

  13. Energy Efficiency of Induction Motors Running Off Frequency Converters with Pulse-Width Voltage Modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shvetsov, N. K., E-mail: elmash@em.ispu.ru

    2016-11-15

    The results of calculations of the increase in losses in an induction motor with frequency control and different forms of the supply voltage are presented. The calculations were performed by an analytic method based on harmonic analysis of the supply voltage as well as numerical calculation of the electromagnetic processes by the finite-element method.

  14. Lead optimization mapper: automating free energy calculations for lead optimization.

    PubMed

    Liu, Shuai; Wu, Yujie; Lin, Teng; Abel, Robert; Redmann, Jonathan P; Summa, Christopher M; Jaber, Vivian R; Lim, Nathan M; Mobley, David L

    2013-09-01

    Alchemical free energy calculations hold increasing promise as an aid to drug discovery efforts. However, applications of these techniques in discovery projects have been relatively few, partly because of the difficulty of planning and setting up calculations. Here, we introduce lead optimization mapper, LOMAP, an automated algorithm to plan efficient relative free energy calculations between potential ligands within a substantial library of perhaps hundreds of compounds. In this approach, ligands are first grouped by structural similarity primarily based on the size of a (loosely defined) maximal common substructure, and then calculations are planned within and between sets of structurally related compounds. An emphasis is placed on ensuring that relative free energies can be obtained between any pair of compounds without combining the results of too many different relative free energy calculations (to avoid accumulation of error) and by providing some redundancy to allow for the possibility of error and consistency checking and provide some insight into when results can be expected to be unreliable. The algorithm is discussed in detail and a Python implementation, based on both Schrödinger's and OpenEye's APIs, has been made available freely under the BSD license.
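
    The planning step can be caricatured in a few lines of graph code: keep high-similarity pairs as edges, then patch in the most similar cross-component pairs until every compound is reachable. This is a simplified stand-in for LOMAP's planning, with a shared-character score standing in for a real maximal-common-substructure similarity:

```python
from itertools import combinations

def plan_edges(compounds, similarity, threshold=0.5):
    """Plan relative free energy calculations as graph edges: keep
    pairs above a similarity threshold, then greedily add the most
    similar cross-component pair until the graph is connected, so every
    pair of compounds is linked by a chain of calculations."""
    edges = {frozenset(p) for p in combinations(compounds, 2)
             if similarity(*p) >= threshold}

    def components(es):
        comp = {c: {c} for c in compounds}
        for a, b in (tuple(e) for e in es):
            merged = comp[a] | comp[b]
            for c in merged:
                comp[c] = merged
        return [set(s) for s in {frozenset(s) for s in comp.values()}]

    while len(components(edges)) > 1:
        comps = components(edges)
        cross = [p for p in combinations(compounds, 2)
                 if frozenset(p) not in edges
                 and not any(set(p) <= c for c in comps)]
        edges.add(frozenset(max(cross, key=lambda p: similarity(*p))))
    return edges

# toy similarity: shared same-position characters, a crude placeholder
# for an MCS-based score; the ligand names are hypothetical
sim = lambda a, b: sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))
ligands = ["benzene", "benzofuran", "phenol", "toluene"]
print(sorted(tuple(sorted(e)) for e in plan_edges(ligands, sim)))
```

A real planner would also add redundant cycle-closing edges for the error and consistency checking the abstract describes; this sketch stops at connectivity to stay short.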

  15. Water dissociating on rigid Ni(100): A quantum dynamics study on a full-dimensional potential energy surface

    NASA Astrophysics Data System (ADS)

    Liu, Tianhui; Chen, Jun; Zhang, Zhaojun; Shen, Xiangjian; Fu, Bina; Zhang, Dong H.

    2018-04-01

    We constructed a nine-dimensional (9D) potential energy surface (PES) for the dissociative chemisorption of H2O on a rigid Ni(100) surface using the neural network method based on roughly 110 000 energies obtained from extensive density functional theory (DFT) calculations. The resulting PES is accurate and smooth, based on the small fitting errors and the good agreement between the fitted PES and the direct DFT calculations. Time dependent wave packet calculations also showed that the PES is very well converged with respect to the fitting procedure. The dissociation probabilities of H2O initially in the ground rovibrational state from 9D quantum dynamics calculations are quite different from the site-specific results from the seven-dimensional (7D) calculations, indicating the importance of full-dimensional quantum dynamics to quantitatively characterize this gas-surface reaction. It is found that the validity of the site-averaging approximation with exact potential holds well, where the site-averaging dissociation probability over 15 fixed impact sites obtained from 7D quantum dynamics calculations can accurately approximate the 9D dissociation probability for H2O in the ground rovibrational state.

  16. Quantum chemical determination of Young's modulus of lignin. Calculations on a beta-O-4' model compound.

    PubMed

    Elder, Thomas

    2007-11-01

    The calculation of Young's modulus of lignin has been examined by subjecting a dimeric model compound to strain, coupled with the determination of energy and stress. The computational results, derived from quantum chemical calculations, are in agreement with available experimental results. Changes in geometry indicate that modifications in dihedral angles occur in response to linear strain. At larger levels of strain, bond rupture is evidenced by abrupt changes in energy, structure, and charge. Based on the current calculations, the bond scission may be occurring through a homolytic reaction between aliphatic carbon atoms. These results may have implications in the reactivity of lignin especially when subjected to processing methods that place large mechanical forces on the structure.
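
    Extracting a modulus from energy-strain data reduces, in the harmonic (small-strain) regime, to a second derivative: E = (1/V) d²U/dε². A minimal sketch with synthetic data; the 5 GPa value and the volume are illustrative assumptions, not the paper's results:

```python
def youngs_modulus(u_minus, u0, u_plus, h, volume):
    """Young's modulus from three total energies U(-h), U(0), U(+h)
    via a central second difference, E = (1/V) * d2U/de2. Units must
    be consistent: energies in J and volume in m^3 give Pa."""
    d2u_de2 = (u_plus - 2.0 * u0 + u_minus) / h ** 2
    return d2u_de2 / volume

# synthetic harmonic data: U = 0.5 * E * V * strain^2 with E = 5 GPa
E_true, V, h = 5.0e9, 1.0e-27, 0.01
U = lambda e: 0.5 * E_true * V * e ** 2
print(youngs_modulus(U(-h), U(0.0), U(h), h, V) / 1e9)  # in GPa
```

Beyond the harmonic regime the abstract's observations apply: the energy ceases to be quadratic, and abrupt changes in energy, structure, and charge signal bond rupture rather than elastic response.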

  17. An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines

    NASA Technical Reports Server (NTRS)

    Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng

    2014-01-01

    We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, and full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time-consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for IR lines. With this improved model, which takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.

  18. Calculation of Lung Cancer Volume of Target Based on Thorax Computed Tomography Images using Active Contour Segmentation Method for Treatment Planning System

    NASA Astrophysics Data System (ADS)

    Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur

    2017-06-01

    In this research, the lung cancer target volume was calculated from computed tomography (CT) thorax images. The target volume calculation was performed for the treatment planning system in radiotherapy. The calculation comprises the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was calculated by adding the target area on each slice and then multiplying the result by the slice thickness. The areas were calculated with digital image processing techniques using the active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The volumes produced for each of the targets are 577.2 cm3 for GTV, 769.9 cm3 for CTV, 877.8 cm3 for PTV, 618.7 cm3 for OAR 1, 1,162 cm3 for the right OAR 2, and 1,597 cm3 for the left OAR 2. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume from CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
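
    The volume rule stated above (sum of per-slice contoured areas times slice thickness) is directly computable; the areas and slice spacing below are hypothetical, for illustration only:

```python
def target_volume(slice_areas_cm2, slice_thickness_cm):
    """Target volume as described in the abstract: sum the contoured
    area on each CT slice, then multiply by the slice thickness
    (cm^2 * cm = cm^3)."""
    return sum(slice_areas_cm2) * slice_thickness_cm

# hypothetical contour areas over five consecutive slices, 0.5 cm apart
print(target_volume([12.1, 15.8, 17.3, 14.9, 9.4], 0.5))  # cm^3
```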

  19. Synthesis, spectroscopic characterization, DFT studies and antifungal activity of (E)-4-amino-5-[N'-(2-nitro-benzylidene)-hydrazino]-2,4-dihydro-[1,2,4]triazole-3-thione

    NASA Astrophysics Data System (ADS)

    Joshi, Rachana; Pandey, Nidhi; Yadav, Swatantra Kumar; Tilak, Ragini; Mishra, Hirdyesh; Pokharia, Sandeep

    2018-07-01

    The hydrazino Schiff base (E)-4-amino-5-[N'-(2-nitro-benzylidene)-hydrazino]-2,4-dihydro-[1,2,4]triazole-3-thione was synthesized and structurally characterized by elemental analysis, FT-IR, Raman, 1H and 13C-NMR and UV-Vis studies. Density functional theory (DFT) based electronic structure calculations were performed at the B3LYP/6-311++G(d,p) level of theory. A comparative analysis of the calculated vibrational frequencies with the experimental ones was carried out and significant bands were assigned. The results indicate a good correlation (R2 = 0.9974) between experimental and theoretical IR frequencies. The experimental 1H and 13C-NMR resonance signals were also compared to the calculated values. Theoretical UV-Vis spectral studies were carried out using the time-dependent DFT method in the gas phase and the IEFPCM model in the solvent-field calculation. The geometrical parameters were calculated in the gas phase. Atomic charges at selected atoms were calculated by the Mulliken population analysis (MPA), Hirshfeld population analysis (HPA) and natural population analysis (NPA) schemes. The molecular electrostatic potential (MEP) map was calculated to assign reactive sites on the surface of the molecule. Conceptual-DFT-based global and local reactivity descriptors were calculated to obtain insight into the reactivity behaviour. Frontier molecular orbital analysis was carried out to study the charge transfer within the molecule. A detailed natural bond orbital (NBO) analysis was performed to obtain insight into the intramolecular conjugative electronic interactions. The title compound was screened for in vitro antifungal activity against four fungal strains, and the results obtained are explained through in silico molecular docking studies.

  20. The effects of variations in parameters and algorithm choices on calculated radiomics feature values: initial investigations and comparisons to feature variability across CT image acquisition conditions

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Wahi-Anwar, Muhammad; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael

    2018-02-01

    Translation of radiomics into clinical practice requires confidence in its interpretations, which may be obtained by understanding and overcoming the limitations of current radiomic approaches. Currently there is a lack of standardization in radiomic feature extraction. In this study we examined several factors that are potential sources of inconsistency in characterizing lung nodules: (1) different choices of parameters and algorithms in feature calculation, (2) two CT image dose levels, and (3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of variation of these factors on the entropy textural features of lung nodules. CT images of 19 lung nodules from our lung cancer screening program were identified by a CAD tool, and contours were provided. The radiomics features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features in addition to 2 intensity-based features. A robustness index was calculated across the different image acquisition parameters to quantify the reproducibility of the features. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising of images slightly improved the robustness of some entropy features at WFBP. Iterative reconstruction improved robustness in fewer cases and caused more variation in entropy feature values and in their robustness. Across different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. The results indicate the need for harmonization of feature calculations and identification of optimum parameters and algorithms in a radiomics study.

  1. Calculation of Hugoniot properties for shocked nitromethane based on the improved Tsien's EOS

    NASA Astrophysics Data System (ADS)

    Zhao, Bo; Cui, Ji-Ping; Fan, Jing

    2010-06-01

    We have calculated the Hugoniot properties of shocked nitromethane based on the improved Tsien's equation of state (EOS), which was optimized using “exact” numerical molecular dynamics data at high temperatures and pressures. Comparison of the calculated results of the improved Tsien's EOS with the existing experimental data and with direct simulations shows that the improved Tsien's EOS behaves very well in many respects. Because of its simple analytical form, the improved Tsien's EOS can prospectively be used to study condensed explosive detonation coupled with chemical reaction.

  2. Effect of costing methods on unit cost of hospital medical services.

    PubMed

    Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya

    2007-04-01

    To explore the variance in unit costs of hospital medical services due to the different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of unit costs of medical services as a base case. The unit costs were then re-calculated using various alternative methods, and the variations between these results and the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices over their useful life) was 13.02% lower than that calculated by the economic-based approach (combining depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% resulted in a 4.76% increase in the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Under alternative criteria for indirect cost allocation, the unit cost of medical services changed by -6.99% to +4.05%. We also explored the effect on the unit cost of medical services in one department: across the various costing methods, including departmental allocation methods, unit costs ranged between -85% and +32% relative to the base case. Based on this variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and more complicated ones. Among the departmental allocation methods, micro-costing was the most suitable method at the time of the study. These different costing methods should be standardized and developed into guidelines, since they could affect implementation of the national health insurance scheme and health financing management.
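    The contrast between the accounting-based and economic-based capital costing approaches described above can be sketched as follows; the purchase price, useful life, and discount rates are illustrative values, not figures from the study:

```python
def accounting_annual_cost(price, useful_life):
    """Accounting-based approach: straight-line averaging of the
    purchase price over the useful life."""
    return price / useful_life

def economic_annual_cost(price, useful_life, rate):
    """Economic-based approach: annualization (depreciation plus
    interest on the undepreciated portion), i.e. the equivalent
    annual cost at discount rate `rate`."""
    annuity_factor = (1 - (1 + rate) ** -useful_life) / rate
    return price / annuity_factor

price, life = 1_000_000, 10  # illustrative capital item
acc = accounting_annual_cost(price, life)
eco3 = economic_annual_cost(price, life, 0.03)
eco6 = economic_annual_cost(price, life, 0.06)
print(f"accounting: {acc:,.0f}")
print(f"economic @3%: {eco3:,.0f}  @6%: {eco6:,.0f}")
print(f"increase 3% -> 6%: {100 * (eco6 - eco3) / eco3:.2f}%")
```

As in the study, the economic-based figure exceeds the accounting-based one (the interest component), and raising the discount rate raises the annualized cost.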

  3. Calculation of nuclear spin-spin coupling constants using frozen density embedding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Götz, Andreas W., E-mail: agoetz@sdsc.edu; Autschbach, Jochen; Visscher, Lucas, E-mail: visscher@chem.vu.nl

    2014-03-14

    We present a method for a subsystem-based calculation of indirect nuclear spin-spin coupling tensors within the framework of current-spin-density-functional theory. Our approach is based on the frozen-density embedding scheme within density-functional theory and extends a previously reported subsystem-based approach for the calculation of nuclear magnetic resonance shielding tensors to magnetic fields which couple not only to orbital but also to spin degrees of freedom. This leads to a formulation in which the electron density, the induced paramagnetic current, and the induced spin-magnetization density are calculated separately for the individual subsystems. This is particularly useful for the inclusion of environmental effects in the calculation of nuclear spin-spin coupling constants. Neglecting the induced paramagnetic current and spin-magnetization density in the environment due to the magnetic moments of the coupled nuclei leads to a very efficient method in which the computationally expensive response calculation has to be performed only for the subsystem of interest. We show that this approach leads to very good results for the calculation of solvent-induced shifts of nuclear spin-spin coupling constants in hydrogen-bonded systems. Also for systems with stronger interactions, frozen-density embedding performs remarkably well, given the approximate nature of currently available functionals for the non-additive kinetic energy. As an example we show results for methylmercury halides, which exhibit an exceptionally large shift of the one-bond coupling constants between ¹⁹⁹Hg and ¹³C upon coordination of dimethylsulfoxide solvent molecules.

  4. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. The approach is based on a known current wave-pair model representing the lightning current wave, either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law) compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity only; combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.

  5. Beta decay rates of neutron-rich nuclei

    NASA Astrophysics Data System (ADS)

    Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel

    2015-10-01

    Heavy-element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on nuclear structure models for input. Of all the properties, beta-decay half-lives are among the most important due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available, based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both allowed and first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.

  6. A new self-shielding method based on a detailed cross-section representation in the resolved energy domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saygin, H.; Hebert, A.

    The calculation of a dilution cross section σ̄_e is the most important step in self-shielding formalisms based on the equivalence principle. If a dilution cross section that accurately characterizes the physical situation can be calculated, it can then be used for calculating the effective resonance integrals and obtaining accurate self-shielded cross sections. A new technique for the calculation of equivalent cross sections, based on the formalism of Riemann integration in the resolved energy domain, is proposed. This new method is compared to the generalized Stamm'ler method, which is also based on an equivalence principle, for a two-region cylindrical cell and for a small pressurized water reactor assembly in two dimensions. The accuracy of each computing approach is assessed against reference results from a fine-group slowing-down code named CESCOL. It is shown that the proposed method performs slightly better than the generalized Stamm'ler approach.

  7. Cross calibration of GF-1 satellite wide field of view sensor with Landsat 8 OLI and HJ-1A HSI

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Hailiang; Pan, Zhiqiang; Gu, Xingfa; Han, Qijin; Zhang, Xuewen

    2018-01-01

    This paper focuses on cross calibrating the GaoFen (GF-1) satellite wide field of view (WFV) sensor using the Landsat 8 Operational Land Imager (OLI) and HuanJing-1A (HJ-1A) hyperspectral imager (HSI) as reference sensors. Two methods are proposed to calculate the spectral band adjustment factor (SBAF): one based on the HJ-1A HSI image and the other based on ground-measured reflectance. However, the HSI image and the ground reflectance were measured on dates different from the WFV and OLI overpasses. Three groups of regions of interest (ROIs) were chosen for cross calibration based on different selection criteria. Cross-calibration gains with nonzero and zero offsets were both calculated. The results confirmed that the gains with zero offset were better, as they were more consistent across the different groups of ROIs and SBAF calculation methods. The uncertainty of this cross calibration was analyzed, and the influence of the SBAF was calculated based on different HSI images and ground reflectance spectra. The results showed that the uncertainty of the SBAF was <3% for bands 1 to 3. Two other large sources of uncertainty in this cross calibration were atmospheric variation and low ground reflectance.
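    An SBAF of the kind described is commonly computed as the ratio of band-averaged reflectances obtained by weighting a reference spectrum with each sensor's relative spectral response; the Gaussian response curves and linear spectrum below are purely illustrative, not the actual WFV/OLI responses:

```python
import numpy as np

wl = np.arange(400.0, 901.0, 1.0)   # wavelength grid, nm (uniform spacing)
rho = 0.1 + 0.0005 * (wl - 400.0)   # illustrative ground reflectance spectrum

def band_reflectance(spectrum, srf):
    """Band-averaged reflectance: SRF-weighted mean of the spectrum
    (on a uniform wavelength grid the grid spacing cancels)."""
    return np.sum(spectrum * srf) / np.sum(srf)

def gaussian_srf(center, fwhm):
    """Illustrative Gaussian relative spectral response."""
    sigma = fwhm / 2.3548
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# Hypothetical red bands of the target (WFV) and reference (OLI) sensors
srf_wfv = gaussian_srf(660.0, 60.0)
srf_oli = gaussian_srf(655.0, 40.0)

sbaf = band_reflectance(rho, srf_wfv) / band_reflectance(rho, srf_oli)
print(f"SBAF = {sbaf:.4f}")  # multiply reference band reflectance by this
```

The two methods in the paper differ only in where `rho` comes from: the HJ-1A HSI image in one case, field-measured spectra in the other.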

  8. Preliminary calculations related to the accident at Three Mile Island

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchner, W.L.; Stevenson, M.G.

    This report discusses preliminary studies of the Three Mile Island Unit 2 (TMI-2) accident based on available methods and data. The work reported includes: (1) a TRAC base case calculation out to 3 hours into the accident sequence; (2) TRAC parametric calculations, identical to the base case except for a single hypothetical change in the system conditions, such as assuming the high pressure injection (HPI) system operated as designed rather than as in the accident; (3) estimates of fuel rod cladding failure, cladding oxidation due to zirconium metal-steam reactions, hydrogen release due to cladding oxidation, cladding ballooning, cladding embrittlement, and subsequent cladding breakup, based on TRAC-calculated cladding temperatures and system pressures. Some conclusions of this work are: the TRAC base case accident calculation agrees very well with known system conditions to nearly 3 hours into the accident; the parametric calculations indicate that loss-of-core cooling was most influenced by the throttling of HPI flows, given the accident initiating events and the pressurizer electromagnetic-operated valve (EMOV) failing to close as designed; failure of nearly all the rods and gaseous fission product release from the failed rods is predicted to have occurred at about 2 hours and 30 minutes; cladding oxidation (zirconium-steam reaction) up to 3 hours resulted in the production of approximately 40 kilograms of hydrogen.

  9. A Method of Time-Intensity Curve Calculation for Vascular Perfusion of Uterine Fibroids Based on Subtraction Imaging with Motion Correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming

    2016-12-01

    The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS video was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between frames using a warp technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined and taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results obtained with the proposed method were larger than those obtained with the original method. PDOVP extraction improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and the calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
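    The final averaging step, taking the mean gray level over the extracted perfusion mask in every frame, can be sketched as follows (the frames and mask are synthetic stand-ins for decoded, motion-corrected CEUS images):

```python
import numpy as np

def time_intensity_curve(frames, mask):
    """Mean gray level of the masked (perfusion) pixels in each frame.

    frames: array (n_frames, H, W); mask: boolean array (H, W) giving
    the positional distribution of vascular perfusion (PDOVP).
    """
    return frames[:, mask].mean(axis=1)

# Synthetic example: 30 frames in which intensity inside the masked
# region follows a wash-in curve while the background stays flat
rng = np.random.default_rng(1)
h, w, n = 64, 64, 30
mask = np.zeros((h, w), bool)
mask[20:40, 20:40] = True
t = np.arange(n, dtype=float)
frames = rng.normal(20.0, 1.0, (n, h, w))
frames[:, mask] += (100.0 * (1 - np.exp(-t / 8.0)))[:, None]  # wash-in

tic = time_intensity_curve(frames, mask)
print(tic[0], tic[-1])  # intensity rises as contrast washes in
```

Motion correction matters precisely because `mask` is assumed fixed across frames; without registration, the averaged pixels would drift off the perfused region.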

  10. SU-E-T-632: Preliminary Study On Treating Nose Skin Using Energy and Intensity Modulated Electron Beams with Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, L; Eldib, A; Li, J

    Purpose: Uneven nose surfaces, air cavities underneath, and the use of bolus introduce complexity and dose uncertainty when a single electron energy beam is used to plan treatments of nose skin with a pencil-beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as an input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrated good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also compares 2D dose distributions between a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan showed higher irradiation of normal tissue underneath the nose skin, while the MERT plan resulted in improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the necessity of bolus, which often produces dose delivery uncertainty due to the air gaps that may exist between the bolus and the skin.

  11. Models for the Configuration and Integrity of Partially Oxidized Fuel Rod Cladding at High Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siefken, L.J.

    1999-01-01

    Models were designed to resolve deficiencies in the SCDAP/RELAP5/MOD3.2 calculations of the configuration and integrity of hot, partially oxidized cladding. These models are expected to improve the calculations of several important aspects of fuel rod behavior. First, an improved mapping of the configuration of melted metallic cladding retained by an oxide layer was established from a compilation of PIE results from severe fuel damage tests. The improved mapping accounts for the relocation of melted cladding in the circumferential direction. Then, rules based on PIE results were established for calculating the effect of cladding that has relocated from above on the oxidation and integrity of the lower intact cladding upon which it solidifies. Next, three different methods were identified for calculating the extent of dissolution of the oxidic part of the cladding due to its contact with the metallic part. The extent of dissolution affects the stress and thus the integrity of the oxidic part of the cladding. Then, an empirical equation was presented for calculating the stress in the oxidic part of the cladding and evaluating its integrity based on this calculated stress. This empirical equation replaces the current criterion for loss of integrity, which is based on temperature and extent of oxidation. Finally, a new rule based on theoretical and experimental results was established for identifying the regions of a fuel rod with oxidation of both the inside and outside surfaces of the cladding. The implementation of these models is expected to eliminate the tendency of the SCDAP/RELAP5 code to overpredict the extent of oxidation of the upper part of fuel rods and to underpredict the extent of oxidation of the lower part of fuel rods and of the part with a high concentration of relocated material. This report is a revision and reissue of the report entitled, Improvements in Modeling of Cladding Oxidation and Meltdown.

  12. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services

    PubMed Central

    Rajabi, A; Dabiri, A

    2012-01-01

    Background: Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services under the tariff method is not properly calculated when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171

  13. Computational Model of D-Region Ion Production Caused by Energetic Electron Precipitations Based on General Monte Carlo Transport Calculations

    NASA Astrophysics Data System (ADS)

    Kouznetsov, A.; Cully, C. M.

    2017-12-01

    During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play an important role in its physical and chemical processes, including subionospheric VLF signal propagation. Electron precipitation affects D-region ionization, which is estimated from ionization rates derived from the deposited energy. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing computation of ionization rate altitude profiles between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library provides an end-user interface to the model.
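    The conversion from deposited energy to ion production that underlies such ionization-rate models is typically a division by the mean energy per ion pair in air (about 35 eV, a standard value assumed here rather than quoted from the abstract):

```python
W_AIR_EV = 35.0  # mean energy to create one ion pair in air, eV (standard value)

def ion_production_rate(energy_deposition_ev_cm3_s):
    """Ion pairs produced per cm^3 per second from the local energy
    deposition rate (eV cm^-3 s^-1) of precipitating electrons."""
    return energy_deposition_ev_cm3_s / W_AIR_EV

# Illustrative: an energy deposition rate of 3.5e6 eV cm^-3 s^-1 at some
# altitude corresponds to 1e5 ion pairs cm^-3 s^-1
print(ion_production_rate(3.5e6))
```

In the full model, the altitude profile of the energy deposition itself comes from the pre-calculated Monte Carlo yield functions folded with the incident electron distribution.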

  14. An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet

    NASA Astrophysics Data System (ADS)

    Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon

    2015-08-01

    The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally- and spatially-resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issue of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS data is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
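    An activity-based fuel and CO2 estimate of the kind described combines an assumed engine load, the interval duration, and a specific fuel consumption; the propeller cube law and the constants below are common modeling assumptions, not values taken from the paper:

```python
def fuel_and_co2(speed_kn, design_speed_kn, mcr_kw, hours,
                 sfoc_g_per_kwh=200.0, co2_per_fuel=3.17):
    """Fuel (tonnes) and CO2 (tonnes) for one AIS reporting interval.

    Engine load is approximated by the propeller cube law,
    load = (v / v_design)^3, capped at 1.0 -- a common assumption for
    transit; towing trawl or dredge gear would instead require a
    higher assumed load at low speed.
    """
    load = min((speed_kn / design_speed_kn) ** 3, 1.0)
    fuel_t = mcr_kw * load * sfoc_g_per_kwh * hours / 1e6  # g -> t
    return fuel_t, fuel_t * co2_per_fuel

# One hour of transit at 8 kn for a hypothetical trawler with a 750 kW engine
fuel, co2 = fuel_and_co2(speed_kn=8.0, design_speed_kn=11.0,
                         mcr_kw=750.0, hours=1.0)
print(f"fuel: {fuel:.3f} t, CO2: {co2:.3f} t")
```

Summing this over every AIS interval of every vessel, with speed taken directly from the AIS messages, yields the temporally and spatially resolved inventory the paper describes.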

  15. Full-wave and ray-based modeling of cross-beam energy transfer between laser beams with distributed phase plates and polarization smoothing

    DOE PAGES

    Follett, R. K.; Edgell, D. H.; Froula, D. H.; ...

    2017-10-20

    Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).

  17. Equivalent Circuit Parameter Calculation of Interior Permanent Magnet Motor Involving Iron Loss Resistance Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yamazaki, Katsumi

    In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors, including the iron loss resistance, using the finite element method. First, a finite element analysis considering harmonics and magnetic saturation is carried out to obtain the time variation of the magnetic fields in the stator and rotor core. Second, the iron losses of the stator and rotor are calculated from the results of the finite element analysis, taking into account harmonic eddy current losses and the minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters, i.e., the d-q axis inductances and the iron loss resistance, as functions of the operating condition of the motor. The proposed method is applied to an interior permanent magnet motor, and the characteristics calculated from the resulting equivalent circuit are compared with experimental results to verify its accuracy.

  18. 77 FR 61738 - Circular Welded Carbon Steel Pipes and Tubes From Thailand: Final Results of Antidumping Duty...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ... calculation program which affects how we calculate the freight revenue cap. See Decision Memorandum at Comment... (or customer-specific) ad valorem assessment rates based on the ratio of the total amount of the...

  19. Interactive data based on Apriori - AHP - C4.5 results assessment method

    NASA Astrophysics Data System (ADS)

    Zhao, Quan; Zhang, Li

    2017-05-01

    The AHP method of weight calculation introduces the subjective judgments of experts through its assumed steps, which gives the objective results a degree of uncertainty: the weights of the classroom interaction data attributes differ little in proportion, and the achievement of the whole class tends to converge. To address this, the concept of Apriori-AHP is introduced. C4.5 is used to calculate the weight of each attribute column, and the Apriori-AHP algorithm then calculates the attribute weights, taking the importance of the attributes in the performance indicator table into overall consideration. Weighting the index table by the achievement of gifted students lets class performance trends fluctuate rather than converge, yielding more representative, "standard" results for teacher reference.
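    The AHP weighting step referred to above is conventionally the principal eigenvector of a pairwise comparison matrix; this standard computation (not the paper's specific Apriori-AHP variant) can be sketched as:

```python
import numpy as np

def ahp_weights(pairwise):
    """Attribute weights from an AHP pairwise comparison matrix:
    the principal eigenvector, normalized to sum to 1."""
    pairwise = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Three interaction attributes compared pairwise (illustrative judgments:
# attribute 1 is moderately to strongly preferred over the others)
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(A)
print(np.round(w, 3))  # weights sum to 1; first attribute dominates
```

The Apriori side of the hybrid would then adjust these expert-derived weights with association-rule support mined from the interaction data.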

  20. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low-level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1-MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).

  1. Configurations of base-pair complexes in solutions. [nucleotide chemistry

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Nir, S.; Rein, R.; Macelroy, R.

    1978-01-01

    A theoretical search for the most stable conformations (i.e., stacked or hydrogen bonded) of the base pairs A-U and G-C in water, CCl4, and CHCl3 solutions is presented. The calculations of free energies indicate a significant role of the solvent in determining the conformations of the base-pair complexes. The application of the continuum method yields preferred conformations in good agreement with experiment. Results of the calculations with this method emphasize the importance of both the electrostatic interactions between the two bases in a complex, and the dipolar interaction of the complex with the entire medium. In calculations with the solvation shell method, the last term, i.e., dipolar interaction of the complex with the entire medium, was added. With this modification the prediction of the solvation shell model agrees both with the continuum model and with experiment, i.e., in water the stacked conformation of the bases is preferred.

  2. New size-expanded RNA nucleobase analogs: a detailed theoretical study.

    PubMed

    Zhang, Laibin; Zhang, Zhenwei; Ren, Tingqi; Tian, Jianxiang; Wang, Mei

    2015-04-05

    Fluorescent nucleobase analogs have attracted much attention in recent years due to their potential applications in nucleic acids research. In this work, four new size-expanded RNA base analogs were computationally designed, and their structural, electronic, and optical properties were investigated by means of DFT calculations. The results indicate that these analogs can form stable Watson-Crick base pairs with their natural counterparts and that they have smaller ionization potentials and HOMO-LUMO gaps than the natural bases. In particular, the electronic absorption and fluorescent emission spectra were calculated. The calculated excitation maxima are greatly red-shifted compared with those of their parent and natural bases, allowing the analogs to be selectively excited. In the gas phase, fluorescence would be expected to occur around 526, 489, 510, and 462 nm, respectively. The influences of aqueous solution and base pairing on the absorption spectra of these base analogs are also examined.

  3. [Study on spectrum analysis of X-ray based on rotational mass effect in special relativity].

    PubMed

    Yu, Zhi-Qiang; Xie, Quan; Xiao, Qing-Quan

    2010-04-01

    Based on special relativity, the formation mechanism of characteristic X-rays has been studied, and the influence of the rotational mass effect on the X-ray spectrum is given. A formula for the X-ray wavelength based on special relativity was derived. Error analysis was carried out systematically for the calculated characteristic wavelengths, and the behavior of the relative error was obtained. The calculated values are very close to the experimental values, and the influence of the rotational mass effect on the characteristic wavelength becomes more evident as the atomic number increases. The results of the study provide a useful reference for the spectrum analysis of characteristic X-rays.

  4. Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis

    NASA Astrophysics Data System (ADS)

    Mills, D. A.

    2017-10-01

    In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis affect the results. When calculating vaccination efficacy, for example, a simple environment offering various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. To address the inadequacy of this aggregated approach, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to reflect vaccination capabilities more accurately. Recent studies suggest that the acellular pertussis vaccine, despite preventing major symptoms, does not prevent colonization and transmission of the Bordetella pertussis bacterium. After vaccination, a treated individual becomes a potential asymptomatic carrier of the pertussis bacterium rather than an immune individual. Agent-based modeling enables a measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider methods that are widely used for calculating vaccination efficacy and effectiveness.
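    The bias described above can be illustrated with a toy calculation using the standard cohort estimator VE = 1 − ARV/ARU (attack rate in the vaccinated over attack rate in the unvaccinated); all counts below are hypothetical, not taken from the Florida case study:

```python
def vaccine_efficacy(attack_rate_vaccinated, attack_rate_unvaccinated):
    """Conventional cohort estimate: VE = 1 - ARV / ARU."""
    return 1.0 - attack_rate_vaccinated / attack_rate_unvaccinated

# Hypothetical cohort: 1000 vaccinated, 1000 unvaccinated.
unvaccinated_cases = 100               # symptomatic, all detected
vaccinated_symptomatic = 10            # detected
vaccinated_asymptomatic_carriers = 40  # colonized but never counted as cases

# Naive estimate counts only symptomatic cases among the vaccinated.
ve_naive = vaccine_efficacy(vaccinated_symptomatic / 1000,
                            unvaccinated_cases / 1000)

# Counting carriers as infections lowers the estimate substantially.
ve_with_carriers = vaccine_efficacy(
    (vaccinated_symptomatic + vaccinated_asymptomatic_carriers) / 1000,
    unvaccinated_cases / 1000)
```

    With these made-up numbers the naive estimate is 0.90 while the carrier-inclusive estimate is 0.50, which is the direction of bias the abstract describes.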

  5. Radiation damage to DNA in DNA-protein complexes.

    PubMed

    Spotheim-Maurizot, M; Davídková, M

    2011-06-03

    The most aggressive product of water radiolysis, the hydroxyl (OH) radical, is responsible for the indirect effect of ionizing radiation on DNA in solution under aerobic conditions. According to radiolytic footprinting experiments, the resulting strand breaks and base modifications are inhomogeneously distributed along the DNA molecule, whether it is irradiated free or bound to ligands (polyamines, thiols, proteins). A Monte Carlo-based model simulating the reaction of OH radicals with macromolecules, called RADACK, allows calculation of the relative damage probability of each nucleotide of DNA irradiated alone or in complexes with proteins. RADACK calculations require the three-dimensional structure of DNA and its complexes (determined by X-ray crystallography, NMR spectroscopy, or molecular modeling). Confrontation of the calculated values with the radiolytic footprinting results, together with molecular modeling calculations, shows that: (1) the extent and location of the lesions depend strongly on the structure of DNA, which in turn is modulated by the base sequence and by the binding of proteins, and (2) the regions in contact with the protein can be protected against attack by hydroxyl radicals via masking of the binding site and scavenging of the radicals.

  6. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations

    PubMed Central

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    Aim The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC)-derived data, for an 18 MV Varian 2100 Clinac accelerator. Background High-energy radiation therapy is associated with fast and thermal photoneutrons. Adequate shielding against contaminant neutrons is recommended by the new IAEA and NCRP protocols. Materials and methods The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and shielding data were also derived from MC simulation. The two resulting bunker designs are compared and discussed. Results For the door, the thicknesses obtained from MC simulation and from the Wu-McGinley analytical method were close in both BPE and lead. For the primary barrier, MC simulation gave 440.11 mm for ordinary concrete, requiring a total concrete thickness of 1709 mm; calculating the same parameters with the recommended analytical methods, using the recommended TVL of 445 mm for concrete, gave a required thickness of 1762 mm. For the secondary barrier, a thickness of 752.05 mm was obtained. Conclusion Our results show that MC simulation and the protocol recommendations are in good agreement in calculating the contaminant radiation dose. The differences between the analytical and MC methods reveal that applying only one method in bunker design may lead to under- or overestimation of the dose and shielding. PMID:26900357

  7. Three-body approach to the K-d scattering length in particle basis

    NASA Astrophysics Data System (ADS)

    Bahaoui, A.; Fayard, C.; Mizutani, T.; Saghai, B.

    2002-11-01

    We report on the first calculation of the K⁻d scattering length A(K⁻d) based on a relativistic three-body approach in which the K̄N coupled-channel two-body input amplitudes were obtained with the chiral SU(3) constraint, but with isospin-symmetry-breaking effects taken into account. Results are compared with a recent calculation applying a similar set of two-body amplitudes but based on the fixed-center approximation, for which we find significant deviations from the three-body results. Effects of the deuteron D-wave component and of the pion-nucleon and hyperon-nucleon interactions are also evaluated.

  8. Determination of noise equivalent reflectance for a multispectral scanner: A scanner sensitivity study

    NASA Technical Reports Server (NTRS)

    Gibbons, D. E.; Richard, R. R.

    1979-01-01

    The methods used to calculate the sensitivity parameter noise equivalent reflectance of a remote-sensing scanner are explored, and the results are compared with values measured over calibrated test sites. Data were acquired on four occasions covering a span of 4 years and providing various atmospheric conditions. One of the calculated values was based on assumed atmospheric conditions, whereas two others were based on atmospheric models. Results indicate that the assumed atmospheric conditions provide useful answers adequate for many purposes. A nomograph was developed to indicate sensitivity variations due to geographic location, time of day, and season.

  9. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for the method is established in theory, and a set of polishing experimental data obtained with a rigid conformal tool is used to validate it. The calculated results quantitatively indicate the error correction capability of the TIF for errors of different spatial frequencies under the given polishing conditions. Comparative analysis shows that the optimized method is simpler in form than the previous method and achieves the same accuracy in less computation time.

  10. Computer-generated holograms and diffraction gratings in optical security applications

    NASA Astrophysics Data System (ADS)

    Stepien, Pawel J.

    2000-04-01

    The term 'computer-generated hologram' (CGH) describes a diffractive structure strictly calculated and recorded to diffract light in a desired way. The CGH surface profile is a result of wavefront calculation rather than of interference. CGHs are able to form 2D and 3D images. Optically variable devices (OVDs) composed of diffraction gratings are often used in security applications, and there are various types of optically and digitally recorded gratings in such use. Grating-based OVDs are used to record bright 2D images with a limited range of kinematic effects; these effects result from various orientations or densities of the recorded gratings. It is difficult to record high-quality OVDs of 3D objects using gratings. Stereograms and analogue rainbow holograms offer 3D imaging, but they are darker and have lower resolution than grating OVDs. CGH-based OVDs contain an unlimited range of kinematic effects and high-quality 3D images. Images recorded using CGHs are usually noisier than grating-based OVDs because of numerical inaccuracies in CGH calculation and mastering. CGH-based OVDs enable smooth integration of hidden and machine-readable features within an OVD design.

  11. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy; a prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test and then use this error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space, and angle, so that the dose deposited in a voxel by each particle is independent and identically distributed (i.i.d.). Second, a sample in the t-test is the mean dose deposited in the voxel by a sufficiently large number of source particles; by the central limit theorem, this mean of i.i.d. variables is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the true deposited dose) and the on-the-fly mean dose from MC is larger than a given error threshold; in addition, users may specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
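    The batch-mean construction in the abstract can be sketched in a few lines of stdlib Python. This is an illustrative reimplementation of the general idea, not the authors' code; it approximates the t quantile by the normal value 1.96, which is reasonable once a few tens of batches have accumulated:

```python
import math
import random
import statistics

def mc_mean_with_t_stop(sample_fn, rel_tol, z=1.96, batch=1000,
                        min_batches=10, max_batches=200):
    """Run MC in batches; each batch mean is ~normal by the CLT.

    Stop once the z * standard-error half-width on the running mean drops
    below rel_tol times the mean (a relative-error stopping criterion).
    """
    batch_means = []
    for _ in range(max_batches):
        batch_means.append(statistics.fmean(sample_fn() for _ in range(batch)))
        if len(batch_means) >= min_batches:
            m = statistics.fmean(batch_means)
            se = statistics.stdev(batch_means) / math.sqrt(len(batch_means))
            if z * se < rel_tol * m:
                break
    return statistics.fmean(batch_means), len(batch_means) * batch

# Toy "dose deposition" per particle with known mean 1.0.
random.seed(0)
dose, n_particles = mc_mean_with_t_stop(lambda: random.expovariate(1.0),
                                        rel_tol=0.01)
```

    With these settings the loop stops after a few tens of batches, once the estimated relative error of the mean falls below 1%, instead of running a fixed particle count.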

  12. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, since yield calculation typically requires many SPICE simulations, and circuit SPICE simulation accounts for the largest share of the runtime. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design variables and process variables: SPICE simulation provides a set of sample points, on which the mixture surrogate model is trained with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the failure-rate calculation. Based on the model, we develop a further accelerated algorithm to enhance the speed of yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.

  13. Calculation of phase diagrams for the FeCl2, PbCl2, and ZnCl2 binary systems by using molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Seo, Won-Gap; Matsuura, Hiroyuki; Tsukihashi, Fumitaka

    2006-04-01

    Recently, molecular dynamics (MD) simulation has been widely employed as a very useful method for calculating various physicochemical properties of molten slags and fluxes. In this study, MD simulation was applied to calculate the structural, transport, and thermodynamic properties of the FeCl2, PbCl2, and ZnCl2 systems using the Born-Mayer-Huggins pairwise potential with partial ionic charges. The interatomic potential parameters were determined by fitting the physicochemical properties of the iron, lead, and zinc chloride systems to experimentally measured results. The calculated structural, transport, and thermodynamic properties of pure FeCl2, PbCl2, and ZnCl2 showed the same tendencies as the observed results. In particular, the calculated structural properties of molten ZnCl2 and FeCl2 indicate the possible formation of polymeric network structures based on the ionic complexes ZnCl4(2-), ZnCl3(-), FeCl4(2-), and FeCl3(-), and these calculations successfully reproduced the measured results. The enthalpy, entropy, and Gibbs energy of mixing for the PbCl2-ZnCl2, FeCl2-PbCl2, and FeCl2-ZnCl2 systems were calculated based on the thermodynamic and structural parameters of each binary system obtained from MD simulation. The phase diagrams of these systems estimated using the calculated Gibbs energies of mixing reproduced the experimentally measured ones reasonably well.

  14. Study of activity based costing implementation for palm oil production using value-added and non-value-added activity consideration in PT XYZ palm oil mill

    NASA Astrophysics Data System (ADS)

    Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.

    2018-02-01

    Cost allocation in the manufacturing industry, particularly in palm oil mills, is still widely based on estimation, which leads to cost distortion. In addition, the processing times assumed by the company do not match the actual processing times at the work stations. Hence, the purpose of this study is to eliminate non-value-added activities so that processing times can be shortened and production costs reduced. The Activity Based Costing method is used in this research to calculate production cost, taking value-added and non-value-added activities into consideration. The results are processing time reductions of 35.75% at the Weighing Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of manufacturing Crude Palm Oil is IDR 5.236,81/kg calculated by the traditional method, IDR 4.583,37/kg by the Activity Based Costing method before activity improvement, and IDR 4.581,71/kg after activity improvement. Meanwhile, the cost of manufacturing Palm Kernel is IDR 2.159,50/kg by the traditional method, IDR 4.584,63/kg by the Activity Based Costing method before activity improvement, and IDR 4.582,97/kg after activity improvement.
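    The core mechanics of Activity Based Costing can be illustrated in a few lines: each overhead pool is allocated at a rate of pool cost per unit of its cost driver. All activities, driver volumes, and amounts below are hypothetical, not the mill's actual cost pools:

```python
# Hypothetical activities: (overhead pool in IDR, total driver volume,
#                           driver units consumed by the product)
activities = {
    "weighing":    (50_000_000, 2_000, 1_500),    # truck-weighings
    "sterilizing": (120_000_000, 4_000, 3_200),   # sterilizer cycles
    "pressing":    (80_000_000, 10_000, 9_000),   # machine-hours
}

def abc_overhead(activities):
    """Allocate each pool at (pool / total driver volume) * driver consumed."""
    return sum(pool / total * used
               for pool, total, used in activities.values())

overhead_idr = abc_overhead(activities)
# Spread over a hypothetical 30,000 t (30,000,000 kg) of product.
overhead_per_kg = overhead_idr / 30_000_000
```

    Direct material and labor per kilogram would then be added to this allocated overhead to obtain the full manufacturing cost per kilogram.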

  15. Volcanic ash dosage calculator: A proof-of-concept tool to support aviation stakeholders during ash events

    NASA Astrophysics Data System (ADS)

    Dacre, H.; Prata, A.; Shine, K. P.; Irvine, E.

    2017-12-01

    The volcanic ash clouds produced by the Icelandic volcano Eyjafjallajökull in April/May 2010 resulted in 'no-fly zones' which paralysed European aircraft activity and cost the airline industry an estimated £1.1 billion. In response to the crisis, the Civil Aviation Authority (CAA), in collaboration with Rolls-Royce, produced the 'safe-to-fly' chart. As ash concentrations are the primary output of dispersion model forecasts, the chart was designed to illustrate how engine damage progresses as a function of ash concentration, and concentration thresholds were subsequently derived based on previous ash encounters. Research scientists and aircraft manufacturers have since recognised the importance of volcanic ash dosage: the concentration accumulated over time. Dosages are an improvement over concentrations because they can identify pernicious situations where ash concentrations are acceptably low but the exposure time is long enough to damage aircraft engines. Here we present a proof-of-concept volcanic ash dosage calculator: an innovative, web-based research tool, developed in close collaboration with operators and regulators, which uses interactive data visualisation to communicate the uncertainty inherent in dispersion model simulations and the resulting dosage calculations. To calculate dosages, we use NAME (Numerical Atmospheric-dispersion Modelling Environment) to simulate several Icelandic eruption scenarios that result in tephra dispersal across the North Atlantic, UK, and Europe. Ash encounters are simulated along flight-optimal routes derived from aircraft routing software. Key outputs of the calculator include the along-flight dosage, exposure time, and peak concentration. The design of the tool allows users to explore the key areas of uncertainty in the dosage calculation and to visualise how these change as the planned flight path is varied. We expect that this research will lead to better informed decisions by key stakeholders during volcanic ash events through a deeper understanding of the associated uncertainties in dosage calculations.
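    The dosage concept is simply the time integral of the concentration encountered along the flight; a minimal sketch with made-up numbers (not NAME output) using the trapezoidal rule:

```python
def along_flight_dosage(times_s, conc_mg_m3):
    """Accumulated ash dosage in mg*s/m^3, trapezoidal rule over samples."""
    total = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        total += 0.5 * (conc_mg_m3[i] + conc_mg_m3[i - 1]) * dt
    return total

# Hypothetical 1-hour transit through a dilute cloud, sampled every 10 min.
times = [0, 600, 1200, 1800, 2400, 3000, 3600]      # seconds
conc = [0.0, 0.2, 0.5, 0.5, 0.3, 0.1, 0.0]          # mg/m^3

dosage = along_flight_dosage(times, conc)
peak = max(conc)
```

    Here the peak concentration (0.5 mg/m³) may look acceptable on its own, while the accumulated dosage captures the prolonged-exposure risk the abstract describes.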

  16. Development of a tool for calculating early internal doses in the Fukushima Daiichi nuclear power plant accident based on atmospheric dispersion simulation

    NASA Astrophysics Data System (ADS)

    Kurihara, Osamu; Kim, Eunjoo; Kunishima, Naoaki; Tani, Kotaro; Ishikawa, Tetsuo; Furuyama, Kazuo; Hashimoto, Shozo; Akashi, Makoto

    2017-09-01

    A tool was developed to facilitate the calculation of early internal doses to residents affected by the Fukushima Daiichi nuclear power plant accident, based on atmospheric transport and dispersion model (ATDM) simulations performed using the worldwide version of the System for Prediction of Environmental Emergency Dose Information, 2nd version (WSPEEDI-II), together with personal behavior data recording individuals' whereabouts after the accident. The tool generates hourly-averaged air concentration data for the simulation grid nearest to an individual's location from the WSPEEDI-II datasets, for the subsequent calculation of internal doses due to inhalation. This paper presents an overview of the developed tool and provides tentative comparisons between direct-measurement-based and ATDM-based estimates of the internal doses received by 421 persons for whom personal behavior data were available.
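    The inhalation-dose step can be sketched as concentration × breathing rate × dose coefficient, summed over the hours spent at each location. The concentrations, breathing rate, and dose coefficient below are illustrative placeholders, not values from the paper:

```python
def inhalation_dose(hourly_conc_bq_m3, breathing_rate_m3_h, dose_coeff_sv_bq):
    """Committed dose (Sv) from hourly-averaged air concentrations (Bq/m^3)."""
    intake_bq = sum(c * breathing_rate_m3_h for c in hourly_conc_bq_m3)
    return intake_bq * dose_coeff_sv_bq

# Hypothetical hourly means at an individual's successive whereabouts.
conc = [0, 50, 400, 1200, 300, 20]        # Bq/m^3

dose_sv = inhalation_dose(conc,
                          breathing_rate_m3_h=0.9,   # typical adult rate
                          dose_coeff_sv_bq=2.0e-8)   # illustrative coefficient
```

    In the actual tool, the concentration series comes from the ATDM grid cell nearest each recorded whereabouts, and nuclide-specific dose coefficients replace the single illustrative value used here.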

  17. Implementation of the common phrase index method on the phrase query for information retrieval

    NASA Astrophysics Data System (ADS)

    Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah

    2017-08-01

    With the development of technology, finding information in news text has become easy, because news text is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using search engines. When searching for relevant documents with a search engine, a phrase is often used as the query. The number of words that make up the phrase query, and their positions, clearly affect the relevance of the documents produced and hence the accuracy of the information obtained. Given this problem, the purpose of this research was to analyze the implementation of the common phrase index method in information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with stages of pre-processing, indexing, term weighting, and cosine similarity calculation; it then displays the document search results in order of cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used in the evaluation stage: first, relevant documents were determined using the kappa statistic; second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgments were suitable for system evaluation. The evaluation produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results it can be said that the success rate of the system in producing relevant documents is low.
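    The evaluation measures used above have standard definitions that can be sketched directly; the document sets and term-weight vectors below are made-up examples, not the study's data:

```python
import math

def prf(relevant, retrieved):
    """Precision, recall, and F-measure for one query (sets of doc ids)."""
    tp = len(relevant & retrieved)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f

def cosine_similarity(a, b):
    """Cosine of the angle between two term-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

p, r, f = prf({"d1", "d2", "d3", "d4"}, {"d1", "d2", "d5", "d6"})
sim = cosine_similarity([1, 2, 0], [2, 4, 0])   # parallel vectors
```

    In the prototype, documents would be ranked by cosine similarity between the weighted query vector and each document vector, and precision/recall/F would be averaged over the 20 test queries.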

  18. WE-H-207A-07: Image-Based Versus Atlas-Based Internal Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fallahpoor, M; Abbasi, M; Parach, A

    Purpose: Monte Carlo (MC) simulation is known as the gold-standard method for internal dosimetry. It requires the radionuclide distribution from PET or SPECT and the body structure from CT for accurate dose calculation; the manual or semi-automatic segmentation of organs from CT images is a major obstacle. The aim of this study is to compare dosimetry results based on a patient's own CT with those based on a digital humanoid phantom used as an atlas with pre-specified organs. Methods: SPECT/CT images of a 50-year-old woman who underwent bone pain palliation with Samarium-153 EDTMP for osseous metastases from breast cancer were used. The anatomical data and attenuation maps were extracted from the SPECT/CT and from three XCAT digital phantoms with different BMIs, one matched (38.8) and two unmatched (35.5 and 36.7) to the patient's BMI of 38.3. Segmentation of the patient's organs in the CT image was performed using the itk-SNAP software, and the GATE MC simulator was used for dose calculation. Specific absorbed fractions (SAFs) and S-values were calculated for the segmented organs. Results: The differences between the anatomical datasets are high, ranging from −13% to 39% for SAF values and from −109% to 79% for S-values in different organs. In the spine, the clinically important target organ for Samarium therapy, the differences between the XCAT phantom and the CT are high even when the phantom with the matched BMI is employed (53.8% relative difference in S-value and 26.8% in SAF). However, the whole-body dose values were the same between the calculations based on the CT and on the XCAT phantoms with different BMIs. Conclusion: The results indicate that atlas-based dosimetry using an XCAT phantom, even with a BMI matched to the patient, leads to considerable errors compared to image-based dosimetry using the patient's own CT. Patient-specific dosimetry using the CT image is essential for accurate results.

  19. Biospecimen User Fees: Global Feedback on a Calculator Tool.

    PubMed

    Matzke, Lise A M; Babinszky, Sindy; Slotty, Alex; Meredith, Anna; Castillo-Pelayo, Tania; Henderson, Marianne K; Simeon-Dubach, Daniel; Schacter, Brent; Watson, Peter H

    2017-02-01

    The notion of attributing user fees to researchers for biospecimens provided by biobanks has been discussed frequently in the literature. However, the considerations around how to attribute the cost of these biospecimens and data have, until recently, not been well described. Common across most biobank disciplines are similar factors that influence user fees, such as capital and operating costs, internal and external demand, and market competition. A biospecimen user fee calculator tool developed by CTRNet, a tumor biobank network, was published in 2014 and is accessible online at www.biobanking.org. The following year, a survey was launched to test the applicability of this user fee tool among a global health research biobank user base, including both cancer and noncancer biobanking. Participants were first asked to estimate user fee pricing for three hypothetical user scenarios based on their biobanking experience (estimated pricing) and then to calculate fees for the same scenarios using the calculator tool (calculated pricing). Results demonstrated variation in estimated pricing that was reduced by calculated pricing. These results are similar to those of a previous study restricted to a group of Canadian tumor biobanks. We conclude that the use of a biospecimen user fee calculator contributes to reduced variation in user fees and, for biobank groups (e.g., biobank networks), could become an important part of a harmonization strategy.

  20. Model for Correlating Real-Time Survey Results to Contaminant Concentrations - 12183

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Stuart A.

    2012-07-01

    The U.S. Environmental Protection Agency (EPA) Superfund program is developing a new Counts Per Minute (CPM) calculator to correlate real-time survey results, which are often expressed in counts per minute, with contaminant concentrations, which are more typically provided in risk assessments or cleanup levels, usually expressed in pCi/g or pCi/m². Currently there is no EPA guidance for Superfund sites on correlating count-per-minute field survey readings back to risk-, dose-, or other ARAR-based concentrations. The CPM calculator is a web-based model that estimates a gamma detector's response for a given level of contamination; its intent is to facilitate more real-time measurement within a Superfund response framework. The draft of the CPM calculator is still undergoing internal EPA review, to be followed by external peer review; it is expected that the calculator will at least be in peer review by the time of WM2012 and possibly finalized at that time. The CPM calculator should facilitate greater use of real-time measurement at Superfund sites and may also standardize the process of converting lab data to real-time measurements. It will thus lessen the amount of lab sampling needed for site characterization and confirmation surveys, but it will not remove the need for sampling. (authors)
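    The forward problem such a calculator solves, estimating an expected count rate from an activity concentration, can be caricatured with a simple efficiency chain; every factor below is an illustrative placeholder, not an EPA-endorsed value:

```python
def expected_cpm(conc_pci_g, mass_g, gamma_yield, det_efficiency,
                 background_cpm):
    """Rough forward model: concentration -> expected gross counts/minute.

    1 pCi = 0.037 disintegrations per second; the gamma yield and overall
    detection efficiency (intrinsic + geometry) are illustrative only.
    """
    dps = conc_pci_g * mass_g * 0.037
    net_cpm = dps * gamma_yield * det_efficiency * 60.0
    return net_cpm + background_cpm

cpm = expected_cpm(conc_pci_g=5.0,       # hypothetical cleanup level
                   mass_g=1000.0,        # soil seen by the detector
                   gamma_yield=0.85,     # photons per disintegration
                   det_efficiency=0.10,  # counts per photon
                   background_cpm=40.0)
```

    A field calculator would invert this relation (and model the detector response far more carefully) to translate a surveyed count rate back into pCi/g for comparison against a cleanup level.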

  2. Treating Subvalence Correlation Effects in Domain Based Pair Natural Orbital Coupled Cluster Calculations: An Out-of-the-Box Approach.

    PubMed

    Bistoni, Giovanni; Riplinger, Christoph; Minenkov, Yury; Cavallo, Luigi; Auer, Alexander A; Neese, Frank

    2017-07-11

    The validity of the main approximations used in canonical and domain based pair natural orbital coupled cluster methods (CCSD(T) and DLPNO-CCSD(T), respectively) in standard chemical applications is discussed. In particular, we investigate the dependence of the results on the number of electrons included in the correlation treatment in frozen-core (FC) calculations and on the main threshold governing the accuracy of DLPNO all-electron (AE) calculations. Initially, scalar relativistic orbital energies for the ground state of the atoms from Li to Rn in the periodic table are calculated. An energy criterion is used for determining the orbitals that can be excluded from the correlation treatment in FC coupled cluster calculations without significant loss of accuracy. The heterolytic dissociation energy (HDE) of a series of metal compounds (LiF, NaF, AlF3, CaF2, CuF, GaF3, YF3, AgF, InF3, HfF4, and AuF) is calculated at the canonical CCSD(T) level, and the dependence of the results on the number of correlated electrons is investigated. Although for many of the studied reactions subvalence correlation effects contribute significantly to the HDE, the use of an energy criterion permits a conservative definition of the size of the core, allowing FC calculations to be performed in a black-box fashion while retaining chemical accuracy. A comparison of the CCSD and the DLPNO-CCSD methods in describing the core-core, core-valence, and valence-valence components of the correlation energy is given. It is found that more conservative thresholds must be used for electron pairs containing at least one core electron in order to achieve high accuracy in AE DLPNO-CCSD calculations relative to FC calculations. With the new settings, the DLPNO-CCSD method reproduces canonical CCSD results in both AE and FC calculations with the same accuracy.

  3. Pricing of premiums for equity-linked life insurance based on joint mortality models

    NASA Astrophysics Data System (ADS)

    Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.

    2018-03-01

    Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables alone appears less relevant in premium calculation. To overcome this problem, we use a combined mortality model, determined in this study on the basis of the 2011 Indonesian Mortality Table, to obtain the probabilities of death and survival. We use a combined model built from the Weibull, Inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we numerically calculate the value of the claim to be paid and the premium price. By calculating equity-linked life insurance premiums accurately, it is expected that no party will be disadvantaged by inaccuracies in the calculation results.
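    One ingredient of such a model, a Gompertz survival function feeding the net single premium of a term death benefit, can be sketched as follows; the Gompertz parameters and interest rate are illustrative, not fitted to the 2011 Indonesian Mortality Table:

```python
import math

def gompertz_survival(x, t, B=3e-5, c=0.09):
    """P(survive t more years at age x) under the Gompertz hazard B*exp(c*age).

    S(t) = exp(-(B/c) * (exp(c*(x+t)) - exp(c*x))); parameters illustrative.
    """
    return math.exp(-(B / c) * (math.exp(c * (x + t)) - math.exp(c * x)))

def net_single_premium(x, n, benefit, rate=0.05, B=3e-5, c=0.09):
    """Discounted expected death benefit for an n-year term policy at age x."""
    v = 1.0 / (1.0 + rate)
    npv = 0.0
    for k in range(n):
        # Probability of death in policy year k+1:
        # survive k years, then die within the following year.
        q = gompertz_survival(x, k, B, c) - gompertz_survival(x, k + 1, B, c)
        npv += benefit * q * v ** (k + 1)
    return npv

premium = net_single_premium(x=40, n=10, benefit=100_000_000)
```

    A combined Weibull/Inverse-Weibull/Gompertz model would replace the single survival function here, and an equity-linked contract would additionally make the benefit depend on the investment account value.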

  4. General Rule of Negative Effective Ueff System & Materials Design of High-Tc Superconductors by ab initio Calculations

    NASA Astrophysics Data System (ADS)

    Katayama-Yoshida, Hiroshi; Nakanishi, Akitaka; Uede, Hiroki; Takawashi, Yuki; Fukushima, Tetsuya; Sato, Kazunori

    2014-03-01

    Based upon ab initio electronic structure calculation, I will discuss the general rule of negative effective U system by (1) exchange-correlation-induced negative effective U caused by the stability of the exchange-correlation energy in Hund's rule with high-spin ground states of d5 configuration, and (2) charge-excitation-induced negative effective U caused by the stability of chemical bond in the closed-shell of s2, p6, and d10 configurations. I will show the calculated results of negative effective U systems such as hole-doped CuAlO2 and CuFeS2. Based on the total energy calculations of antiferromagnetic and ferromagnetic states, I will discuss the magnetic phase diagram and superconductivity upon hole doping. I also discuss the computational materials design method of high-Tc superconductors by ab initio calculation to go beyond LDA and multi-scale simulations.

  5. First-principles calculations on thermodynamic properties of BaTiO3 rhombohedral phase.

    PubMed

    Bandura, Andrei V; Evarestov, Robert A

    2012-07-05

    Calculations based on the linear combination of atomic orbitals have been performed for the low-temperature phase of BaTiO3 crystal. Structural and electronic properties, as well as phonon frequencies, were obtained using the hybrid PBE0 exchange-correlation functional. The calculated frequencies and total energies at different volumes have been used to determine the equation of state and the thermal contribution to the Helmholtz free energy within the quasiharmonic approximation. For the first time, the bulk modulus, volume thermal expansion coefficient, heat capacity, and Grüneisen parameters of the BaTiO3 rhombohedral phase have been estimated at zero pressure and temperatures from 0 to 200 K, based on the results of first-principles calculations. An empirical equation has been proposed to reproduce the temperature dependence of the calculated quantities. The agreement between the theoretical and experimental thermodynamic properties was found to be satisfactory. Copyright © 2012 Wiley Periodicals, Inc.
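    The thermal contribution within the quasiharmonic approximation has the standard harmonic form F_vib = Σ_i [ħω_i/2 + k_B T ln(1 − exp(−ħω_i/k_B T))], evaluated at each volume. A minimal sketch; the frequency used below is illustrative, not part of the computed BaTiO3 spectrum:

```python
import math

def helmholtz_vibrational(freqs_cm, temperature):
    """Harmonic vibrational Helmholtz free energy (eV) for phonon
    frequencies given in cm^-1:
    F_vib = sum_i [ hw_i/2 + kB*T*ln(1 - exp(-hw_i/(kB*T))) ]."""
    CM_TO_EV = 1.239841984e-4   # 1 cm^-1 in eV
    KB = 8.617333262e-5         # Boltzmann constant in eV/K
    f = 0.0
    for w in freqs_cm:
        e = w * CM_TO_EV
        f += 0.5 * e  # zero-point contribution
        if temperature > 0.0:
            f += KB * temperature * math.log(1.0 - math.exp(-e / (KB * temperature)))
    return f
```

Repeating this at several volumes and minimizing E(V) + F_vib(V, T) over V at each temperature yields the equilibrium volume, and hence the thermal expansion and bulk modulus.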

  6. A Calculation Method of Electric Distance and Subarea Division Application Based on Transmission Impedance

    NASA Astrophysics Data System (ADS)

    Fang, G. J.; Bao, H.

    2017-12-01

    The most widely used method of calculating electric distances is the sensitivity method. The sensitivity matrix is the result of linearization and rests on the assumption that active and reactive power are decoupled, so it is inaccurate. In addition, it treats the ratio of two partial derivatives as the relationship between two dependent variables, so it has no physical meaning. This paper presents a new method for calculating electrical distance, the transmission impedance method. It forms power supply paths based on power flow tracing and then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is the complex power S instead of the reactive power Q: Q itself has no direction, and since the grid delivers complex power, S contains more electrical information than Q. By describing the power transmission relationship of each branch and drawing block diagrams in both the forward and reverse directions, it can be seen that the numerators of the feedback parts of the two block diagrams are both the transmission impedance. To ensure the distance is a scalar, the absolute value of the transmission impedance is defined as the electrical distance. Dividing the network according to these electrical distances and comparing with the results of the sensitivity method shows that the transmission impedance method adapts better to dynamic changes of the system and yields a reasonable subarea division scheme.

  7. Measurements of UGR of LED light by a DSLR colorimeter

    NASA Astrophysics Data System (ADS)

    Hsu, Shau-Wei; Chen, Cheng-Hsien; Jiaan, Yuh-Der

    2012-10-01

    We have developed an image-based method for measuring the UGR (unified glare rating) of interior lighting environments. A calibrated DSLR (digital single-lens reflex) camera with an ultra-wide-angle lens was used to measure the luminance distribution, from which the corresponding parameters can be calculated automatically. An LED luminaire was placed in a room and measured at various positions and directions to study the properties of the UGR. The test results agree with visual experience and with UGR principles. To further examine the results, a spectroradiometer and an illuminance meter were used to measure the luminance and illuminance, respectively, at the same position and orientation as the DSLR. UGR calculation by this image-based method may solve the problem of the non-uniform luminance distribution of LED lighting; segmentation of the luminance map for the calculations was also studied.
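    The quantity such an image-based system evaluates from the luminance map is the standard CIE expression UGR = 8 log10[(0.25/Lb) Σ L²ω/p²]. A minimal sketch with hypothetical source data (the luminance, solid angle, and position-index values are made up for illustration):

```python
import math

def ugr(background_luminance, sources):
    """CIE unified glare rating:
    UGR = 8 * log10( (0.25 / Lb) * sum(L^2 * omega / p^2) )
    background_luminance: Lb in cd/m^2; sources: iterable of
    (L, omega, p) = (source luminance in cd/m^2, solid angle in sr,
    Guth position index)."""
    glare_sum = sum(L * L * omega / (p * p) for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * glare_sum)

# hypothetical single LED luminaire seen against a 20 cd/m^2 background
value = ugr(20.0, [(2000.0, 0.01, 1.5)])
```

In the camera-based method, each (L, ω, p) triple comes from a segmented region of the calibrated luminance image rather than from photometric tables.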

  8. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on full-scale modeling experiments. The use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. Much attention is also given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and examples of this approach are given. The article presents the results of this research, which allow the authors to recommend this approach within the method of fundamental solutions for full-scale modeling tests of technical systems.

  9. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations.

    PubMed

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC) derived data, for an 18 MV Varian 2100 Clinac accelerator. High-energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against the contaminant neutrons is recommended by the new IAEA and NCRP protocols. The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and MC-based data were also derived; two bunkers, one based on the protocols and one on the MC data, were designed and discussed. For the door, the MC simulation and the Wu-McGinley analytical method gave close results in both BPE and lead thickness. For the primary and secondary barriers, the MC simulation resulted in 440.11 mm of ordinary concrete, and a total concrete thickness of 1709 mm was required; calculating the same parameters with the recommended analytical methods resulted in a required thickness of 1762 mm, using the TVL of 445 mm recommended for concrete. Additionally, a thickness of 752.05 mm was obtained for the secondary barrier. Our results showed that the MC simulation and the protocol recommendations are in good agreement in calculating the radiation contamination dose. The differences between the analytical and MC simulation methods revealed that applying only one method to bunker design may lead to underestimation or overestimation of the dose and shielding calculations.
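    The protocol-style barrier calculation referred to above follows the NCRP Report No. 151 tenth-value-layer scheme: a required transmission factor is converted into a number of TVLs. The sketch below uses illustrative workload, distance, and TVL values, not the paper's inputs:

```python
import math

def barrier_transmission(P, d, W, U, T):
    """Required barrier transmission factor B = P * d^2 / (W * U * T)
    (NCRP 151 convention): P = shielding design goal (Sv/week),
    d = distance (m), W = workload (Gy/week), U = use factor,
    T = occupancy factor."""
    return P * d * d / (W * U * T)

def barrier_thickness(B, tvl1, tvle):
    """Thickness from tenth-value layers: n = -log10(B),
    t = TVL1 + (n - 1) * TVLe."""
    n = -math.log10(B)
    return tvl1 + (n - 1.0) * tvle

# Illustrative 18 MV primary-barrier inputs and concrete TVLs (not the paper's)
B = barrier_transmission(P=2e-5, d=6.0, W=500.0, U=0.25, T=1.0)
thickness_mm = barrier_thickness(B, tvl1=450.0, tvle=430.0)
```

An MC-based design replaces the tabulated TVLs with attenuation derived from simulated dose at the point of interest, which is where the two approaches can diverge.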

  10. Calculation of the acid-base equilibrium constants at the alumina/electrolyte interface from the pH dependence of the adsorption of singly charged ions (Na+, Cl-)

    NASA Astrophysics Data System (ADS)

    Gololobova, E. G.; Gorichev, I. G.; Lainer, Yu. A.; Skvortsova, I. V.

    2011-05-01

    A procedure was proposed for the calculation of the acid-base equilibrium constants at an alumina/electrolyte interface from experimental data on the adsorption of singly charged ions (Na+, Cl-) at various pH values. The calculated constants (pK1^0 = 4.1, pK2^0 = 11.9, pK3^0 = 8.3, and pK4^0 = 7.7) are shown to agree with the values obtained from an experimental pH dependence of the electrokinetic potential and the results of potentiometric titration of Al2O3 suspensions.
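    In the 2-pK surface-complexation model, the first two constants fix the point of zero charge, pH_pzc = (pK1^0 + pK2^0)/2. A one-line consistency check using the reported values:

```python
def ph_pzc(pk1, pk2):
    """Point of zero charge in the 2-pK surface-complexation model:
    pH_pzc = (pK1 + pK2) / 2."""
    return 0.5 * (pk1 + pk2)

pzc = ph_pzc(4.1, 11.9)  # constants reported above for alumina
```

The resulting value of 8.0 is in the range commonly reported for alumina surfaces, consistent with the agreement the authors find with electrokinetic data.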

  11. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Prior to this report, the computer program of NASA SP-273 and NASA TM-86885 could calculate theoretical rocket performance based only on the assumption of an infinite-area combustion chamber (IAC). An option was added to this program that now also permits the calculation of rocket performance based on the assumption of a finite-area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than for the IAC model.

  12. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

    In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; the Choquet integral is then introduced to calculate the comprehensive evaluation value of each city from the bottom up. Finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, providing theoretical support for the sustainable development path and reform direction of resource-based cities.
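    The Choquet-integral aggregation step can be sketched as follows. The capacity used below is a simple additive example so the result can be checked against a weighted sum; the paper instead derives its capacity from the maximum Shapley entropy principle:

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of attribute scores w.r.t. a capacity mu.
    values: dict attribute -> score (scores >= 0);
    mu: function from a frozenset of attributes to its capacity in [0, 1]."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, x) in enumerate(items):
        coalition = frozenset(a for a, _ in items[i:])  # attributes scoring >= x
        total += (x - prev) * mu(coalition)
        prev = x
    return total

# additive capacity as a sanity check: Choquet reduces to a weighted sum
weights = {'economy': 0.5, 'environment': 0.5}
score = choquet_integral({'economy': 0.2, 'environment': 0.8},
                         lambda s: sum(weights[a] for a in s))
```

With a non-additive capacity the integral rewards or penalizes interactions between criteria, which is the point of using it for composite city scores.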

  13. Controlled and Uncontrolled Subject Descriptions in the CF Database: A Comparison of Optimal Cluster-Based Retrieval Results.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.

    1993-01-01

    Describes a study conducted on the cystic fibrosis (CF) database, a subset of MEDLINE, that investigated clustering structure and the effectiveness of cluster-based retrieval as a function of the exhaustivity of the uncontrolled subject descriptions. Results are compared to calculations for controlled descriptions based on Medical Subject Headings…

  14. TrackEtching - A Java based code for etched track profile calculations in SSNTDs

    NASA Astrophysics Data System (ADS)

    Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.

    2017-09-01

    A Java code incorporating a user-friendly GUI has been developed to calculate the parameters of chemically etched track profiles of ion-irradiated solid state nuclear track detectors. Huygens' construction of wavefronts based on secondary wavelets has been used to numerically calculate the etched track profile as a function of the etching time. Provision for normal and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of the track etch rate as a function of the ion energy have been utilized; the best set of parameter values in these expressions can be obtained by comparison with available experimental data. The critical angle for track development can also be calculated using the present code.
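    The critical-angle calculation mentioned at the end follows from the standard registration condition sin(θc) = vB/vT, the ratio of the bulk etch rate to the track etch rate. A minimal sketch of that relation, not the tool's actual routine:

```python
import math

def critical_angle_deg(v_bulk, v_track):
    """Critical registration angle in degrees (measured from the detector
    surface): sin(theta_c) = v_B / v_T. Tracks dipping more shallowly than
    theta_c are removed by bulk etching and never develop."""
    return math.degrees(math.asin(v_bulk / v_track))
```

Since vT depends on ion energy, evaluating this along the ion path gives the energy window over which tracks are registered.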

  15. The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.

    PubMed

    Sredl, Darlene

    2006-01-01

    Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever a mathematical problem is confronted. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after the educational intervention of learning the Triangle Technique.

  16. Enhancement of the output emission efficiency of thin-film photoluminescence composite structures based on PbSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.

    2010-12-15

    The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film whose central layer is formed of a composite medium is proposed to calculate the reflectance spectra of the system. The effective permittivity of the composite layer is calculated in Bruggeman's approximation of the effective medium theory. The proposed model is used to calculate the thickness of the arsenic chalcogenide (AsS4) antireflection layer. The optimal AsS4 layer thickness determined experimentally is close to the calculated result, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
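    The Bruggeman effective-medium step, and a subsequent antireflection-thickness estimate, can be sketched as below. The permittivities, fill fraction, and the simple quarter-wave rule are illustrative assumptions, not the paper's fitted values or its full three-layer reflectance model:

```python
def bruggeman_eps(eps1, eps2, f):
    """Effective permittivity of a two-component composite from the
    Bruggeman condition
        f*(e1 - ee)/(e1 + 2*ee) + (1 - f)*(e2 - ee)/(e2 + 2*ee) = 0,
    solved by bisection (real, positive permittivities only)."""
    def g(ee):
        return (f * (eps1 - ee) / (eps1 + 2.0 * ee)
                + (1.0 - f) * (eps2 - ee) / (eps2 + 2.0 * ee))
    lo, hi = min(eps1, eps2), max(eps1, eps2)  # root is bracketed here
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def quarter_wave_thickness(wavelength_um, eps_eff):
    """Quarter-wave antireflection thickness d = lambda / (4 * n)."""
    return wavelength_um / (4.0 * eps_eff ** 0.5)

eps_eff = bruggeman_eps(1.0, 4.0, 0.5)  # 50/50 mix, illustrative values
```

For a 50/50 mixture of permittivities 1 and 4 the condition reduces to the quadratic 4ε² − 5ε − 8 = 0, which the bisection reproduces.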

  17. Orbital-free extension to Kohn-Sham density functional theory equation of state calculations: Application to silicon dioxide

    DOE PAGES

    Sjostrom, Travis; Crockett, Scott

    2015-09-02

    The liquid-regime equation of state of silicon dioxide (SiO2) is calculated via quantum molecular dynamics in the density range of 5 to 15 g/cc and at temperatures from 0.5 to 100 eV, including the α-quartz and stishovite phase Hugoniot curves. Below 8 eV, calculations are based on Kohn-Sham density functional theory (DFT), and above 8 eV a new orbital-free DFT formulation, presented here and based on matching Kohn-Sham DFT calculations, is employed. Recent experimental shock data are found to be in very good agreement with the current results. Finally, both experimental and simulation data are used in constructing a new liquid-regime equation of state table for SiO2.

  18. Electron- and positron-impact atomic scattering calculations using propagating exterior complex scaling

    NASA Astrophysics Data System (ADS)

    Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.

    2007-11-01

    Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = -[8 + (r1 - r2)^2]^(-1/2). This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.

  19. Equation of state of detonation products based on statistical mechanical theory

    NASA Astrophysics Data System (ADS)

    Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng

    2015-06-01

    The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and an improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid, and diamond-like liquid. For the mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions by solving the chemical equilibrium equations. A chemical equilibrium code based on the theory proposed in this article has been developed and applied to three typical calculations, as follows: (i) calculation of the detonation parameters of explosives, where the calculated detonation velocity, detonation pressure, and detonation temperature are in good agreement with experimental values; (ii) calculation of the isentropic unloading line of the RDX explosive, whose starting point is the CJ point, where comparison with the JWL EOS shows that the value of gamma calculated with the present theory decreases monotonically, whereas a double-peak phenomenon appears with the JWL EOS.

  20. Equation of state of detonation products based on statistical mechanical theory

    NASA Astrophysics Data System (ADS)

    Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng; Iapcm Team

    2013-06-01

    The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and an improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid, and diamond-like liquid. For the mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions by solving the chemical equilibrium equations. A chemical equilibrium code based on the theory proposed in this article has been developed and applied to three typical calculations, as follows: (i) calculation of the detonation parameters of explosives, where the calculated detonation velocity, detonation pressure, and detonation temperature are in good agreement with experimental values; (ii) calculation of the isentropic unloading line of the RDX explosive, whose starting point is the CJ point, where comparison with the JWL EOS shows that the value of gamma calculated with the present theory decreases monotonically, whereas a double-peak phenomenon appears with the JWL EOS.

  1. The Thermochemical Stability of Ionic Noble Gas Compounds.

    ERIC Educational Resources Information Center

    Purser, Gordon H.

    1988-01-01

    Presents calculations suggesting that stoichiometric, ionic, noble gas-metal compounds may be stable. Bases the calculations on estimated values of the electron affinity and anionic radius of the noble gases and of the Born exponents of the resulting crystals. Suggests the desirability of experiments designed to prepare compounds containing anionic,…

  2. Ab initio calculations of the lattice dynamics of silver halides

    NASA Astrophysics Data System (ADS)

    Gordienko, A. B.; Kravchenko, N. G.; Sedelnikov, A. N.

    2010-12-01

    Based on ab initio pseudopotential calculations, the results of investigations of the lattice dynamics of silver halides AgHal (Hal = Cl, Br, I) are presented. Equilibrium lattice parameters, phonon spectra, frequency densities and effective atomic-charge values are obtained for all types of crystals under study.

  3. One-dimensional thermal evolution calculation based on a mixing length theory: Application to Saturnian icy satellites

    NASA Astrophysics Data System (ADS)

    Kamata, S.

    2017-12-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
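    A mixing-length flux closure of the general kind referred to above can be sketched as follows. The flux form and the 1/18 viscous-regime prefactor are one common textbook choice and are purely illustrative; they are not Kamata's specific scheme or the new mixing-length definition proposed in the paper:

```python
def convective_flux(rho, cp, alpha, g, nu, mix_len, superad_grad):
    """Mixing-length convective heat flux (W/m^2) for a viscous layer.
    superad_grad: superadiabatic temperature gradient (K/m); the flux is
    zero when the gradient is not positive (no convection). The 1/18
    prefactor is an illustrative viscous-regime choice."""
    if superad_grad <= 0.0:
        return 0.0
    # mixing-length convective velocity for a highly viscous fluid
    v = alpha * g * mix_len ** 2 * superad_grad / (18.0 * nu)
    return rho * cp * v * mix_len * superad_grad
```

In a 1-D evolution model this flux is added to the conductive flux at each depth, which is why the choice of mixing length l directly controls how efficiently the interior cools.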

  4. A formula for calculating theoretical photoelectron fluxes resulting from the He+ 304 Å solar spectral line

    NASA Technical Reports Server (NTRS)

    Richards, P. G.; Torr, D. G.

    1981-01-01

    A simplified method for the evaluation of theoretical photoelectron fluxes in the upper atmosphere resulting from the solar radiation at 304 Å is presented. The calculation is based on considerations of primary and cascade (secondary) photoelectron production in the two-stream model, where photoelectron transport is described by two electron streams, one moving up and one moving down, and of loss rates due to collisions with neutral gases and thermal electrons. The calculation is illustrated for photoelectrons at an energy of 24.5 eV, and it is noted that the 24.5-eV photoelectron flux may be used to monitor variations in the solar 304 Å flux. Theoretical calculations based on various ionization and excitation cross sections of Banks et al. (1974) are shown to be in generally good agreement with AE-E measurements taken between 200 and 235 km; however, the use of more recent, larger cross sections leads to photoelectron values a factor of two smaller than the observations, though in agreement with previous calculations. It is concluded that a final resolution of the photoelectron problem may depend on a reevaluation of the inelastic electron collision cross sections.

  5. Nuclear-size correction to the Lamb shift of one-electron atoms

    NASA Astrophysics Data System (ADS)

    Yerokhin, Vladimir A.

    2011-01-01

    The nuclear-size effect on the one-loop self-energy and vacuum polarization is evaluated for the 1s, 2s, 3s, 2p1/2, and 2p3/2 states of hydrogen-like ions. The calculation is performed to all orders in the nuclear binding strength parameter Zα. Detailed comparison is made with previous all-order calculations and calculations based on the expansion in the parameter Zα. Extrapolation of the all-order numerical results obtained toward Z=1 provides results for the radiative nuclear-size effect on the hydrogen Lamb shift.

  6. Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2012-01-01

    This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.

  7. Analysis of radiation safety for Small Modular Reactor (SMR) on PWR-100 MWe type

    NASA Astrophysics Data System (ADS)

    Udiyani, P. M.; Husnayani, I.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Indonesia, an archipelagic country comprising large, medium, and small islands, is well suited to the construction of small modular reactors (SMRs). A preliminary technology assessment of various SMRs has been started; by technology, SMRs are grouped into light water reactors, gas-cooled reactors, and solid-cooled reactors, and by site into land-based and water-based reactors. The Fukushima accident made people doubt the safety of nuclear power plants (NPPs) and affected the public perception of NPP safety. This paper describes an assessment of the safety and on-site radiation consequences of normal operation and of a design basis accident postulation for an SMR based on a PWR-100 MWe on Bangka Island. The radiation consequences of normal operation were simulated for 3 SMR units. The source term was generated from an inventory calculated with the ORIGEN-2 software; the consequences of routine operation were calculated with PC-CREAM and those of the accident with PC Cosyma. The adopted methodology was based on site-specific meteorological and spatial data. According to the PC-CREAM 08 calculation, the highest individual dose in the site area for adults during normal operation is 5.34E-02 mSv/y in the ESE direction within 1 km of the stack, so the public dose for normal operation is below 1 mSv/y. From the PC Cosyma calculation, the highest individual dose for the accident is 1.92E+00 mSv in the ESE direction within 1 km of the stack, and the total collective dose (all pathways) is 3.39E-01 manSv, dominated by the cloud pathway. The results show that no evacuation countermeasures would need to be taken under the emergency regulations.

  8. Calculation of electrostatic fields in periodic structures of complex shape

    NASA Technical Reports Server (NTRS)

    Kravchenko, V. F.

    1978-01-01

    A universal algorithm is presented for calculating electrostatic fields in an infinite periodic structure consisting of electrodes of arbitrary shape located in a mirror-symmetrical manner along the axis of electron-beam propagation. The method is based on the theory of R-functions and on differential operators derived from these functions. Numerical results are presented and the accuracy of the results is examined.

  9. [Development and effectiveness of a drug dosage calculation training program using cognitive loading theory based on smartphone application].

    PubMed

    Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon

    2012-10-01

    This study was done to develop and evaluate a drug dosage calculation training program based on a smartphone application and cognitive load theory. Calculation ability, dosage-calculation-related self-efficacy, and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, while only the handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed higher 'self-efficacy for drug dosage calculation' than the control group (t=3.82, p<.001). Experimental group students also had higher ability to perform drug dosage calculations than control group students (t=3.98, p<.001), with regard to 'metric conversion' (t=2.25, p=.027), 'table dosage calculation' (t=2.20, p=.031), and 'drop rate calculation' (t=4.60, p<.001). There was no difference in improvement in 'anxiety for drug dosage calculation'. The mean satisfaction score for the program was 86.1. These results indicate that this drug dosage calculation training program using a smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.

  10. Cost-effectiveness of rituximab as maintenance treatment for relapsed follicular lymphoma: results of a population-based study.

    PubMed

    Blommestein, Hedwig M; Issa, Djamila E; Pompen, Marjolein; Ten Hoor, Gerhard; Hogendoorn, Mels; Joosten, Peter; Zweegman, Sonja; Huijgens, Peter C; Uyl-de Groot, Carin A

    2014-01-01

    On the basis of two population-based registries, our study aims to calculate the real-world cost-effectiveness of rituximab maintenance compared with observation in relapsed or refractory follicular lymphoma patients who responded to second-line chemotherapy. Data were obtained from the EORTC 20981 trial, the Netherlands Cancer Registry and two population-based registries. A Markov model was developed to calculate the cost per life year gained (LYG) and per quality-adjusted life year (QALY) for three scenarios. Our real-world patients were 6 to 7 years older (62 years) than the trial population and had higher complete response rates to second-line chemotherapy. Differences between the real-world rituximab and observation groups were observed for second-line chemotherapy and disease progression; the groups were more balanced after propensity matching. Relying entirely on updated trial results (scenario 1) in combination with local cost data resulted in ratios of €11,259 per LYG and €12,655 per QALY. For scenario 2, consisting of trial efficacy and matched real-world costs, ratios of €21,202 per LYG and €23,821 per QALY were calculated. Using matched real-world evidence (scenario 3) for both effectiveness and costs gave ratios of €10,591 per LYG and €11,245 per QALY. Although differences between the real-world and trial populations were found, using real-world data as well as results from long-term trial follow-up showed favourable ICERs for rituximab maintenance. Nevertheless, the results showed that caution is required in data synthesis, interpretation and generalisability. As different scenarios provide answers to different questions, we recommend that healthcare decision-makers recognise the importance of calculating several cost-effectiveness scenarios. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
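    The ratios reported above are incremental cost-effectiveness ratios (ICERs). A minimal sketch of the calculation with hypothetical cost and QALY inputs, not the study's per-patient data:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (e.g. euros per QALY gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: maintenance costs 10,000 euros more and yields 0.8 extra QALYs
ratio = icer(30000.0, 20000.0, 4.8, 4.0)
```

In the study, the cost and effect differences come from a Markov model run separately under each scenario's efficacy and cost inputs, which is why the three scenarios yield three different ratios.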

  11. Validation of a personalized dosimetric evaluation tool (Oedipe) for targeted radiotherapy based on the Monte Carlo MCNPX code

    NASA Astrophysics Data System (ADS)

    Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.

    2006-02-01

    Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.

  12. Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.

    PubMed

    Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste

    2009-12-21

    The application of periodic density functional theory-based methods to the calculation of (95)Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for (95)Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.

  13. Refractive laser beam shaping by means of a functional differential equation based design approach.

    PubMed

    Duerr, Fabian; Thienpont, Hugo

    2014-04-07

    Many laser applications require specific irradiance distributions to ensure optimal performance. Geometric optical design methods based on numerical calculation of two plano-aspheric lenses have been thoroughly studied in the past. In this work, we present an alternative new design approach based on functional differential equations that allows direct calculation of the rotational symmetric lens profiles described by two-point Taylor polynomials. The formalism is used to design a Gaussian to flat-top irradiance beam shaping system but also to generate a more complex dark-hollow Gaussian (donut-like) irradiance distribution with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of both calculated solutions and emphasize the potential of this design approach for refractive beam shaping applications.

  14. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has a poorly self-adaptive threshold and is sensitive to noise. In order to overcome these drawbacks, this paper proposed a new edge detection method based on the Canny algorithm. Firstly, median filtering and a filter based on Euclidean distance are applied to the image; secondly, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to partial gradient-amplitude regions to obtain threshold values, the average of all calculated thresholds is taken, half of this average is the high threshold value, and half of the high threshold value is the low threshold value. Experiment results show that this new method can effectively suppress noise disturbance, preserve edge information, and also improve the edge detection accuracy.
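
    The threshold-selection step can be sketched as below. This is a simplified Python illustration assuming gradient magnitudes scaled to [0, 255]; the block-wise ("partial") Otsu computation is a basic stand-in for the authors' procedure, the Frei-Chen gradient itself is omitted, and all names are illustrative.

```python
# Simplified sketch of the adaptive threshold selection described above.
# Assumes gradient magnitudes in [0, 255]; block size is an assumption.
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu threshold (index of maximal between-class variance)."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability
    mu = np.cumsum(p * np.arange(bins))    # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def canny_thresholds(grad_mag, block=64):
    """Average block-wise Otsu thresholds; high = avg / 2, low = high / 2."""
    h, w = grad_mag.shape
    ts = [otsu_threshold(grad_mag[i:i + block, j:j + block].ravel())
          for i in range(0, h, block) for j in range(0, w, block)]
    avg = float(np.mean(ts))
    high = avg / 2.0
    low = high / 2.0
    return high, low

# Toy gradient image: two flat regions with magnitudes 50 and 180.
grad = np.full((64, 64), 50.0)
grad[32:, :] = 180.0
high, low = canny_thresholds(grad, block=64)
```

    The resulting high/low pair would then feed the usual Canny hysteresis step.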

  15. Calculation of short-wave signal amplitude on the basis of the waveguide approach and the method of characteristics

    NASA Astrophysics Data System (ADS)

    Mikhailov, S. Ia.; Tumatov, K. I.

    The paper compares the results obtained using two methods to calculate the amplitude of a short-wave signal field incident on or reflected from a perfectly conducting earth. A technique is presented for calculating the geometric characteristics of the field based on the waveguide approach. It is shown that applying an extended system of characteristic equations to calculate the field amplitude is inadmissible in models that include discontinuities in the second derivatives of the permittivity, unless a suitable treatment of the discontinuity points is applied.

  16. Patient-specific CT dosimetry calculation: a feasibility study.

    PubMed

    Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W

    2011-11-15

    Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and on calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is a semi-empirical, measured-correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans.
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation.

  17. Fully automated lobe-based airway taper index calculation in a low dose MDCT CF study over 4 time-points

    NASA Astrophysics Data System (ADS)

    Weinheimer, Oliver; Wielpütz, Mark O.; Konietzke, Philip; Heussel, Claus P.; Kauczor, Hans-Ulrich; Brochhausen, Christoph; Hollemann, David; Savage, Dasha; Galbán, Craig J.; Robinson, Terry E.

    2017-02-01

    Cystic Fibrosis (CF) results in severe bronchiectasis in nearly all cases. Bronchiectasis is a disease in which parts of the airways are permanently dilated. The development and the progression of bronchiectasis are not evenly distributed over the entire lungs; rather, individual functional units are affected differently. We developed a fully automated method for the precise calculation of lobe-based airway taper indices. To calculate taper indices, some preparatory algorithms are needed. The airway tree is segmented, skeletonized and transformed to a rooted acyclic graph. This graph is used to label the airways. Then a modified version of the previously validated integral based method (IBM) for airway geometry determination is utilized. The rooted graph and the airway lumen and wall information are then used to calculate the airway taper indices. Using a computer-generated phantom simulating 10 cross sections of airways, we present results showing a high accuracy of the modified IBM. The new taper index calculation method was applied to 144 volumetric inspiratory low-dose MDCT scans. The scans were acquired from 36 children with mild CF at 4 time-points (baseline, 3 months, 1 year, 2 years). We found a moderate correlation with the visual lobar Brody bronchiectasis scores by three raters (r² = 0.36, p < 0.0001). The taper index has the potential to be a precise imaging biomarker, but further improvements are needed. In combination with other imaging biomarkers, taper index calculation can be an important tool for monitoring the progression and the individual treatment of patients with bronchiectasis.

  18. Use of A-Train Aerosol Observations to Constrain Direct Aerosol Radiative Effects (DARE) Comparisons with Aerocom Models and Uncertainty Assessments

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.

    2017-01-01

    We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (ssa) at 441 nm than ssa calculated from OMI and MODIS data alone, indicating that the CALIOP aerosol backscatter data contain information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates, due to the inclusion of more absorbing aerosol retrievals over brighter surfaces, not previously available for observationally-based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA lie between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3.
We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.

  19. Fiber optic based multiparametric spectroscopy in vivo: Toward a new quantitative tissue vitality index

    NASA Astrophysics Data System (ADS)

    Kutai-Asis, Hofit; Barbiro-Michaely, Efrat; Deutsch, Assaf; Mayevsky, Avraham

    2006-02-01

    In our previous publication (Mayevsky et al SPIE 5326: 98-105, 2004) we described a multiparametric fiber optic system enabling the evaluation of 4 physiological parameters as indicators of tissue vitality. Since the correlation between the various parameters may differ in various pathophysiological conditions, there is a need for an objective quantitative index that integrates the relative changes measured in real time by the multiparametric monitoring system into a single number, a vitality index. Such an approach to calculating a tissue vitality index is critical for the use of such an instrument in clinical environments. In the current presentation we report our preliminary results indicating that calculation of an objective tissue vitality index is feasible. We used an intuitive empirical approach based on the comparison between the index calculated by the computer and the subjective evaluation made by an expert in the field of physiological monitoring. We used the in vivo brain of rats as an animal model in our current studies. The rats were exposed to anoxia, ischemia and cortical spreading depression, and the responses were recorded in real time. At the end of the monitoring session the results were analyzed and the tissue vitality index was calculated offline. Mitochondrial NADH, tissue blood flow and oxy-hemoglobin were used to calculate the vitality index of the brain in vivo, where each parameter received a different weight in each experiment type, based on its significance. It was found that the mitochondrial NADH response was the main factor affecting the calculated vitality index.
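
    As a rough illustration of how such a weighted index might be assembled, the sketch below combines baseline-normalized parameters with a weighted sum. The weights, signs and normalization are illustrative assumptions, not the study's values; only the dominant weight on NADH mirrors the finding reported above.

```python
# Hypothetical sketch of a weighted tissue-vitality index. Each parameter is
# assumed pre-normalized to its baseline (1.0 = baseline level); weights and
# signs are illustrative assumptions, not the study's values.

def vitality_index(nadh, blood_flow, oxy_hb,
                   w_nadh=0.5, w_flow=0.3, w_hb=0.2):
    """Collapse three normalized responses into one index (1.0 = baseline).

    NADH rises under energy failure, so an increase lowers the index;
    decreases in blood flow and oxy-hemoglobin also lower it. The largest
    weight is given to NADH, mirroring its dominant role reported above.
    """
    return 1.0 - (w_nadh * (nadh - 1.0)
                  + w_flow * (1.0 - blood_flow)
                  + w_hb * (1.0 - oxy_hb))

baseline = vitality_index(1.0, 1.0, 1.0)       # stays at 1.0
ischemia_like = vitality_index(1.4, 0.5, 0.6)  # index drops below 1.0
```

    In practice the weights would be tuned per insult type (anoxia, ischemia, spreading depression) against the expert's subjective scoring, as the abstract describes.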

  20. Prediction of surface tension of HFD-like fluids using the Fowler’s approximation

    NASA Astrophysics Data System (ADS)

    Goharshadi, Elaheh K.; Abbaspour, Mohsen

    2006-09-01

    Fowler's expression for the calculation of the reduced surface tension has been used for simple fluids described by the Hartree-Fock Dispersion (HFD)-like potential (HFD-like fluids) obtained from the inversion of the viscosity collision integrals at zero pressure. In order to obtain the RDF values needed for the calculation of the surface tension, we performed MD simulations at different temperatures and densities, fitted the results with an expression, and compared the resulting RDFs with experiment. Our results are in excellent agreement with experimental values when the vapor density is taken into account, especially at high temperatures. We have also calculated the surface tension using an RDF expression based on the Lennard-Jones (LJ) potential, which was in good agreement with the molecular dynamics simulations. In this work, we have shown that our results based on the HFD-like potential describe the temperature dependence of the surface tension better than the LJ potential.
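
    Fowler's expression relates the surface tension to the pair potential and the RDF: gamma = (pi * rho^2 / 8) * Integral of u'(r) g(r) r^4 dr. A minimal numerical sketch in reduced Lennard-Jones units follows; the crude step-function RDF stands in for the simulated g(r) of the abstract, and the density and cutoffs are illustrative, not HFD-like fluid data.

```python
# Minimal numerical sketch of Fowler's expression,
#   gamma = (pi * rho^2 / 8) * Int_0^inf u'(r) g(r) r^4 dr,
# in reduced LJ units, with a step-function RDF as a placeholder for the
# simulated g(r). All inputs are illustrative.
import numpy as np

def lj_derivative(r, eps=1.0, sigma=1.0):
    """du/dr for the 12-6 Lennard-Jones potential."""
    return 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)

def fowler_surface_tension(rho, g_of_r, r_min=0.8, r_max=10.0, n=20001):
    """Trapezoidal quadrature of Fowler's integral."""
    r = np.linspace(r_min, r_max, n)
    f = lj_derivative(r) * g_of_r(r) * r**4
    dr = r[1] - r[0]
    integral = dr * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return np.pi * rho**2 / 8.0 * integral

def step_rdf(r):
    """Dilute-gas placeholder: g(r) = 0 inside the core, 1 outside."""
    return (r >= 1.0).astype(float)

gamma = fowler_surface_tension(rho=0.7, g_of_r=step_rdf)
```

    Replacing `step_rdf` with a fitted RDF from simulation, and the LJ derivative with the HFD-like potential's derivative, reproduces the procedure the abstract describes.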

  1. [DNAStat, version 1.2 -- a software package for processing genetic profile databases and biostatistical calculations].

    PubMed

    Berent, Jarosław

    2007-01-01

    This paper presents the new DNAStat version 1.2 for processing genetic profile databases and performing biostatistical calculations. This new version contains, besides all the options of its predecessor 1.0, a calculation-results file export option in .xls format for Microsoft Office Excel, as well as the option of importing/exporting the population base of systems as .txt files for processing in Microsoft Notepad or EditPad.

  2. Electronic properties of Bilayer Fullerene onions

    NASA Astrophysics Data System (ADS)

    Pincak, R.; Shunaev, V. V.; Smotlacha, J.; Slepchenkov, M. M.; Glukhova, O. E.

    2017-10-01

    The HOMO-LUMO gaps of bilayer fullerene onions were investigated. For this purpose, the HOMO and LUMO energies were calculated for the isolated fullerenes using the parametrization of the tight binding method with the Harrison-Goodwin modification. Next, the difference of the Fermi levels of the outer and inner shells was calculated by considering the hybridization of the orbitals on the basis of the geometric parameters. The results were obtained by combining these calculations.

  3. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    NASA Astrophysics Data System (ADS)

    Graves, Robert; Jordan, Thomas H.; Callaghan, Scott; Deelman, Ewa; Field, Edward; Juve, Gideon; Kesselman, Carl; Maechling, Philip; Mehta, Gaurang; Milner, Kevin; Okaya, David; Small, Patrick; Vahi, Karan

    2011-03-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). 
Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and magnitude uncertainty estimates used in the definition of the ruptures than is found in the traditional GMPE approach. This reinforces the need for continued development of a better understanding of earthquake source characterization and the constitutive relations that govern the earthquake rupture process.
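
    The final aggregation step described above, combining peak intensity measures from the synthetic seismograms with the rupture probabilities to form hazard curves, can be illustrated with standard PSHA bookkeeping. The rates and intensity values below are made up, and this sketch is not CyberShake's code.

```python
# Illustrative sketch: empirical exceedance probabilities over rupture
# variations, weighted by annual rupture rates, summed into a hazard curve.
# Rates and intensity measures are made-up numbers (standard PSHA form,
# not CyberShake's implementation).
import numpy as np

def hazard_curve(ruptures, im_levels):
    """ruptures: iterable of (annual_rate, IMs over rupture variations)."""
    rates = np.zeros(len(im_levels))
    for annual_rate, ims in ruptures:
        for k, x in enumerate(im_levels):
            # empirical P(IM > x | rupture), averaged over variations
            rates[k] += annual_rate * np.mean(ims > x)
    return rates  # annual rate of exceeding each IM level

ruptures = [
    (0.01, np.array([0.12, 0.25, 0.40, 0.18])),   # hypothetical rupture A
    (0.002, np.array([0.55, 0.70, 0.35, 0.90])),  # hypothetical rupture B
]
im_levels = np.array([0.1, 0.3, 0.5])             # e.g. spectral accel., g
curve = hazard_curve(ruptures, im_levels)
```

    Because the exceedance probability per rupture is estimated empirically from the simulated variations, site-specific ground-motion variability enters the curve directly, which is the point of the site-based approach.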

  4. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    USGS Publications Warehouse

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i. e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). 
Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and magnitude uncertainty estimates used in the definition of the ruptures than is found in the traditional GMPE approach. This reinforces the need for continued development of a better understanding of earthquake source characterization and the constitutive relations that govern the earthquake rupture process. © 2010 Springer Basel AG.

  5. Influence of kinetics on the determination of the surface reactivity of oxide suspensions by acid-base titration.

    PubMed

    Duc, M; Adekola, F; Lefèvre, G; Fédoroff, M

    2006-11-01

    The effect of acid-base titration protocol and speed on pH measurement and surface charge calculation was studied on suspensions of gamma-alumina, hematite, goethite, and silica, whose size and porosity have been well characterized. The titration protocol has an important effect on surface charge calculation as well as on acid-base constants obtained by fitting of the titration curves. Variations of pH versus time after addition of acid or base to the suspension were interpreted as diffusion processes. Resulting apparent diffusion coefficients depend on the nature of the oxide and on its porosity.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moura, Eduardo S., E-mail: emoura@wisc.edu; Micka, John A.; Hammer, Cliff G.

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR (192)Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 (TG-43) formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters.
The TPS relative differences with the Acuros™ algorithm were similar in both experimental and simulated setups. The discrepancy between the BrachyVision™ Acuros™ and TG-43 dose responses in the phantom described by this work exceeded 12% for certain setups. Conclusions: The results derived from the phantom measurements show good agreement with the simulations and TPS calculations using the Acuros™ algorithm. Differences in the dose responses were evident in the experimental results when heterogeneous materials were introduced. These measurements prove the usefulness of the heterogeneous phantom for verification of HDR treatment planning systems based on model-based dose calculation algorithms.

  7. Radiometric evaluation of diglycolamide resins for the chromatographic separation of actinium from fission product lanthanides

    DOE PAGES

    Radchenko, Valery; Mastren, Tara; Meyer, Catherine A. L.; ...

    2017-07-20

    Actinium-225 is a potential Targeted Alpha Therapy (TAT) isotope. It can be generated with high energy (≥ 100 MeV) proton irradiation of thorium targets. The main challenge in the chemical recovery of 225Ac lies in the separation from thorium and many fission by-products, most importantly radiolanthanides. We recently developed a separation strategy based on a combination of cation exchange and extraction chromatography to isolate and purify 225Ac. In this study, actinium and lanthanide equilibrium distribution coefficients and column elution behavior for both TODGA (N,N,N',N'-tetra-n-octyldiglycolamide) and TEHDGA (N,N,N',N'-tetrakis-2-ethylhexyldiglycolamide) were determined. Density functional theory (DFT) calculations were performed and were in agreement with experimental observations, providing the foundation for understanding the selectivity for Ac and lanthanides on different DGA (diglycolamide) based resins. The results of Gibbs energy (ΔG_aq) calculations confirm the significantly higher selectivity of DGA-based resins for Ln(III) over Ac(III) in the presence of nitrate. As a result, DFT calculations and experimental results reveal that Ac chemistry cannot be predicted from lanthanide behavior under comparable circumstances.

  8. Radiometric evaluation of diglycolamide resins for the chromatographic separation of actinium from fission product lanthanides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radchenko, Valery; Mastren, Tara; Meyer, Catherine A. L.

    Actinium-225 is a potential Targeted Alpha Therapy (TAT) isotope. It can be generated with high energy (≥ 100 MeV) proton irradiation of thorium targets. The main challenge in the chemical recovery of 225Ac lies in the separation from thorium and many fission by-products, most importantly radiolanthanides. We recently developed a separation strategy based on a combination of cation exchange and extraction chromatography to isolate and purify 225Ac. In this study, actinium and lanthanide equilibrium distribution coefficients and column elution behavior for both TODGA (N,N,N',N'-tetra-n-octyldiglycolamide) and TEHDGA (N,N,N',N'-tetrakis-2-ethylhexyldiglycolamide) were determined. Density functional theory (DFT) calculations were performed and were in agreement with experimental observations, providing the foundation for understanding the selectivity for Ac and lanthanides on different DGA (diglycolamide) based resins. The results of Gibbs energy (ΔG_aq) calculations confirm the significantly higher selectivity of DGA-based resins for Ln(III) over Ac(III) in the presence of nitrate. As a result, DFT calculations and experimental results reveal that Ac chemistry cannot be predicted from lanthanide behavior under comparable circumstances.

  9. Noise in x-ray grating-based phase-contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Thomas; Bartl, Peter; Bayer, Florian

    Purpose: Grating-based x-ray phase-contrast imaging is a fast-developing new modality, not only for medical imaging but also for other fields such as materials science. As these many possible applications arise, knowledge of the noise behavior is essential. Methods: In this work, the authors used a least squares fitting algorithm to calculate the noise behavior of the three quantities: absorption, differential phase, and dark-field image. Further, the calculated error formula of the differential phase image was verified by measurements. Therefore, a Talbot interferometer was set up, using a microfocus x-ray tube as the source and a Timepix detector for photon counting. Additionally, simulations regarding this topic were performed. Results: It turned out that the variance of the reconstructed phase depends only on the total number of photons used to generate the phase image and the visibility of the experimental setup. These results were confirmed in measurements as well as in simulations. Furthermore, the correlation between absorption and dark-field image was calculated. Conclusions: These results provide an understanding of the noise characteristics of grating-based phase-contrast imaging and will help to improve image quality.

  10. CDMBE: A Case Description Model Based on Evidence

    PubMed Central

    Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing

    2015-01-01

    By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), suitable for the continental law system, is proposed to describe criminal cases. The logic of the model adopts credibility-based logical reasoning and quantifies evidence-based reasoning. To be consistent with practical inference rules, five types of relationships and a set of rules are defined to calculate the credibility of assumptions based on the credibility and supportability of the related evidence. Experiments show that the model can capture users' ideas in a figure, and the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006

  11. Neutron-gamma flux and dose calculations in a Pressurized Water Reactor (PWR)

    NASA Astrophysics Data System (ADS)

    Brovchenko, Mariya; Dechenaux, Benjamin; Burn, Kenneth W.; Console Camprini, Patrizio; Duhamel, Isabelle; Peron, Arthur

    2017-09-01

    The present work deals with Monte Carlo simulations aiming to determine the neutron and gamma responses outside the vessel and in the basemat of a Pressurized Water Reactor (PWR). The model is based on the Tihange-I Belgian nuclear reactor. With a large set of information and measurements available, this reactor has the advantage of being easily modelled and allows validation based on the experimental measurements. Power distribution calculations were therefore performed with the MCNP code at IRSN and compared to the available in-core measurements. Results showed good agreement between calculated and measured values over the whole core. In this paper, the methods and hypotheses used for the particle transport simulation from the fission distribution in the core to the detectors outside the vessel of the reactor are also summarized. The results of the simulations are presented, including the neutron and gamma doses and flux energy spectra. MCNP6 computational results comparing the JEFF3.1 and ENDF/B-VII.1 nuclear data evaluations, and the sensitivity of the results to some model parameters, are presented.

  12. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σ_iso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σ_iso values of 13 atoms with a closed-shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)^3 (M: number of basis functions).

  13. NESSY: NLTE spectral synthesis code for solar and stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.

    2017-07-01

    Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, which are a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling of the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum. This calculation is in good agreement with the available observations.

  14. Measurement and calculation of forces in a magnetic journal bearing actuator

    NASA Technical Reports Server (NTRS)

    Knight, Josiah; Mccaul, Edward; Xia, Zule

    1991-01-01

    Numerical calculations and experimental measurements of forces from an actuator of the type used in active magnetic journal bearings are presented. The calculations are based on solution of the scalar magnetic potential field in and near the gap regions. The predicted forces from a single magnet with steady current are compared with experimental measurements in the same geometry. The measured forces are smaller than the calculated ones in the principal direction but are larger than calculated in the normal direction. This combination of results indicates that material and spatial effects other than saturation play roles in determining the force available from an actuator.
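
    The single-magnet force such calculations start from can be illustrated with the ideal magnetic-circuit relations. The sketch below assumes no saturation, leakage, or fringing (exactly the effects the measured discrepancies above point to); the winding and geometry numbers are illustrative, not those of the experiment.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def gap_flux_density(turns, current, gap):
    """Ideal-circuit flux density for a C-core with two air gaps in series:
    B = mu0 * N * I / (2 * g). Core reluctance and leakage neglected."""
    return MU0 * turns * current / (2.0 * gap)

def gap_force(B, pole_area):
    """Maxwell-stress attraction across one gap: F = B^2 * A / (2 * mu0)."""
    return B**2 * pole_area / (2.0 * MU0)

# Illustrative numbers: 100 turns, 1 A, 1 mm gaps, 1 cm^2 pole faces
B = gap_flux_density(100, 1.0, 1e-3)
F = 2 * gap_force(B, 1e-4)  # both pole faces of the horseshoe contribute
```

Saturation, hysteresis, and fringing make the real force deviate from this estimate, which is why the measured-versus-calculated comparison in the paper is informative.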

  15. Development of a risk-based environmental management tool for drilling discharges. Summary of a four-year project.

    PubMed

    Singsaas, Ivar; Rye, Henrik; Frost, Tone Karin; Smit, Mathijs G D; Garpestad, Eimund; Skare, Ingvild; Bakke, Knut; Veiga, Leticia Falcao; Buffagni, Melania; Follum, Odd-Arne; Johnsen, Ståle; Moltu, Ulf-Einar; Reed, Mark

    2008-04-01

    This paper briefly summarizes the ERMS project and presents the developed model by showing results from environmental fates and risk calculations of a discharge from offshore drilling operations. The developed model calculates environmental risks for the water column and sediments resulting from exposure to toxic stressors (e.g., chemicals) and nontoxic stressors (e.g., suspended particles, sediment burial). The approach is based on existing risk assessment techniques described in the European Union technical guidance document on risk assessment and species sensitivity distributions. The model calculates an environmental impact factor, which characterizes the overall potential impact on the marine environment in terms of potentially impacted water volume and sediment area. The ERMS project started in 2003 and was finalized in 2007. In total, 28 scientific reports and 9 scientific papers have been delivered from the ERMS project (http://www.sintef.no/erms).

  16. [Measurement and analysis on complex refraction indices of pear pollen in infrared band].

    PubMed

    Li, Le; Hu, Yi-hua; Gu, You-lin; Chen, Wei; Zhao, Yi-zheng; Chen, Shan-jing

    2015-01-01

    Pollen is an important component of bioaerosols, and its complex refractive index is a crucial parameter for studies of the optical characteristics, detection and identification of bioaerosols. The reflection spectra of pear pollen within the 2.5-15 μm waveband were measured by the squash method. Based on the measured data, the complex refractive index of pear pollen within the 2.5-15 μm waveband was calculated using the Kramers-Kronig (K-K) relation, and the calculation deviations associated with the incident angle and with different reflectivities at high and low frequencies were analyzed. The results indicate that the 18° angle of incidence and the different reflectivities at high and low frequencies have little effect on the results, and that it is practicable to calculate the complex refractive index of pollen from its reflection spectral data. The complex refractive index data of pollen have reference value for the optical characterization of pollen and the detection and identification of bioaerosols.
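
    The K-K step described above can be sketched numerically. The fragment below is a minimal normal-incidence version, assuming reflectance R(ω) sampled on a uniform grid; the principal value is handled crudely by skipping the singular sample, and the finite-band extrapolation and incidence-angle corrections the authors analyze are omitted.

```python
import numpy as np

def kk_phase(omega, R):
    """Phase of the reflectance amplitude from a Kramers-Kronig
    principal-value sum on a uniform grid (singular point skipped)."""
    dw = omega[1] - omega[0]
    lnR = np.log(R)
    theta = np.empty_like(omega)
    for i, w0 in enumerate(omega):
        mask = np.arange(omega.size) != i
        theta[i] = -(w0 / np.pi) * np.sum(
            lnR[mask] / (omega[mask] ** 2 - w0 ** 2)) * dw
    return theta

def complex_index(omega, R):
    """n + ik at normal incidence: r = sqrt(R) * exp(i*theta),
    and n_tilde = (1 - r) / (1 + r)."""
    r = np.sqrt(R) * np.exp(1j * kk_phase(omega, R))
    return (1.0 - r) / (1.0 + r)
```

In practice the accuracy of such a reconstruction hinges on how the reflectance is extrapolated outside the measured band, which is the deviation source the abstract discusses.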

  17. Irreducible correlation functions of the S matrix in the coordinate representation: application in calculating Lorentzian half-widths and shifts.

    PubMed

    Ma, Q; Tipping, R H; Boulet, C

    2006-01-07

    By introducing the coordinate representation, the derivation of the perturbation expansion of the Liouville S matrix is formulated in terms of classically behaved autocorrelation functions. Because these functions are characterized by a pair of irreducible tensors, their number is limited to a few. They represent how the overlaps of the potential components change with a time displacement, and under normal conditions, their magnitudes decrease by several orders of magnitude when the displacement reaches several picoseconds. The correlation functions contain all the dynamical information of the collision processes necessary in calculating half-widths and shifts and can be easily derived with high accuracy. Their well-behaved profiles, especially the rapid decrease of the magnitude, enable one to transform easily the dynamical information contained in them from the time domain to the frequency domain. More specifically, because these correlation functions are well time limited, their continuous Fourier transforms should be band limited. The latter can then be accurately replaced by discrete Fourier transforms and calculated with a standard fast Fourier transform method. Besides, one can easily calculate their Cauchy principal-value integrals and derive all functions necessary in calculating half-widths and shifts. A great advantage resulting from introducing the coordinate representation and choosing the correlation functions as the starting point is that one is able to calculate the half-widths and shifts with high accuracy, no matter how complicated the potential models are and no matter what kind of trajectories are chosen. In any case, the convergence of the calculated results is always guaranteed. As a result, with this new method, one can remove some uncertainties incorporated in current width and shift studies. As a test, we present calculated Raman Q linewidths for the N2-N2 pair based on several trajectories, including the more accurate "exact" ones. Finally, by using this new method as a benchmark, we have carried out convergence checks for values calculated with the usual methods and have found that some results in the literature are not converged.
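
    The time-to-frequency argument above (time-limited correlation function, hence band-limited spectrum, hence an accurate discrete FFT) can be illustrated with a model correlation function. The Gaussian below is only a stand-in for the real dynamical correlation functions; its analytic transform provides the check.

```python
import numpy as np

# Time-limited model correlation function (decays within a few "ps")
tau = 1.0
dt = 0.05
t = np.arange(-20.0, 20.0, dt)
C = np.exp(-t**2 / (2 * tau**2))

# Discrete Fourier transform standing in for the continuous one;
# ifftshift puts t = 0 at index 0, as np.fft expects
spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(C))) * dt
freq = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))

# Analytic transform of the Gaussian: tau*sqrt(2*pi)*exp(-2*(pi*f*tau)**2)
analytic = tau * np.sqrt(2 * np.pi) * np.exp(-2 * (np.pi * freq * tau) ** 2)
```

Because the function is effectively zero at the window edges and the spectrum is effectively zero at the Nyquist frequency, the discrete transform reproduces the continuous one to high accuracy, which is the property the method exploits.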

  18. Developing a Treatment Planning Software Based on TG-43U1 Formalism for Cs-137 LDR Brachytherapy.

    PubMed

    Sina, Sedigheh; Faghihi, Reza; Soleimani Meigooni, Ali; Siavashpour, Zahra; Mosleh-Shirazi, Mohammad Amin

    2013-08-01

    The old Treatment Planning Systems (TPSs) used for intracavitary brachytherapy with the Cs-137 Selectron source utilize traditional dose calculation methods, considering each source as a point source. Using such methods introduces significant errors in dose estimation. As of 1995, TG-43 is used as the main dose calculation formalism in TPSs. The purpose of this study is to design and establish treatment planning software for the Cs-137 Selectron brachytherapy source, based on the TG-43U1 formalism, applying the effects of the applicator and dummy spacers. The two software packages used for treatment planning of Cs-137 sources in Iran (STPS and PLATO) are based on the old formalisms. The purpose of this work is to establish and develop a TPS for the Selectron source based on the TG-43 formalism. In this planning system, the dosimetry parameters of each pellet at different positions inside the applicators were obtained with the MCNP4c code. The dose distribution around every combination of active and inactive pellets was then obtained by summing the doses. The accuracy of this algorithm was checked by comparing its results for special combinations of active and inactive pellets with MC simulations. Finally, the uncertainty of the old dose calculation formalism was investigated by comparing the results of the STPS and PLATO software with those obtained by the new algorithm. For a typical arrangement of 10 active pellets in the applicator, the percentage difference between doses obtained by the new algorithm at 1 cm distance from the tip of the applicator and those obtained by the old formalisms is about 30%, while the difference between the results of MCNP and the new algorithm is less than 5%. According to the results, the old dosimetry formalisms overestimate the dose, especially towards the applicator's tip, while the TG-43U1-based software performs the calculations more accurately.
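
    The pellet-summation idea can be sketched with the point-source form of TG-43 (anisotropy omitted). All numerical constants and the g(r) function below are placeholders, not the consensus Cs-137 data such software would tabulate per pellet position.

```python
import numpy as np

S_K = 1.0      # air-kerma strength per active pellet (U) -- placeholder
LAMBDA = 1.1   # dose-rate constant (cGy h^-1 U^-1) -- placeholder
R0 = 1.0       # TG-43 reference distance (cm)

def g(r):
    """Placeholder radial dose function, normalized to 1 at r = R0."""
    return 1.0 - 0.005 * (r - R0)

def dose_rate(point, pellet_positions, active):
    """Point-source TG-43 dose rate summed over the active pellets:
    D(r) = S_K * Lambda * (R0/r)^2 * g(r) per pellet."""
    total = 0.0
    for pos, is_active in zip(pellet_positions, active):
        if not is_active:
            continue
        r = np.linalg.norm(np.asarray(point, float) - np.asarray(pos, float))
        total += S_K * LAMBDA * (R0 / r) ** 2 * g(r)
    return total

# 10 active pellets spaced 2.5 mm apart along the applicator axis
pellets = [(0.0, 0.0, 0.25 * i) for i in range(10)]
d = dose_rate((1.0, 0.0, 1.125), pellets, active=[True] * 10)
```

The software described in the abstract additionally tabulates per-pellet parameters that include applicator and dummy-spacer effects, which is what distinguishes it from this bare superposition.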

  19. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J; Grassberger, C; Paganetti, H

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions, and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
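
    The range metrics compared above (R90, R50, and the R80-R20 falloff) all reduce to locating distal crossings of a sampled depth-dose curve. A minimal sketch, assuming a curve with a single distal falloff:

```python
import numpy as np

def distal_crossing(depth, dose, level):
    """Distal depth where the dose falls through `level` (a fraction of
    the maximum), with linear interpolation between bracketing samples."""
    dose = np.asarray(dose, dtype=float) / np.max(dose)
    idx = np.where(dose >= level)[0][-1]   # last sample still >= level
    if idx == len(dose) - 1:
        return depth[idx]
    d0, d1 = depth[idx], depth[idx + 1]
    f0, f1 = dose[idx], dose[idx + 1]
    return d0 + (f0 - level) * (d1 - d0) / (f0 - f1)

# Synthetic curve: flat plateau to 5 cm, then linear falloff to zero at 10 cm
depth = np.linspace(0.0, 10.0, 1001)
dose = np.where(depth <= 5.0, 1.0, (10.0 - depth) / 5.0)
r90 = distal_crossing(depth, dose, 0.90)
r50 = distal_crossing(depth, dose, 0.50)
falloff = distal_crossing(depth, dose, 0.20) - distal_crossing(depth, dose, 0.80)
```

Applying the same crossing search to the analytical and the Monte Carlo curve and differencing the results gives the per-field range differences the study tabulates.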

  20. RadShield: semiautomated shielding design using a floor plan driven graphical user interface

    PubMed Central

    Wu, Dee H.; Yang, Kai; Rutel, Isaac B.

    2016-01-01

    The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations are validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations were derived giving the maximum air-kerma rate magnitude and location through a first-derivative root-finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach, and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate were within 0.00124% and 14 mm, respectively. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, differences in locating the most sensitive calculation point on the floor plan for one of the barriers were not considered in the medical physicist's calculation and were revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology. Visual inspection alone of the 2D X-ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air-kerma rate or barrier thickness. PACS number(s): 87.55.N, 87.52.-g, 87.59.Bh, 87.57.-s PMID:27685128

  1. RadShield: semiautomated shielding design using a floor plan driven graphical user interface.

    PubMed

    DeLorenzo, Matthew C; Wu, Dee H; Yang, Kai; Rutel, Isaac B

    2016-09-08

    The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations are validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations were derived giving the maximum air-kerma rate magnitude and location through a first-derivative root-finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach, and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate were within 0.00124% and 14 mm, respectively. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, differences in locating the most sensitive calculation point on the floor plan for one of the barriers were not considered in the medical physicist's calculation and were revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology. Visual inspection alone of the 2D X-ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air-kerma rate or barrier thickness. © 2016 The Authors.
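
    The NCRP Report 147 arithmetic that such a tool evaluates at each calculation point can be sketched in two steps: a required broad-beam transmission, then an inversion of the Archer transmission fit for thickness. The fitting parameters below are arbitrary placeholders; real values are tabulated per tube potential and barrier material.

```python
import math

def required_transmission(P, d, T, K1, N):
    """NCRP 147 pre-shielding transmission goal B = P*d^2 / (T*K1*N):
    P shielding design goal (mGy/wk), d distance (m), T occupancy factor,
    K1 air kerma per patient at 1 m (mGy), N patients per week."""
    return P * d**2 / (T * K1 * N)

def archer_thickness(B, alpha, beta, gamma):
    """Barrier thickness from the Archer transmission model
    B(x) = [(1 + b/a)*exp(a*g*x) - b/a]**(-1/g), inverted for x."""
    return math.log((B ** (-gamma) + beta / alpha)
                    / (1.0 + beta / alpha)) / (alpha * gamma)

# Illustrative (untabulated) numbers: alpha, beta in mm^-1 -> x in mm
B = required_transmission(P=0.02, d=2.0, T=0.5, K1=1.0, N=100)
x_mm = archer_thickness(B, alpha=2.5, beta=15.0, gamma=0.5)
```

RadShield's contribution, per the abstract, is doing this at many spatially distributed points per barrier and reporting the maximum, rather than at a single hand-picked point.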

  2. Molecular structure, spectroscopic studies and first-order molecular hyperpolarizabilities of ferulic acid by density functional study

    NASA Astrophysics Data System (ADS)

    Sebastian, S.; Sundaraganesan, N.; Manoharan, S.

    2009-10-01

    Quantum chemical calculations of the energies, geometrical structure and vibrational wavenumbers of ferulic acid (FA) (4-hydroxy-3-methoxycinnamic acid) were carried out by using the density functional (DFT/B3LYP/BLYP) method with 6-31G(d,p) as the basis set. The optimized geometrical parameters obtained by DFT calculations are in good agreement with single-crystal XRD data. The vibrational spectral data obtained from solid-phase FT-IR and FT-Raman spectra are assigned based on the results of the theoretical calculations. The observed spectra are found to be in good agreement with the calculated values. The electric dipole moment (μ) and the first hyperpolarizability (β) values of the investigated molecule have been computed using ab initio quantum mechanical calculations. The calculation results also show that the FA molecule might exhibit microscopic nonlinear optical (NLO) behavior with non-zero values. A detailed interpretation of the infrared and Raman spectra of FA is also reported. The energies and oscillator strengths calculated by time-dependent density functional theory (TD-DFT) complement the experimental findings. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The theoretical FT-IR and FT-Raman spectra for the title molecule have also been constructed.

  3. Activity in the fronto-parietal network indicates numerical inductive reasoning beyond calculation: An fMRI study combined with a cognitive model

    PubMed Central

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A.; Borst, Jelmer P.; Li, Kuncheng

    2016-01-01

    Numerical inductive reasoning refers to the process of identifying and extrapolating the rule involved in numeric materials. It is associated with calculation, and shares the common activation of the fronto-parietal regions with calculation, which suggests that numerical inductive reasoning may correspond to a general calculation process. However, compared with calculation, rule identification is critical and unique to reasoning. Previous studies have established the central role of the fronto-parietal network for relational integration during rule identification in numerical inductive reasoning. The current question of interest is whether numerical inductive reasoning exclusively corresponds to calculation or operates beyond calculation, and whether it is possible to distinguish between them based on the activity pattern in the fronto-parietal network. To directly address this issue, three types of problems were created: numerical inductive reasoning, calculation, and perceptual judgment. Our results showed that the fronto-parietal network was more active in numerical inductive reasoning which requires more exchanges between intermediate representations and long-term declarative knowledge during rule identification. These results survived even after controlling for the covariates of response time and error rate. A computational cognitive model was developed using the cognitive architecture ACT-R to account for the behavioral results and brain activity in the fronto-parietal network. PMID:27193284

  4. Activity in the fronto-parietal network indicates numerical inductive reasoning beyond calculation: An fMRI study combined with a cognitive model.

    PubMed

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A; Borst, Jelmer P; Li, Kuncheng

    2016-05-19

    Numerical inductive reasoning refers to the process of identifying and extrapolating the rule involved in numeric materials. It is associated with calculation, and shares the common activation of the fronto-parietal regions with calculation, which suggests that numerical inductive reasoning may correspond to a general calculation process. However, compared with calculation, rule identification is critical and unique to reasoning. Previous studies have established the central role of the fronto-parietal network for relational integration during rule identification in numerical inductive reasoning. The current question of interest is whether numerical inductive reasoning exclusively corresponds to calculation or operates beyond calculation, and whether it is possible to distinguish between them based on the activity pattern in the fronto-parietal network. To directly address this issue, three types of problems were created: numerical inductive reasoning, calculation, and perceptual judgment. Our results showed that the fronto-parietal network was more active in numerical inductive reasoning which requires more exchanges between intermediate representations and long-term declarative knowledge during rule identification. These results survived even after controlling for the covariates of response time and error rate. A computational cognitive model was developed using the cognitive architecture ACT-R to account for the behavioral results and brain activity in the fronto-parietal network.

  5. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
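
    The attenuation-based size estimate rests on the fact that the log-attenuation line integrals of a projection sum to μ_water times the water-equivalent cross-sectional area. A minimal sketch (the μ_water value is illustrative, and beam-hardening and scatter corrections are ignored):

```python
import numpy as np

MU_WATER = 0.2  # cm^-1, illustrative effective linear attenuation coefficient

def effective_diameter(log_attenuation, pixel_cm):
    """Water-equivalent diameter from one projection profile of
    ln(I0/I) values: area = sum * pixel / mu_water, D = 2*sqrt(area/pi)."""
    area = np.sum(log_attenuation) * pixel_cm / MU_WATER  # cm^2
    return 2.0 * np.sqrt(area / np.pi)
```

For a water cylinder the recovered diameter matches the physical one, which is the calibration check the phantom measurements in the study provide.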

  6.  The application of computational chemistry to lignin

    Treesearch

    Thomas Elder; Laura Berstis; Nele Sophie Zwirchmayr; Gregg T. Beckham; Michael F. Crowley

    2017-01-01

    Computational chemical methods have become an important technique in the examination of the structure and reactivity of lignin. The calculations can be based either on classical or quantum mechanics, with concomitant differences in computational intensity and size restrictions. The current paper will concentrate on results developed from the latter type of calculations...

  7. Tree value system: users guide.

    Treesearch

    J.K. Ayer Sachet; D.G. Briggs; R.D. Fight

    1989-01-01

    This paper instructs resource analysts on use of the Tree Value System (TREEVAL). TREEVAL is a microcomputer system of programs for calculating tree or stand values and volumes based on predicted product recovery. Designed for analyzing silvicultural decisions, the system can also be used for appraisals and for evaluating log bucking. The system calculates results...

  8. ARS-Media: A spreadsheet tool for calculating media recipes based on ion-specific constraints

    USDA-ARS?s Scientific Manuscript database

    ARS-Media is an ion solution calculator that uses Microsoft Excel to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Thus, the recipes are generated using ...
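
    The recipe problem has the shape "salt composition matrix × salt amounts = target ion amounts". ARS-Media solves it as a linear program in Excel; the sketch below shows the idea on a tiny hypothetical salt set using least squares, which coincides with the exact solution when the targets are consistent (a real solver would also enforce nonnegative amounts).

```python
import numpy as np

# Columns: salts (hypothetical subset): KNO3, Ca(NO3)2, KCl
# Rows: mmol of each ion supplied per mmol of salt: K+, NO3-, Ca2+, Cl-
S = np.array([
    [1.0, 0.0, 1.0],   # K+
    [1.0, 2.0, 0.0],   # NO3-
    [0.0, 1.0, 0.0],   # Ca2+
    [0.0, 0.0, 1.0],   # Cl-
])
target = np.array([6.0, 9.0, 2.0, 1.0])  # desired ion totals (mmol/L)

# Least-squares recipe; exact here because the targets are attainable
recipe, *_ = np.linalg.lstsq(S, target, rcond=None)
```

When the targets are not exactly attainable from the available salts, the problem becomes the constrained-optimization case the spreadsheet's linear-programming formulation handles.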

  9. Prediction of Combustion Gas Deposit Compositions

    NASA Technical Reports Server (NTRS)

    Kohl, F. J.; Mcbride, B. J.; Zeleznik, F. J.; Gordon, S.

    1985-01-01

    Demonstrated procedure used to predict accurately chemical compositions of complicated deposit mixtures. NASA Lewis Research Center's Computer Program for Calculation of Complex Chemical Equilibrium Compositions (CEC) used in conjunction with Computer Program for Calculation of Ideal Gas Thermodynamic Data (PAC) and resulting Thermodynamic Data Base (THDATA) to predict deposit compositions from metal or mineral-seeded combustion processes.

  10. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges, which motivates the need for future work on uncertainty quantification methods for high-dimensional parameter spaces.

  11. Head rice rate measurement based on concave point matching

    PubMed Central

    Yao, Yuan; Wu, Wei; Yang, Tianle; Liu, Tao; Chen, Wen; Chen, Chen; Li, Rui; Zhou, Tong; Sun, Chengming; Zhou, Yue; Li, Xinlu

    2017-01-01

    Head rice rate is an important factor affecting rice quality. In this study, an inflection-point-detection-based technology was applied to measure the head rice rate, combining a vibrator and a conveyor belt for bulk grain image acquisition. The edge center mode proportion method (ECMP) was applied for concave-point matching, in which concave matching and separation were performed under collaborative constraint conditions, followed by rice length calculation with a minimum enclosing rectangle (MER) to identify the head rice. Finally, the head rice rate was calculated as the ratio of the total area of head rice to the overall coverage of rice. Results showed that bulk grain image acquisition can be realized with the test equipment, and the accuracy rate of separation of both indica rice and japonica rice exceeded 95%. An increase in the number of rice grains did not significantly affect ECMP and MER. High accuracy can be ensured with the MER in calculating head rice rate, keeping the relative error from the real values below 3%. The test results show that the method is reliable as a reference for head rice rate calculation studies. PMID:28128315
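
    Once kernels are separated and their lengths and areas measured, the final rate is a simple area ratio over length-classified kernels. A minimal sketch, assuming the usual three-quarters-of-whole-grain-length convention for head rice:

```python
def head_rice_rate(grains, whole_length, frac=0.75):
    """grains: iterable of (length, area) per separated kernel.
    A kernel counts as head rice when its length is at least `frac`
    of the whole-grain length; the rate is head area / total area."""
    head_area = sum(a for l, a in grains if l >= frac * whole_length)
    total_area = sum(a for _, a in grains)
    return head_area / total_area
```

The hard part the paper addresses is upstream of this ratio: separating touching kernels in bulk images via concave-point matching so that the per-kernel lengths and areas are trustworthy.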

  12. Sizable band gap in organometallic topological insulator

    NASA Astrophysics Data System (ADS)

    Derakhshan, V.; Ketabi, S. A.

    2017-01-01

    Based on first principle calculation when Ceperley-Alder and Perdew-Burke-Ernzerh type exchange-correlation energy functional were adopted to LSDA and GGA calculation, electronic properties of organometallic honeycomb lattice as a two-dimensional topological insulator was calculated. In the presence of spin-orbit interaction bulk band gap of organometallic lattice with heavy metals such as Au, Hg, Pt and Tl atoms were investigated. Our results show that the organometallic topological insulator which is made of Mercury atom shows the wide bulk band gap of about ∼120 meV. Moreover, by fitting the conduction and valence bands to the band-structure which are produced by Density Functional Theory, spin-orbit interaction parameters were extracted. Based on calculated parameters, gapless edge states within bulk insulating gap are indeed found for finite width strip of two-dimensional organometallic topological insulators.

  13. Comparative evaluation of hemodynamic and respiratory parameters during mechanical ventilation with two tidal volumes calculated by demi-span based height and measured height in normal lungs.

    PubMed

    Seresht, L Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad

    2014-01-01

    Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters under two conditions, with VTs calculated from body weight (BW) estimated either from measured height (HBW) or from demi-span-based height (DBW). This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) I and II, 18-65-year-old patients. Standing height and weight were measured, and height was then calculated using the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome network formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Forty-nine patients were studied. Demi-span-based body weight, and thus VT (DTV), was lower than height-based body weight and VT (HTV) (P = 0.028), in male patients (P = 0.005). Differences were observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Estimated VT based on measured height is higher than that based on demi-span, this difference exists only in females, and the higher VT results in higher airway pressures during mechanical ventilation.
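
    The two VT pathways compared in the study can be sketched as follows. The ARDSNet predicted-body-weight formula is standard; the demi-span regression coefficients below are illustrative examples only, not the exact equations the authors used.

```python
def height_from_demispan(demispan_cm, male):
    """Illustrative demi-span-to-height regression (example coefficients,
    not the study's equations)."""
    return 1.40 * demispan_cm + 57.8 if male else 1.35 * demispan_cm + 60.1

def predicted_body_weight(height_cm, male):
    """ARDSNet predicted body weight (kg)."""
    return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)

def tidal_volume_ml(height_cm, male, ml_per_kg=6.0):
    """Lung-protective tidal volume from predicted body weight."""
    return ml_per_kg * predicted_body_weight(height_cm, male)

# Comparing the two pathways for one (hypothetical) patient:
vt_measured = tidal_volume_ml(170.0, male=True)
vt_demispan = tidal_volume_ml(height_from_demispan(78.0, male=True), male=True)
```

Because VT is linear in estimated height, any systematic offset between measured and demi-span-derived height propagates directly into the delivered volume, which is the clinical point of the comparison.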

  14. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    PubMed Central

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

    AIM It is difficult for optometrists and general practitioners to know which patients are at risk. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator that has been developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator's risk assessment. RESULTS Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
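
    The reported validation statistics come from the standard confusion-matrix definitions, which are straightforward to recompute. A sketch (the counts below are hypothetical, not the study's 400-patient data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening-accuracy measures from confusion-matrix counts:
    tp/fp true/false positives, tn/fn true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

metrics = screening_metrics(tp=80, fp=10, tn=90, fn=20)
```

Note that, unlike sensitivity and specificity, the predictive values depend on the disease prevalence in the validation sample, which matters when extrapolating a case-review validation to a screening population.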

  15. Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings

    NASA Astrophysics Data System (ADS)

    Ucun, Fatih; Tokatlı, Ahmet

    2015-02-01

    In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly above the geometrical center of the relevant ring plane at a distance of 1.2 Å. The results were compared with other commonly used aromaticity indices, such as HOMA, NICS, PDI, FLU, MCI and CTED, and were generally found to be in agreement with them. It is therefore proposed that the calculation of the average g-factor, Δg, can be applied to study the aromaticity of polycyclic benzene rings, without any restriction on the number of benzene rings, as a new magnetic-based aromaticity index.

  16. DFT calculation of pKa’s for dimethoxypyrimidinylsalicylic based herbicides

    NASA Astrophysics Data System (ADS)

    Delgado, Eduardo J.

    2009-03-01

    Dimethoxypyrimidinylsalicylic-derived compounds show potent herbicidal activity as a result of the inhibition of acetohydroxyacid synthase, the first common enzyme in the biosynthetic pathway of the branched-chain amino acids (valine, leucine and isoleucine) in plants, bacteria and fungi. Despite its practical importance, this family of compounds has been poorly characterized from a physico-chemical point of view; for instance, their pKa's have not previously been reported, either experimentally or theoretically. In this study, the acid-dissociation constants of 39 dimethoxypyrimidinylsalicylic-derived herbicides are calculated by DFT methods at the B3LYP/6-31G(d,p) level of theory. The calculated values are validated by two checking tests based on the Hammett equation.
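Thermodynamic-cycle pKa calculations of this kind ultimately reduce to the relation pKa = ΔG(aq)/(RT ln 10), where ΔG(aq) is the aqueous deprotonation free energy obtained from the DFT calculations. A minimal sketch with a hypothetical free energy value:

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)
T = 298.15          # temperature, K

def pka_from_dg(dg_kj_mol: float) -> float:
    """pKa from the aqueous deprotonation free energy:
    pKa = dG(aq) / (R * T * ln 10)."""
    return dg_kj_mol / (R * T * math.log(10.0))

# Hypothetical deprotonation free energy of 20 kJ/mol:
pka = pka_from_dg(20.0)
```

At 298 K each pKa unit corresponds to about 5.7 kJ/mol, which is why small errors in the computed ΔG translate into sizeable pKa errors and why empirical validation (here via the Hammett equation) matters.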

  17. Effect of germanium concentrations on tunnelling current calculation of Si/Si1-xGex/Si heterojunction bipolar transistor

    NASA Astrophysics Data System (ADS)

    Hasanah, L.; Suhendi, E.; Khairrurijal

    2018-05-01

    Tunnelling current calculation for a Si/Si1-xGex/Si heterojunction bipolar transistor was carried out by including the coupling between the transversal and longitudinal components of the electron motion. The calculation results indicated that the coupling between the kinetic energies parallel and perpendicular to the Si1-xGex barrier surface affected the tunneling current significantly when the electron velocity exceeded 1 × 10^5 m/s. This analytical tunneling current model was then used to study how the germanium concentration in the base of the Si/Si1-xGex/Si heterojunction bipolar transistor influences the tunneling current. It is found that the tunneling current increases as the germanium concentration in the base decreases.

  18. Calculation of the exchange coupling constants of copper binuclear systems based on spin-flip constricted variational density functional theory.

    PubMed

    Zhekova, Hristina R; Seth, Michael; Ziegler, Tom

    2011-11-14

    We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave-function-based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional consistently yields J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics

  19. Nature of adsorption on TiC(111) investigated with density-functional calculations

    NASA Astrophysics Data System (ADS)

    Ruberto, Carlo; Lundqvist, Bengt I.

    2007-06-01

    Extensive density-functional calculations are performed for chemisorption of atoms in the first three periods (H, B, C, N, O, F, Al, Si, P, S, and Cl) on the polar TiC(111) surface. Calculations are also performed for O on TiC(001), for a full O(1×1) monolayer on TiC(111), as well as for bulk TiC and for the clean TiC(111) and (001) surfaces. Detailed results concerning atomic structures, energetics, and electronic structures are presented. For the bulk and the clean surfaces, previous results are confirmed. In addition, detailed results are given on the presence of C-C bonds in the bulk and at the surface, as well as on the presence of a Ti-based surface resonance (TiSR) at the Fermi level and of C-based surface resonances (CSR's) in the lower part of the surface upper valence band. For the adsorption, adsorption energies Eads and relaxed geometries are presented, showing great variations characterized by pyramid-shaped Eads trends within each period. An extraordinarily strong chemisorption is found for the O atom, 8.8 eV/adatom. On the basis of the calculated electronic structures, a concerted-coupling model for the chemisorption is proposed, in which two different types of adatom-substrate interactions work together to provide the obtained strong chemisorption: (i) adatom-TiSR and (ii) adatom-CSR's. This model is used to successfully describe the essential features of the calculated Eads trends. The fundamental nature of this model, based on the Newns-Anderson model, should make it apt for general application to transition-metal carbides and nitrides and for predictive purposes in technological applications, such as cutting-tool multilayer coatings and MAX phases.

  20. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
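The nonstandard tests the review identifies can be illustrated by computing MBI-style "chances" that the true effect is beneficial, trivial, or harmful from a point estimate under a normal approximation. This is a sketch of the general idea, not the sportsci.org spreadsheet; the estimate, standard error, and smallest-worthwhile-change values are invented:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mbi_chances(estimate: float, se: float, swc: float):
    """Chances that the true effect exceeds +swc (beneficial), lies within
    [-swc, +swc] (trivial), or falls below -swc (harmful), under a normal
    sampling model centred on the observed estimate."""
    p_beneficial = 1.0 - normal_cdf((swc - estimate) / se)
    p_harmful = normal_cdf((-swc - estimate) / se)
    p_trivial = 1.0 - p_beneficial - p_harmful
    return p_beneficial, p_trivial, p_harmful

# Invented example: observed difference 1.0, SE 0.8, smallest worthwhile change 0.5
b, t, h = mbi_chances(estimate=1.0, se=0.8, swc=0.5)
```

As the review notes, each such probability is interpretable as a P value for a shifted null hypothesis (effect = ±swc) rather than as a statement with an independent inferential guarantee.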

  1. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  2. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-MOX, R2-UO2 and MORGANE/R configurations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Klann, R. T.; Nuclear Engineering Division

    2007-08-03

    An initial series of calculations of the reactivity-worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.

  3. Application of adjusted data in calculating fission-product decay energies and spectra

    NASA Astrophysics Data System (ADS)

    George, D. C.; Labauve, R. J.; England, T. R.

    1982-06-01

    The code ADENA, which approximately calculates fission-product beta and gamma decay energies and spectra in 19 or fewer energy groups from a mixture of U235 and Pu239 fuels, is described. The calculation uses aggregate, adjusted data derived from a combination of several experiments and summation results based on the ENDF/B-V fission product file. The method used to obtain these adjusted data and the method used by ADENA to calculate fission-product decay energy with an absorption correction are described, and an estimate of the uncertainty of the ADENA results is given. Comparisons of this approximate method are made to experimental measurements, to the ANSI/ANS 5.1-1979 standard, and to other calculational methods. A listing of the complete computer code (ADENA) is contained in an appendix. Included in the listing are data statements containing the adjusted data in the form of parameters to be used in simple analytic functions.

  4. Open-ended recursive calculation of single residues of response functions for perturbation-dependent basis sets.

    PubMed

    Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth

    2015-10-13

    We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for other perturbations than an electric field. We apply this scheme for the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.

  5. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods were also tested, and it is concluded that the presented iteration method is near-optimal.
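The coupling scheme can be caricatured with a scalar stand-in for the Monte Carlo solve. The linear growth in histories and the history-weighted, decreasing relaxation factor follow the stochastic-iteration idea the abstract builds on; all numbers here are illustrative:

```python
import random

def coupled_iteration(true_power=1.0, n0=1000, steps=12, seed=1):
    """Stochastic-iteration sketch: each step runs a mock 'Monte Carlo' solve
    whose noise shrinks as 1/sqrt(histories), then under-relaxes the running
    power estimate:  p <- (1 - alpha)*p + alpha*phi.
    Histories grow linearly with the step index, and alpha_i = n_i / sum(n_j)
    weights each estimate by its history count, so alpha decreases."""
    rng = random.Random(seed)
    p = 0.0
    total_histories = 0
    for i in range(1, steps + 1):
        histories = n0 * i                          # growing MC batch size
        noise = rng.gauss(0.0, 1.0) / histories ** 0.5
        phi = true_power + noise                    # noisy MC power estimate
        total_histories += histories
        alpha = histories / total_histories         # decreasing relaxation factor
        p = (1.0 - alpha) * p + alpha * phi
    return p

p_final = coupled_iteration()
```

With this weighting the running estimate is the history-weighted average of all batches, so no early, noisy batch dominates and no computing effort is discarded.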

  6. Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel

    NASA Astrophysics Data System (ADS)

    Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa

    This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground and on the ground surface. The vibration generated in the tunnel was calculated by means of a train/track/tunnel interaction model and was compared with the measurement results. Wave propagation in the ground, in turn, was calculated using an empirical model based on the relationship between frequency and the material damping coefficient α, in order to predict attenuation in the ground with frequency characteristics taken into account. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, combining the train/track/tunnel interaction model and the wave propagation model, is applicable to the prediction of train-induced vibration propagating from railway tunnels.

  7. Comment on ``Symmetry and structure of quantized vortices in superfluid 3He''

    NASA Astrophysics Data System (ADS)

    Sauls, J. A.; Serene, J. W.

    1985-10-01

    Recent theoretical attempts to explain the observed vortex-core phase transition in superfluid 3He-B yield conflicting results. Variational calculations by Fetter and Theodorakis, based on realistic strong-coupling parameters, yield a phase transition in the Ginzburg-Landau region that is in qualitative agreement with the phase diagram. Numerically precise calculations by Salomaa and Volovik (SV), based on the Brinkman-Serene-Anderson (BSA) parameters, do not yield a phase transition between axially symmetric vortices. The ambiguity of these results is in part due to the large differences between the β parameters, which are inputs to the vortex free-energy functional. We comment on the relative merits of the β parameters based on recent improvements in the quasiparticle scattering amplitude and the BSA parameters used by SV.

  8. Calculation of the detection limits for radionuclides identified in gamma-ray spectra based on post-processing peak analysis results.

    PubMed

    Korun, M; Vodenik, B; Zorko, B

    2018-03-01

    A new method for calculating the detection limits of gamma-ray spectrometry measurements is presented. The method is applicable for gamma-ray emitters irrespective of the influence of a peaked background, the origin of the background, or overlap with other peaks. For multi-gamma-ray emitters, it offers the opportunity to calculate a common detection limit corresponding to several peaks. The detection limit is calculated by approximating the dependence of the uncertainty of the indication on its value with a second-order polynomial. In this approach the relation between the input quantities and the detection limit is described by an explicit expression and can be easily investigated. The detection limit is calculated from the data usually provided in the reports of peak-analyzing programs: the peak areas and their uncertainties. As a result, the need to use individual channel contents for calculating the detection limit is bypassed. Copyright © 2017 Elsevier Ltd. All rights reserved.
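The general shape of such a calculation can be sketched in ISO 11929 style: model the uncertainty of the indication as a second-order polynomial and solve y# = k·u(0) + k·u(y#). The paper derives an explicit expression; the sketch below solves the same equation by fixed-point iteration instead, and the coefficients are illustrative:

```python
import math

def detection_limit(c0, c1, c2, k=1.645, tol=1e-10):
    """Detection limit y# solving  y# = k*u(0) + k*u(y#),
    with the uncertainty of the indication modelled as the second-order
    polynomial  u(y) = sqrt(c0 + c1*y + c2*y**2).
    Solved by fixed-point iteration (the paper uses a closed form)."""
    u0 = math.sqrt(c0)
    y = 2.0 * k * u0                      # starting guess: twice the threshold
    while True:
        y_new = k * u0 + k * math.sqrt(c0 + c1 * y + c2 * y * y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

# Pure counting statistics as a special case: u^2(y) = B + y,
# i.e. c0 = background variance, c1 = 1, c2 = 0.
yd = detection_limit(c0=100.0, c1=1.0, c2=0.0)
```

The inputs (c0, c1, c2) can be fitted from the peak areas and uncertainties a peak-analysis report provides, which is the point of the method: no individual channel contents are needed.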

  9. Nonlinear optimization method of ship floating condition calculation in wave based on vector

    NASA Astrophysics Data System (ADS)

    Ding, Ning; Yu, Jian-xing

    2014-08-01

    The floating condition of a ship in regular waves is calculated. New equations governing any ship's floating condition are proposed by use of vector operations. The resulting formulation is a nonlinear optimization problem, which can be solved using the penalty function method with constant coefficients; the solving process is accelerated by dichotomy. During the solving process, the ship's displacement and center of buoyancy are calculated by integration of the ship surface up to the waterline. The ship surface is described using an accumulative chord-length theory in order to determine the displacement, the center of buoyancy and the waterline. The draught forming the waterline at each station can be found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship's floating condition in regular waves, simplify the calculation, and improve both the computational efficiency and the precision of the results.
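The role of dichotomy in such a solve can be illustrated on the simplest floating-condition problem: bisecting on draught until buoyancy balances weight, here for a box-shaped barge in still water. The barge and its dimensions are my own illustration, not the paper's hull model or wave formulation:

```python
RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def buoyancy_n(draught_m, length_m, beam_m):
    """Buoyant force (N) of a box barge at a given draught in still water."""
    return RHO * G * length_m * beam_m * draught_m

def equilibrium_draught(weight_n, length_m, beam_m, depth_m, tol=1e-9):
    """Dichotomy (bisection) on draught until buoyancy balances weight."""
    lo, hi = 0.0, depth_m
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if buoyancy_n(mid, length_m, beam_m) < weight_n:
            lo = mid      # floats too high: needs more draught
        else:
            hi = mid      # floats too low: needs less draught
    return 0.5 * (lo + hi)

# Hypothetical 1000-tonne barge, 50 m long and 10 m wide:
d = equilibrium_draught(1.0e6 * G, 50.0, 10.0, depth_m=5.0)
```

For a real hull the buoyancy integral over the wetted surface replaces the box formula, but the bracketing logic of the dichotomy step is the same.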

  10. Half-Lives of Proton Emitters With a Deformed Density-Dependent Model

    NASA Astrophysics Data System (ADS)

    Qian, Yi-Bin; Ren, Zhong-Zhou; Ni, Dong-Dong; Sheng, Zong-Qiang

    2010-11-01

    Half-lives of proton radioactivity are investigated with a deformed density-dependent model. The single folding potential which is dependent on deformation and orientation is employed to calculate the proton decay width through the deformed potential barrier. In addition, the spectroscopic factor is taken into account in the calculation, which is obtained in the relativistic mean field theory with NL3. The calculated results of semi-spherical nuclei are found to be in good agreement with the experimental data, and the results of well-deformed nuclei are also satisfactory. Moreover, a formula for the spherical proton emission half-life based on the Gamow quantum tunneling theory is presented.

  11. Application of Van Der Waals Density Functional Theory to Study Physical Properties of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Conroy, M. W.; Budzevich, M. M.; Lin, Y.; Oleynik, I. I.; White, C. T.

    2009-12-01

    An empirical correction to account for van der Waals interactions based on the work of Neumann and Perrin [J. Phys. Chem. B 109, 15531 (2005)] was applied to density functional theory calculations of energetic molecular crystals. The calculated equilibrium unit-cell volumes of FOX-7, β-HMX, solid nitromethane, PETN-I, α-RDX, and TATB show a significant improvement in the agreement with experimental results. Hydrostatic-compression simulations of β-HMX, PETN-I, and α-RDX were also performed. The isothermal equations of state calculated from the results show increased agreement with experiment in the pressure intervals studied.

  12. Density functional calculations of the Mössbauer parameters in hexagonal ferrite SrFe12O19

    NASA Astrophysics Data System (ADS)

    Ikeno, Hidekazu

    2018-03-01

    Mössbauer parameters in a magnetoplumbite-type hexagonal ferrite, SrFe12O19, are computed using the all-electron band structure calculation based on the density functional theory. The theoretical isomer shift and quadrupole splitting are consistent with experimentally obtained values. The absolute values of hyperfine splitting parameters are found to be underestimated, but the relative scale can be reproduced. The present results validate the site-dependence of Mössbauer parameters obtained by analyzing experimental spectra of hexagonal ferrites. The results also show the usefulness of theoretical calculations for increasing the reliability of interpretation of the Mössbauer spectra.

  13. Solar neutrino masses and mixing from bilinear R-parity broken supersymmetry: Analytical versus numerical results

    NASA Astrophysics Data System (ADS)

    Díaz, M.; Hirsch, M.; Porod, W.; Romão, J.; Valle, J.

    2003-07-01

    We give an analytical calculation of solar neutrino masses and mixing at one-loop order within bilinear R-parity breaking supersymmetry, and compare our results to the exact numerical calculation. Our method is based on a systematic perturbative expansion of R-parity violating vertices to leading order. We find in general quite good agreement between the approximate and full numerical calculations, but the approximate expressions are much simpler to implement. Our formalism works especially well for the case of the large mixing angle Mikheyev-Smirnov-Wolfenstein solution, now strongly favored by the recent KamLAND reactor neutrino data.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, J.R.; Lu, Z.; Ring, D.M.

    We have examined a variety of structures for the {510} symmetric tilt boundary in Si and Ge, using tight-binding and first-principles calculations. These calculations show that the observed structure in Si is the lowest-energy structure, despite the fact that it is more complicated than what is necessary to preserve fourfold coordination. Contrary to calculations using a Tersoff potential, first-principles calculations show that the energy depends strongly upon the structure. A recently developed tight-binding model for Si produces results in very good agreement with the first-principles calculations. Electronic density-of-states calculations based upon this model show no evidence of midgap states and little evidence of electronic states localized to the grain boundary. © 1998 The American Physical Society

  15. Noise in x-ray grating-based phase-contrast imaging.

    PubMed

    Weber, Thomas; Bartl, Peter; Bayer, Florian; Durst, Jürgen; Haas, Wilhelm; Michel, Thilo; Ritter, André; Anton, Gisela

    2011-07-01

    Grating-based x-ray phase-contrast imaging is a fast-developing new modality, not only for medical imaging but also for other fields such as materials science. As these many possible applications arise, knowledge of the noise behavior is essential. In this work, the authors used a least-squares fitting algorithm to calculate the noise behavior of the three quantities: absorption, differential phase, and dark-field image. Further, the calculated error formula for the differential phase image was verified by measurements. To this end, a Talbot interferometer was set up, using a microfocus x-ray tube as the source and a Timepix detector for photon counting. Additionally, simulations regarding this topic were performed. It turned out that the variance of the reconstructed phase depends only on the total number of photons used to generate the phase image and on the visibility of the experimental setup. These results were confirmed in measurements as well as in simulations. Furthermore, the correlation between the absorption and dark-field images was calculated. These results provide an understanding of the noise characteristics of grating-based phase-contrast imaging and will help to improve image quality.
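The reported dependence on total photon number N and visibility V corresponds to the known phase-stepping noise formula σ_φ = sqrt(2/(N·V²)). A small Monte Carlo sketch (Gaussian approximation to Poisson counting noise; all parameters illustrative) reproduces that scaling:

```python
import math
import random

def simulate_phase_std(n_total=10000, visibility=0.3, steps=8,
                       trials=4000, seed=7):
    """Phase stepping: I_k = (N/M)*(1 + V*cos(2*pi*k/M)) with counting noise
    (Poisson approximated as Gaussian since counts per step are large).
    The phase is retrieved from the first Fourier component of the stepping
    curve; the empirical spread is returned for comparison with theory."""
    rng = random.Random(seed)
    mean = n_total / steps
    phases = []
    for _ in range(trials):
        a = b = 0.0
        for k in range(steps):
            theta = 2.0 * math.pi * k / steps
            i_k = mean * (1.0 + visibility * math.cos(theta))
            i_k += rng.gauss(0.0, math.sqrt(i_k))   # counting noise
            a += i_k * math.cos(theta)
            b += i_k * math.sin(theta)
        phases.append(math.atan2(-b, a))            # true phase is 0
    m = sum(phases) / trials
    return (sum((p - m) ** 2 for p in phases) / trials) ** 0.5

empirical = simulate_phase_std()
theory = math.sqrt(2.0 / (10000 * 0.3 ** 2))        # sqrt(2/(N*V^2))
```

Only the product N·V² enters the theoretical value, which is the abstract's central finding: doubling the visibility buys as much phase precision as quadrupling the dose.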

  16. A multidisciplinary study of earth resources imagery of Australia, Antarctica and Papua, New Guinea

    NASA Technical Reports Server (NTRS)

    Fisher, N. H. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. A thirteen-category recognition map was prepared, showing forest, water, grassland, and exposed rock types. Preliminary assessment of classification accuracies showed that water, forest, meadow, and Niobrara shale were the most accurately mapped classes. Unsatisfactory results were obtained in an attempt to discriminate sparse forest cover over different substrates. As base elevation varied from 7,000 to 13,000 ft, with an atmospheric visibility of 48 km, no changes in water and forest recognition were observed. Granodiorite recognition accuracy decreased monotonically as base elevation increased, even though the training-set location was at 10,000 ft elevation. For snow varying in base elevation from 9400 to 8420 ft, recognition decreased from 99% at the 9400 ft training-set elevation to 88% at 8420 ft. Calculations of the expected radiance at the ERTS sensor, from snow reflectance measured at the site and from Turner-model calculations of irradiance, transmission and path radiance, reveal that snow signals should not be clipped, assuming that the calculations and ERTS calibration constants were correct.

  17. Collection Efficiency and Ice Accretion Characteristics of Two Full Scale and One 1/4 Scale Business Jet Horizontal Tails

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Papadakis, Michael

    2005-01-01

    Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.

  18. Adsorption of methanol molecule on graphene: Experimental results and first-principles calculations

    NASA Astrophysics Data System (ADS)

    Zhao, X. W.; Tian, Y. L.; Yue, W. W.; Chen, M. N.; Hu, G. C.; Ren, J. F.; Yuan, X. B.

    2018-04-01

    Adsorption properties of the methanol molecule on a graphene surface are studied both theoretically and experimentally. The adsorption geometries, adsorption energies, band structures, densities of states and effective masses are obtained by means of first-principles calculations. It is found that the electronic characteristics and conductivity of graphene are sensitive to methanol molecule adsorption: after adsorption of a methanol molecule, a bandgap appears. With increasing adsorption distance, the bandgap, adsorption energy and effective mass of the adsorption system decrease, and hence the resistivity of the system decreases gradually; these results are consistent with the experimental results. All these calculations and experiments indicate that graphene-based sensors have a wide range of applications in detecting particular molecules.

  19. Regression analysis for solving diagnosis problem of children's health

    NASA Astrophysics Data System (ADS)

    Cherkashina, Yu A.; Gerget, O. M.

    2016-04-01

    This paper presents the results of research devoted to the application of statistical techniques, namely regression analysis, to assess the health status of children in the neonatal period based on medical data (hemostatic parameters, blood test parameters, gestational age, vascular endothelial growth factor) measured at 3-5 days of life. A detailed description of the studied medical data is given, and a binary logistic regression procedure is discussed. Basic results of the research are presented: a classification table of predicted versus observed values is shown, and the overall percentage of correct recognition is determined. The regression equation coefficients are calculated, and the general regression equation is written based on them. Based on the results of the logistic regression, ROC analysis was performed; the sensitivity and specificity of the model are calculated and ROC curves are constructed. These mathematical techniques allow the diagnosis of children's health with a high quality of recognition. The results make a significant contribution to the development of evidence-based medicine and have high practical importance in the professional activity of the author.
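The pipeline described above can be sketched end to end: fit a binary logistic regression by gradient descent, then build a classification table and read off sensitivity and specificity. The single "biomarker" predictor and its distributions are synthetic inventions, not the paper's clinical variables:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=500):
    """Binary logistic regression with one predictor, fitted by batch
    gradient descent on the log-loss. Returns (weight, intercept)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted probability
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Synthetic data: healthy ~ N(0,1), at-risk ~ N(2,1), 200 children each.
rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(200)] + [rng.gauss(2, 1) for _ in range(200)]
ys = [0] * 200 + [1] * 200
w, b = fit_logistic(xs, ys)

# Classification table at the 0.5 probability threshold (w*x + b > 0):
tp = sum(1 for x, y in zip(xs, ys) if y == 1 and w * x + b > 0)
tn = sum(1 for x, y in zip(xs, ys) if y == 0 and w * x + b <= 0)
sens, spec = tp / 200, tn / 200
```

Sweeping the probability threshold instead of fixing it at 0.5, and plotting sensitivity against 1 - specificity at each value, gives the ROC curve the paper constructs.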

  20. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR.more » Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.« less

  1. Calculated quantum yield of photosynthesis of phytoplankton in the Marine Light-Mixed Layers (59 deg N, 21 deg W)

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.

    1995-01-01

    The quantum yield of photosynthesis, φ (mol C/mol photons), was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways of calculating the photons absorbed (AP) by phytoplankton, needed to calculate φ, are presented here. The first is based on a simple nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived from remote sensing measurements. We show that the values of φ calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with values reported in other studies. In deep waters, however, the simple nonspectral model may produce quantum yield values much higher than theoretically possible.

  2. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux-trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in the different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To obtain the circuit currents by solving these equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code was then written based on this calculation model. As an example, a two-stage flux-trapping generator was simulated with this code; the simulation results agree well with measurements. This fast calculation model can readily be applied to predict the performance of other flux-trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
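A minimal zero-dimensional sketch of this kind of circuit model, under assumed numbers: the inductance collapses linearly while an empirical flux-conservation coefficient `alpha` (1 would be perfect flux conservation) and a fixed resistance damp the ideal current gain. This illustrates the general approach, not the authors' code.

```python
from scipy.integrate import solve_ivp

L0, L1, R, alpha, T = 10e-6, 0.5e-6, 1e-3, 0.9, 100e-6  # H, H, ohm, -, s (invented)

def L(t):
    return L0 + (L1 - L0) * t / T        # linearly collapsing inductance

def rhs(t, y):
    dLdt = (L1 - L0) / T
    # circuit equation with flux-loss coefficient: L I' + alpha L' I + R I = 0
    return [-(alpha * dLdt + R) * y[0] / L(t)]

sol = solve_ivp(rhs, (0.0, T), [1.0], rtol=1e-8, atol=1e-12)
gain = sol.y[0, -1]                      # current multiplication for I0 = 1 A
```

With alpha < 1 the gain falls below the ideal L0/L1 ratio, which is how the flux-loss coefficient captures intrinsic losses.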

  3. Calculations of Hubbard U from first-principles

    NASA Astrophysics Data System (ADS)

    Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.

    2006-09-01

    The Hubbard U of the 3d transition metal series, as well as of SrVO3, YTiO3, Ce, and Gd, has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but in some cases the estimates are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered here suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparisons with the constrained local density approximation (LDA) method and found discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method ought, theoretically, to give similar results; the discrepancies may be attributed to technical difficulties in performing calculations with currently implemented constrained LDA schemes.

  4. Monte Carlo calculation of the radiation field at aircraft altitudes.

    PubMed

    Roesler, S; Heinrich, W; Schraube, H

    2002-01-01

    Energy spectra of secondary cosmic rays are calculated for aircraft altitudes and a discrete set of solar modulation parameters and rigidity cut-off values covering all possible conditions. The calculations are based on the Monte Carlo code FLUKA and on the most recent information on the interstellar cosmic ray flux including a detailed model of solar modulation. Results are compared to a large variety of experimental data obtained on the ground and aboard aircraft and balloons, such as neutron, proton, and muon spectra and yields of charged particles. Furthermore, particle fluence is converted into ambient dose equivalent and effective dose and the dependence of these quantities on height above sea level, solar modulation, and geographical location is studied. Finally, calculated dose equivalent is compared to results of comprehensive measurements performed aboard aircraft.
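The final conversion step can be sketched as a simple fold of an energy-differential fluence with fluence-to-dose conversion coefficients; the spectrum and coefficient curve below are invented stand-ins, not FLUKA output or ICRP/ICRU coefficients.

```python
import numpy as np

E = np.logspace(-8, 3, 400)                                      # MeV grid
dphi_dE = 1.0e4 / E * np.exp(-((np.log10(E) + 2.0) / 2.0) ** 2)  # invented spectrum, cm^-2 s^-1 MeV^-1
h = 4.0e-10 * (1.0 + E / (E + 10.0))                             # invented Sv cm^2 coefficient curve
y = h * dphi_dE
# trapezoidal integral: dose-equivalent rate = integral of h(E) dPhi/dE dE
H_rate = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))      # Sv/s
```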

  5. Rare-gas impurities in alkali metals: Relation to optical absorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meltzer, D.E.; Pinski, F.J.; Stocks, G.M.

    1988-04-15

    An investigation of the nature of rare-gas impurity potentials in alkali metals is performed. Results of calculations based on simple models are presented, which suggest the possibility of resonance phenomena. These could lead to widely varying values for the exponents describing the shape of the optical-absorption spectrum at threshold in the Mahan-Nozieres-de Dominicis theory. Detailed numerical calculations are then performed with the Korringa-Kohn-Rostoker coherent-potential-approximation method. The results of these highly realistic calculations show no evidence for the resonance phenomena and lead to predictions for the shape of the spectra that contradict observations. Absorption and emission spectra are calculated for two of the systems studied, and their relation to experimental data is discussed.

  6. Differential results integrated with continuous and discrete gravity measurements between nearby stations

    NASA Astrophysics Data System (ADS)

    Xu, Weimin; Chen, Shi; Lu, Hongyan

    2016-04-01

    Integrated gravity measurement is an efficient way to study the spatial and temporal characteristics of regional dynamics and tectonics. Differential measurements based on combined continuous and discrete gravity observations are highly competitive, in both efficiency and precision, with single-instrument results. The differential continuous gravity variation between nearby stations is based on observations from Scintrex g-Phone relative gravimeters at each station, combined with repeated mobile relative measurements or absolute results, to study regional integrated gravity changes. First, we preprocess the continuous records with the Tsoft software and calculate theoretical earth and ocean tides with the MT80TW program, using high-precision tidal parameters from WPARICET; atmospheric loading effects and complex instrument drift are treated rigorously in this procedure. These steps yield the continuous gravity signal at every station, from which we calculate the continuous gravity variation between nearby stations, called the differential continuous gravity change. Next, the differential results between related stations are calculated from the repeated gravity measurements carried out once or twice per year around the gravity stations, giving the discrete gravity results between nearby stations. Finally, the continuous and discrete gravity results for the same station pairs are combined, including absolute gravity results where necessary, to obtain the regional integrated gravity changes. These differential gravity results are more accurate and effective for dynamic monitoring, regional hydrologic studies, tectonic activity, and other geodynamical research. The time-frequency characteristics of the continuous gravity results are discussed to ensure the accuracy and efficiency of the procedure.

  7. Sea-Level Allowances along the World Coastlines

    NASA Astrophysics Data System (ADS)

    Vandewal, R.; Tsitsikas, C.; Reerink, T.; Slangen, A.; de Winter, R.; Muis, S.; Hunter, J. R.

    2017-12-01

    Sea level changes as a result of climate change. For projections, we take both ocean mass changes and volume changes into account; including gravitational and rotational fingerprints, this provides regional sea level changes. Hence we can calculate sea-level rise patterns based on CMIP5 projections. To account for the variability around the mean state that follows from the climate models, we use the concept of allowances. The allowance indicates the height by which a coastal structure needs to be raised to maintain the current likelihood of sea-level extremes. Here we use a global reanalysis of storm surges and extreme sea levels, based on a global hydrodynamic model, to calculate allowances. The model compares favourably in most regions with tide gauge records from the GESLA data set. Combining the CMIP5 projections and the global hydrodynamic model, we calculate sea-level allowances along the global coastlines, expanding the number of points by a factor of 50 relative to tide-gauge-based results. Results show that allowances increase gradually along continental margins, with the largest values near the equator; in general, values are lower at midlatitudes in both the Northern and Southern Hemispheres. The increase in risk of extremes is typically a factor of 10^3-10^4 for the majority of the coastline under the RCP8.5 scenario at the end of the century. Finally, we will show preliminary results of the effect of changing wave heights based on the coordinated ocean wave project.
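The allowance concept can be sketched with Hunter-style assumptions (Gumbel-distributed extremes with scale parameter `lam_m`, normally distributed rise uncertainty): the allowance is the mean rise plus a variance penalty. The formula and numbers here are a hedged illustration, not the paper's exact methodology.

```python
def allowance(mean_rise_m, sigma_m, lam_m):
    """Height increase that keeps the expected number of exceedances
    unchanged, assuming Gumbel extremes (scale lam_m) and a Gaussian
    sea-level-rise uncertainty with standard deviation sigma_m."""
    return mean_rise_m + sigma_m ** 2 / (2.0 * lam_m)

a = allowance(mean_rise_m=0.5, sigma_m=0.2, lam_m=0.1)  # -> 0.7 m
```

The variance term is why the allowance exceeds the mean projected rise wherever the projection spread is large relative to the local extreme-value scale.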

  8. Shear, principal, and equivalent strains in equal-channel angular deformation

    NASA Astrophysics Data System (ADS)

    Xia, K.; Wang, J.

    2001-10-01

    The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
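The quoted numbers for the π/2 die can be reproduced from the deformation gradient of simple shear with γ = 2, using standard finite-strain kinematics (a check of the abstract's values, not the authors' derivation):

```python
import numpy as np

gamma = 2.0                                # total shear for intersecting angle pi/2
F = np.array([[1.0, gamma], [0.0, 1.0]])   # simple-shear deformation gradient
B = F @ F.T                                # left Cauchy-Green tensor (spatial frame)
eigvals, eigvecs = np.linalg.eigh(B)       # ascending eigenvalues
strain_max = 0.5 * np.log(eigvals[-1])     # max logarithmic principal strain ~ 0.881
v = eigvecs[:, -1]                         # direction of maximum stretch
angle_deg = np.degrees(np.arctan2(v[1], v[0])) % 180.0  # ~ 22.5 deg from exit axis
```

The closed form is ln(1 + √2) ≈ 0.881 at π/8, matching the values in the abstract.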

  9. 40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...

  10. 40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...

  11. Electrical resistivity and thermal conductivity of liquid aluminum in the two-temperature state

    NASA Astrophysics Data System (ADS)

    Petrov, Yu V.; Inogamov, N. A.; Mokshin, A. V.; Galimzyanov, B. N.

    2018-01-01

    The electrical resistivity and thermal conductivity of liquid aluminum in the two-temperature state are calculated using the relaxation-time approach and the ion structure factor obtained from molecular dynamics simulation. The resistivity within the Ziman-Evans approach is also considered and is found to be higher than that obtained via the relaxation time. Calculations based on constructing the ion structure factor from classical molecular dynamics together with the kinetic equation for electrons are more economical in terms of computing resources and give results close to those of Kubo-Greenwood calculations with quantum molecular dynamics.

  12. Ab initio calculation of hyperfine splitting constants of molecules

    NASA Astrophysics Data System (ADS)

    Ohta, K.; Nakatsuji, H.; Hirao, K.; Yonezawa, T.

    1980-08-01

    Hyperfine splitting (hfs) constants of the methyl, ethyl, vinyl, allyl, cyclopropyl, formyl, O3-, NH2, NO2, and NF2 radicals have been calculated by the pseudo-orbital (PO) theory and by the unrestricted HF (UHF), projected UHF (PUHF), and single-excitation CI (SECI) theories. The PO theory is based on the symmetry-adapted-cluster (SAC) expansion proposed previously. Several contractions of Gaussian basis sets of double-zeta accuracy have been examined. The UHF results were consistently too large compared with experiment, and the PUHF results too small. For the molecules studied here, the PO and SECI theories gave relatively close results, in fair agreement with experiment. The first-order spin-polarization self-consistency effect, which was shown to be important for atoms, is relatively small for these molecules. The present results also show the importance of eliminating orbital-transformation dependence from conventional first-order perturbation calculations. The present calculations explain well several important variations in the experimental hfs constants.

  13. Study of the total reaction cross section via QMD

    NASA Astrophysics Data System (ADS)

    Yang, Lin-Meng; Guo, Wen-Jun; Zhang, Fan; Ni, Sheng

    2013-10-01

    This paper presents a new empirical formula for the average nucleon-nucleon (N-N) collision number used in calculating total reaction cross sections (σR). Based on the initial average N-N collision number calculated by quantum molecular dynamics (QMD), quantum and Coulomb corrections are taken into account. The average N-N collision number is calculated with this empirical formula, and the total reaction cross sections are obtained within the framework of Glauber theory. σR of 23Al+12C, 24Al+12C, 25Al+12C, 26Al+12C and 27Al+12C are calculated in the low-energy range; we also calculate σR of 27Al+12C at different incident energies. The calculated σR are compared with experimental data and with Glauber-theory results for both spherical and deformed nuclei. The calculated σR are larger than those for spherical nuclei and smaller than those for deformed nuclei, and agree well with the experimental data in the low-energy range.

  14. rFRET: A comprehensive, Matlab-based program for analyzing intensity-based ratiometric microscopic FRET experiments.

    PubMed

    Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János

    2016-04-01

    Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially relevant in microscopy, where currently available tools have limited or no capability to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry routinely rely on such tools, their absence in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency, and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. An important feature of the program is that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE), and from summed intensities, in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly but provides rich output, gives the user freedom to choose from different calculation modes, and offers insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.
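A pixel-level sketch of one common three-filter, intensity-based scheme (not necessarily rFRET's exact formulation): spillover-corrected sensitized emission divided by the total donor-derived signal, with assumed correction factors `alpha`, `beta`, and `G`.

```python
def fret_efficiency(I_dd, I_da, I_aa, alpha, beta, G):
    """Spillover-corrected sensitized emission over total donor signal.
    I_dd/I_da/I_aa: donor, transfer (FRET), and acceptor channel intensities;
    alpha, beta: measured spillover factors; G: sensitized-emission gauge."""
    Fc = I_da - beta * I_dd - alpha * I_aa   # corrected FRET-channel intensity
    return Fc / (Fc + G * I_dd)              # per-pixel FRET efficiency

# Single "pixel" with invented intensities and correction factors:
E = fret_efficiency(I_dd=100.0, I_da=80.0, I_aa=50.0, alpha=0.2, beta=0.5, G=1.0)
```

Applied to image arrays instead of scalars, the same arithmetic gives the pixel-by-pixel maps the program displays as histograms and dot plots.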

  15. Cardiac Mean Electrical Axis in Thoroughbreds—Standardization by the Dubois Lead Positioning System

    PubMed Central

    da Costa, Cássia Fré; Samesima, Nelson; Pastore, Carlos Alberto

    2017-01-01

    Background Different methodologies for electrocardiographic acquisition in horses have been used since the first ECG recordings in equines were reported early in the last century. This study aimed to determine the best ECG electrode positioning method and the most reliable calculation of the mean cardiac axis (MEA) in equines. Materials and Methods We evaluated the electrocardiographic profile of 53 clinically healthy Thoroughbreds, 38 males and 15 females, with ages ranging from 2 to 7 years, all reared at the São Paulo Jockey Club, Brazil. Two ECG tracings were recorded from each animal: one using the Dubois lead positioning system, the other using the base-apex method. QRS complex amplitudes were analyzed to obtain frontal-plane MEA values for each of the two electrode positioning methods, using two calculation approaches: the first by Tilley tables and the second by trigonometric calculation. Results were compared between the two methods. Results There was a significant difference in cardiac axis values: MEA obtained by the Tilley tables was +135.1° ± 90.9° vs. -81.1° ± 3.6° (p<0.0001), and by trigonometric calculation it was -15.0° ± 11.3° vs. -79.9° ± 7.4° (p<0.0001), for base-apex and Dubois, respectively. Furthermore, the Dubois method presented a small range of variation, without statistical or clinical difference by either calculation mode, whereas the base-apex method varied widely. Conclusion Dubois improved centralization of the Thoroughbreds' hearts, engendering what seems to be the real frontal plane. By either calculation mode, it was the most reliable methodology for obtaining the cardiac mean electrical axis in equines. PMID:28095442
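The "trigonometric calculation" idea can be sketched under the assumption that net QRS deflections from two orthogonal frontal-plane leads (I at 0°, aVF at +90°) are treated as vector components; the amplitudes below are invented, not the study's data.

```python
import math

def mean_electrical_axis(net_I_mV, net_aVF_mV):
    """Frontal-plane axis in degrees, -180..180 (0 = leftward, +90 = inferior)."""
    return math.degrees(math.atan2(net_aVF_mV, net_I_mV))

axis = mean_electrical_axis(net_I_mV=0.5, net_aVF_mV=-0.5)  # -45 deg for this made-up beat
```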

  16. Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80

    NASA Astrophysics Data System (ADS)

    Pruet, Jason; Fuller, George M.

    2003-11-01

    We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.

  17. TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, T; Bush, K

    Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph, as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user's positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm, as shown in Figure 2, and to select a particle source from a pre-computed library of phase spaces scored above the cutout. The electron cutout factor is obtained by taking the ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.

  18. Effective Connectivity Reveals Strategy Differences in an Expert Calculator

    PubMed Central

    Minati, Ludovico; Sigala, Natasha

    2013-01-01

    Mathematical reasoning is a core component of cognition and the study of experts defines the upper limits of human cognitive abilities, which is why we are fascinated by peak performers, such as chess masters and mental calculators. Here, we investigated the neural bases of calendrical skills, i.e. the ability to rapidly identify the weekday of a particular date, in a gifted mental calculator who does not fall in the autistic spectrum, using functional MRI. Graph-based mapping of effective connectivity, but not univariate analysis, revealed distinct anatomical location of “cortical hubs” supporting the processing of well-practiced close dates and less-practiced remote dates: the former engaged predominantly occipital and medial temporal areas, whereas the latter were associated mainly with prefrontal, orbitofrontal and anterior cingulate connectivity. These results point to the effect of extensive practice on the development of expertise and long term working memory, and demonstrate the role of frontal networks in supporting performance on less practiced calculations, which incur additional processing demands. Through the example of calendrical skills, our results demonstrate that the ability to perform complex calculations is initially supported by extensive attentional and strategic resources, which, as expertise develops, are gradually replaced by access to long term working memory for familiar material. PMID:24086291

  19. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    Method calculates admittance from easily obtainable values. New method for calculating acoustic-liner admittance in rectangular duct with grazing flow based on finite-element discretization of acoustic field and reposing of the unknown admittance as a linear eigenvalue problem. Problem solved by Gaussian elimination. Unlike existing methods, present method extendable to mean flows with two-dimensional boundary layers as well. In presence of shear, results of method compared well with results of Runge-Kutta integration technique.

  20. Cost Analysis of MRI Services in Iran: An Application of Activity Based Costing Technique

    PubMed Central

    Bayati, Mohsen; Mahboub Ahari, Alireza; Badakhshan, Abbas; Gholipour, Mahin; Joulaei, Hassan

    2015-01-01

    Background: Considerable development of MRI technology in diagnostic imaging, the high cost of MRI technology, and controversial issues concerning official charges (tariffs) were the main motivations for this study. Objectives: The present study aimed to calculate the unit cost of MRI services using activity-based costing (ABC), a modern cost accounting system, and to compare the calculated unit costs fairly with official charges (tariffs). Materials and Methods: We included both direct and indirect costs of MRI services delivered in fiscal year 2011 in Shiraz Shahid Faghihi hospital. The direct allocation method was used for distribution of overhead costs, and a micro-costing approach was used to calculate the unit cost of each MRI service. Clinical cost data were retrieved from the hospital registration system. The straight-line method was used for depreciation cost estimation. To cope with uncertainty and increase the robustness of the results, unit costs of 33 MRI services were calculated under two scenarios. Results: The total annual cost of the MRI activity center (AC) was calculated at USD 400,746 and USD 532,104 under the first and second scenarios, respectively. Ten percent of the total cost was allocated from supportive departments. The annual variable costs of the MRI center were calculated at USD 295,904. Capital costs were measured at USD 104,842 and USD 236,200 under the first and second scenarios, respectively. Existing tariffs for more than half of the MRI services were above the calculated costs. Conclusion: As a public hospital, Shahid Faghihi hospital has considerable limitations in both its financial and administrative databases. Labor cost has the greatest share of the hospital's total annual cost. The gap between unit costs and tariffs implies that the claim for extra budget from health providers may not be relevant for all services delivered by the studied MRI center. With some adjustments, ABC could be implemented in MRI centers. With the settlement of a reliable cost accounting system such as the ABC technique, hospitals would be able to generate robust evidence for financial management of their overhead, intermediate, and final ACs. PMID:26715979
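A toy sketch of the ABC bookkeeping described above, with invented numbers (not the study's data): overhead is allocated to the MRI activity center by the direct method, and the total is spread over service volumes weighted by a resource driver such as scan minutes.

```python
direct_costs = 300_000.0                     # USD, costs of the MRI center itself (invented)
overhead = {"admin": 25_000.0, "laundry": 5_000.0, "utilities": 10_000.0}
allocated = sum(overhead.values())           # direct allocation: all overhead to the MRI AC
total_cost = direct_costs + allocated

# service -> (annual volume, relative resource weight, e.g. scan minutes)
services = {"brain_mri": (1200, 1.0), "spine_mri": (800, 1.5), "knee_mri": (500, 1.2)}
weighted_volume = sum(v * w for v, w in services.values())
cost_per_weighted_unit = total_cost / weighted_volume
unit_costs = {s: cost_per_weighted_unit * w for s, (v, w) in services.items()}
```

By construction, unit costs multiplied by volumes recover the total annual cost, which is the consistency check any ABC model should pass.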

  1. Zinc finger protein binding to DNA: an energy perspective using molecular dynamics simulation and free energy calculations on mutants of both zinc finger domains and their specific DNA bases.

    PubMed

    Hamed, Mazen Y; Arya, Gaurav

    2016-05-01

    Energy calculations based on MM-GBSA were employed to study various zinc finger (ZF) protein motifs binding to DNA. Mutants of both the ZF domains and their specific DNA bases were studied. The calculated energies gave evidence for a relationship between binding energy and the affinity of ZF motifs for their sites on DNA. ΔG values were -15.82(12), -3.66(12), and -12.14(11.6) kcal/mol for finger one, finger two, and finger three, respectively. Mutations in the DNA bases reduced the magnitude of the negative binding energies (a maximum ΔΔG of 42 kcal/mol for F1 when GCG mutated to GGG, and ΔΔG = 22 kcal/mol for F2); the loss in total binding energy originated in the loss of electrostatic energy upon mutation (r = .98). Mutations of key amino acids in the ZF motif at positions -1, 2, 3, and 6 showed reduced binding energies to DNA, with correlation coefficients between total free energy and electrostatic energy of .99, and with van der Waals energy of .93. The results agree with experimentally found selectivity, which showed that arginine at position -1 is specific to G, while aspartic acid (D) at position 2 plays a more complicated role in binding. There is a correlation between the MD-calculated free energies of binding and those obtained experimentally for ZF motifs bound to triplet bases in other reports; our results may help in the design of ZF motifs based on recognition codes grounded in binding energies and their contributing terms.
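The mutation bookkeeping can be sketched with the abstract's headline numbers for finger one. The component energies below are invented so that ΔG ≈ -15.8 kcal/mol and ΔΔG = 42 kcal/mol come out; they are not the paper's actual decomposition.

```python
def dG_bind(elec, vdw, gb_solv, sasa):
    """MM-GBSA-style binding free energy as a sum of component terms (kcal/mol)."""
    return elec + vdw + gb_solv + sasa

# Invented decompositions chosen to reproduce the headline numbers:
dG_wt = dG_bind(elec=-60.0, vdw=-25.0, gb_solv=70.0, sasa=-0.8)   # wild-type F1/GCG
dG_mut = dG_bind(elec=-25.0, vdw=-24.0, gb_solv=76.0, sasa=-0.8)  # GCG -> GGG, electrostatics lost
ddG = dG_mut - dG_wt   # positive ddG: the mutation weakens binding
```

The point of the decomposition is the one the abstract makes: nearly all of the ΔΔG comes from the electrostatic term.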

  2. Theoretical and experimental studies of electronic, optical and luminescent properties for Tb-based garnet materials

    NASA Astrophysics Data System (ADS)

    Ding, Shoujun; Zhang, Haotian; Dou, Renqin; Liu, Wenpeng; Sun, Dunlu; Zhang, Qingli

    2018-07-01

    Terbium-aluminum garnet (Tb3Al5O12: TAG) and terbium-scandium-aluminum garnet (Tb3Sc2Al3O12: TSAG) materials have attracted tremendous attention around the world owing to their multifunctional applications. However, the electronic structure and the optical and luminescent properties of TAG and TSAG still require elucidation. To address these problems, we first carried out systematic theoretical calculations based on density functional theory and obtained their electronic structures and optical properties. The calculated results indicate that both TAG and TSAG are direct-band-gap materials, with band gaps of 4.46 and 4.05 eV, respectively. Second, we compared the calculated results with experimental results (including band gap, refractive index, and reflectivity) and found them in good agreement. Lastly, we investigated the luminescence properties of TSAG and evaluated its suitability as a visible phosphor and laser matrix. In addition, a Judd-Ofelt analysis was performed on TSAG to characterize the radiative transitions of the Tb 4f configuration; the three Judd-Ofelt intensity parameters were obtained as 4.47, 1.37, and 4.23 × 10^-20 cm^2, respectively. The obtained results provide an essential understanding of TAG and TSAG garnet materials and are useful for their further exploration.

  3. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼ 1000 cm(3) and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to a commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. 
The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
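The "modified least-squares optimization" over an MC dose influence matrix can be sketched as a projected-gradient, nonnegative least-squares solve. The matrix, prescription, step size, and iteration count below are illustrative toy values, not the authors' implementation:

```python
import numpy as np

def optimize_spot_weights(A, d, n_iter=500):
    """Projected-gradient, nonnegative least squares: find spot weights w >= 0
    minimizing ||A @ w - d||^2, where column j of A is the dose influence map
    of spot j (voxels x spots) and d is the prescribed dose per voxel."""
    w = np.ones(A.shape[1])                    # uniform starting condition
    lr = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm
    for _ in range(n_iter):
        grad = A.T @ (A @ w - d)               # gradient of 0.5 * ||A w - d||^2
        w = np.maximum(w - lr * grad, 0.0)     # project onto w >= 0
    return w

# Toy problem: 3 voxels, 2 spots (a real case has >100,000 spots)
A = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.1, 0.1]])
d = np.array([1.0, 1.0, 0.1])
w = optimize_spot_weights(A, d)
```

In practice the influence matrix is sparse and GPU-resident, and the objective carries DVH-based penalty terms rather than a plain voxelwise prescription.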

  4. Oscillating flow loss test results in Stirling engine heat exchangers

    NASA Technical Reports Server (NTRS)

    Koester, G.; Howell, S.; Wood, G.; Miller, E.; Gedeon, D.

    1990-01-01

The results are presented for a test program designed to generate a database of oscillating flow loss information that is applicable to Stirling engine heat exchangers. The tests were performed on heater/cooler tubes of various lengths and entrance/exit configurations, on stacked and sintered screen regenerators of various wire diameters and on Brunswick and Metex random fiber regenerators. The tests covered a range of oscillating flow parameters consistent with Stirling engine heat exchanger experience. They were performed on the Sunpower oscillating flow loss rig, which is based on a variable stroke and variable frequency linear drive motor. In general, the results are presented by comparing the measured oscillating flow losses to the calculated flow losses. The calculated losses are based on the cycle integration of steady flow friction factors and entrance/exit loss coefficients.
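The comparison baseline described above, cycle integration of steady-flow friction factors, can be sketched as follows. The sinusoidal velocity and the laminar/Blasius correlations are common textbook choices, not the rig's actual data-reduction procedure:

```python
import numpy as np

def cycle_averaged_friction_loss(u_amp, freq, d_h, length,
                                 rho=1000.0, nu=1.0e-6, n=1000):
    """Quasi-steady estimate of the cycle-averaged friction loss (per unit
    flow area) for sinusoidal velocity u(t) in a tube of hydraulic diameter
    d_h and given length, using steady Darcy friction factors:
    laminar f = 64/Re, turbulent (Blasius) f = 0.316 * Re**-0.25."""
    t = np.linspace(0.0, 1.0 / freq, n, endpoint=False)
    u = u_amp * np.sin(2.0 * np.pi * freq * t)        # instantaneous velocity
    re = np.abs(u) * d_h / nu                         # instantaneous Reynolds number
    re_safe = np.maximum(re, 1e-12)                   # avoid division by zero at u = 0
    f = np.where(re < 2300.0, 64.0 / re_safe, 0.316 * re_safe ** -0.25)
    dp = f * (length / d_h) * 0.5 * rho * u ** 2      # Darcy-Weisbach pressure drop
    return float(np.mean(dp * np.abs(u)))             # cycle-averaged pumping loss
```

The measured oscillating losses deviate from this quasi-steady integral precisely where oscillatory effects (e.g., transition and entrance/exit behavior) matter, which is what the database quantifies.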

  5. High resolution, MRI-based, segmented, computerized head phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubal, I.G.; Harrell, C.R.; Smith, E.O.

    1999-01-01

The authors have created a high-resolution software phantom of the human brain which is applicable to voxel-based radiation transport calculations yielding nuclear medicine simulated images and/or internal dose estimates. A software head phantom was created from 124 transverse MRI images of a healthy normal individual. The transverse T2 slices, recorded in a 256x256 matrix from a GE Signa 2 scanner, have isotropic voxel dimensions of 1.5 mm and were manually segmented by the clinical staff. Each voxel of the phantom contains one of 62 index numbers designating anatomical, neurological, and taxonomical structures. The result is stored as a 256x256x128 byte array. Internal volumes compare favorably to those described in the ICRP Reference Man. The computerized array represents a high resolution model of a typical human brain and serves as a voxel-based anthropomorphic head phantom suitable for computer-based modeling and simulation calculations. It offers an improved realism over previous mathematically described software brain phantoms, and creates a reference standard for comparing results of newly emerging voxel-based computations. Such voxel-based computations lead the way to developing diagnostic and dosimetry calculations which can utilize patient-specific diagnostic images. However, such individualized approaches lack fast, automatic segmentation schemes for routine use; therefore, the high resolution, typical head geometry gives the most realistic patient model currently available.
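A segmented index-number phantom of this kind is simply a labeled integer array; a minimal sketch of extracting a structure volume from one (the label value and mini-phantom below are hypothetical, only the 1.5 mm isotropic voxel size comes from the record):

```python
import numpy as np

def organ_volume_cm3(phantom, index, voxel_mm=1.5):
    """Volume of the structure labeled `index` in a segmented voxel phantom
    (an integer array of anatomical index numbers, isotropic voxel_mm voxels)."""
    n_voxels = int(np.count_nonzero(phantom == index))
    return n_voxels * (voxel_mm / 10.0) ** 3   # mm^3 -> cm^3 per voxel

# Hypothetical mini-phantom: label 1 marks a structure, label 0 is background
phantom = np.zeros((4, 4, 4), dtype=np.uint8)
phantom[1:3, 1:3, 1:3] = 1            # 8 voxels carry label 1
vol = organ_volume_cm3(phantom, 1)    # 8 * 0.15**3 cm^3 = 0.027 cm^3
```

The same label lookup is how a transport code maps each voxel's index number to a tissue composition and density.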

  6. lncRNATargets: A platform for lncRNA target prediction based on nucleic acid thermodynamics.

    PubMed

    Hu, Ruifeng; Sun, Xiaobo

    2016-08-01

Many studies have shown that long noncoding RNAs (lncRNAs) perform various functions in critical biological processes. Advanced experimental and computational technologies allow access to more information on lncRNAs. Determining the functions and action mechanisms of these RNAs on a large scale is urgently needed. We provide lncRNATargets, a web-based platform for lncRNA target prediction based on nucleic acid thermodynamics. The nearest-neighbor (NN) model was used to calculate binding free energy. The main principle of the NN model for nucleic acids is that the identity and orientation of neighboring base pairs determine the stability of a given base pair. lncRNATargets features the following options: setting of a specific temperature, which allows use not only for humans but also for other animals or plants; high-throughput processing of all lncRNAs without RNA size limitation, which is superior to any other existing tool; and a web-based, user-friendly interface with colored result displays that allows easy access for nonskilled computer operators and provides a better understanding of the results. This technique could provide accurate calculation of the binding free energy of lncRNA-target dimers to predict whether these structures are well targeted together. lncRNATargets provides high accuracy calculations, and this user-friendly program is available for free at http://www.herbbol.org:8001/lrt/ .
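The nearest-neighbor summation itself is compact; a sketch using the SantaLucia unified DNA/DNA parameter set at 37 °C purely for illustration (the platform above targets RNA hybrids, works at configurable temperatures, and includes initiation terms omitted here):

```python
# Illustrative nearest-neighbor free-energy parameters (kcal/mol, 37 C),
# SantaLucia unified DNA/DNA set; the remaining 6 dinucleotides are covered
# by their reverse complements below.
NN_DG = {
    "AA": -1.00, "AT": -0.88, "TA": -0.58, "CA": -1.45,
    "GT": -1.44, "CT": -1.28, "GA": -1.30, "CG": -2.17,
    "GC": -2.24, "GG": -1.84,
}
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def nn_free_energy(seq):
    """Sum stacked nearest-neighbor contributions over a sequence, assuming a
    perfectly complementary duplex (initiation/symmetry terms omitted)."""
    dg = 0.0
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair not in NN_DG:   # use the equivalent pair on the complement strand
            pair = COMPLEMENT[seq[i + 1]] + COMPLEMENT[seq[i]]
        dg += NN_DG[pair]
    return dg
```

By construction the result is identical for a sequence and its reverse complement, which is the consistency check usually applied to such tables.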

  7. Radial secondary electron dose profiles and biological effects in light-ion beams based on analytical and Monte Carlo calculations using distorted wave cross sections.

    PubMed

    Wiklund, Kristin; Olivera, Gustavo H; Brahme, Anders; Lind, Bengt K

    2008-07-01

To speed up dose calculation, an analytical pencil-beam method has been developed to calculate the mean radial dose distributions due to secondary electrons that are set in motion by light ions in water. For comparison, radial dose profiles calculated using a Monte Carlo technique have also been determined. An accurate comparison of the resulting radial dose profiles of the Bragg peak for (1)H(+), (4)He(2+) and (6)Li(3+) ions has been performed. The double differential cross sections for secondary electron production were calculated using the continuous distorted wave-eikonal initial state method (CDW-EIS). For the secondary electrons that are generated, the radial dose distribution for the analytical case is based on the generalized Gaussian pencil-beam method and the central axis depth-dose distributions are calculated using the Monte Carlo code PENELOPE. In the Monte Carlo case, the PENELOPE code was used to calculate the whole radial dose profile based on CDW data. The present pencil-beam and Monte Carlo calculations agree well at all radii. A radial dose profile that is shallower at small radii and steeper at large radii than the conventional 1/r(2) is clearly seen with both the Monte Carlo and pencil-beam methods. As expected, since the projectile velocities are the same, the dose profiles of Bragg-peak ions of 0.5 MeV (1)H(+), 2 MeV (4)He(2+) and 3 MeV (6)Li(3+) are almost the same, with about 30% more delta electrons in the sub keV range from (4)He(2+) and (6)Li(3+) compared to (1)H(+). A similar behavior is also seen for 1 MeV (1)H(+), 4 MeV (4)He(2+) and 6 MeV (6)Li(3+), all classically expected to have the same secondary electron cross sections. The results are promising and indicate a fast and accurate way of calculating the mean radial dose profile.
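The radial scoring behind such profiles can be sketched by binning energy depositions into cylindrical shells around the track and normalizing by shell area. This is an illustrative scoring sketch, not the paper's pencil-beam algorithm; in practice the event radii and energies come from the transport code:

```python
import numpy as np

def radial_dose_profile(r_events, e_events, r_edges):
    """Bin energy depositions at radial distances r_events into cylindrical
    shells bounded by r_edges and divide by shell area, giving a mean radial
    dose profile (arbitrary units) that can be compared against 1/r^2."""
    e_sum, _ = np.histogram(r_events, bins=r_edges, weights=e_events)
    shell_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    return e_sum / shell_area
```

Equal energy deposited in successive shells yields a profile falling as the inverse shell area, i.e. roughly 1/r^2; deviations from that shape are exactly what the record reports at small and large radii.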

  8. Personalized risk communication for personalized risk assessment: Real world assessment of knowledge and motivation for six mortality risk measures from an online life expectancy calculator.

    PubMed

    Manuel, Douglas G; Abdulaziz, Kasim E; Perez, Richard; Beach, Sarah; Bennett, Carol

    2018-01-01

    In the clinical setting, previous studies have shown personalized risk assessment and communication improves risk perception and motivation. We evaluated an online health calculator that estimated and presented six different measures of life expectancy/mortality based on a person's sociodemographic and health behavior profile. Immediately after receiving calculator results, participants were invited to complete an online survey that asked how informative and motivating they found each risk measure, whether they would share their results and whether the calculator provided information they need to make lifestyle changes. Over 80% of the 317 survey respondents found at least one of six healthy living measures highly informative and motivating, but there was moderate heterogeneity regarding which measures respondents found most informative and motivating. Overall, health age was most informative and life expectancy most motivating. Approximately 40% of respondents would share the results with their clinician (44%) or social networks (38%), although the information they would share was often different from what they found informative or motivational. Online personalized risk assessment allows for a more personalized communication compared to historic paper-based risk assessment to maximize knowledge and motivation, and people should be provided a range of risk communication measures that reflect different risk perspectives.

  9. The role of the van der Waals interactions in the adsorption of anthracene and pentacene on the Ag(111) surface

    NASA Astrophysics Data System (ADS)

    Morbec, Juliana M.; Kratzer, Peter

    2017-01-01

    Using first-principles calculations based on density-functional theory (DFT), we investigated the effects of the van der Waals (vdW) interactions on the structural and electronic properties of anthracene and pentacene adsorbed on the Ag(111) surface. We found that the inclusion of vdW corrections strongly affects the binding of both anthracene/Ag(111) and pentacene/Ag(111), yielding adsorption heights and energies more consistent with the experimental results than standard DFT calculations with generalized gradient approximation (GGA). For anthracene/Ag(111) the effect of the vdW interactions is even more dramatic: we found that "pure" DFT-GGA calculations (without including vdW corrections) result in preference for a tilted configuration, in contrast to the experimental observations of flat-lying adsorption; including vdW corrections, on the other hand, alters the binding geometry of anthracene/Ag(111), favoring the flat configuration. The electronic structure obtained using a self-consistent vdW scheme was found to be nearly indistinguishable from the conventional DFT electronic structure once the correct vdW geometry is employed for these physisorbed systems. Moreover, we show that a vdW correction scheme based on a hybrid functional DFT calculation (HSE) results in an improved description of the highest occupied molecular level of the adsorbed molecules.

  10. Effect of Boundary Conditions on the Axial Compression Buckling of Homogeneous Orthotropic Composite Cylinders in the Long Column Range

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Nemeth, Michael P.; Oremont, Leonard; Jegley, Dawn C.

    2011-01-01

    Buckling loads for long isotropic and laminated cylinders are calculated based on Euler, Fluegge and Donnell's equations. Results from these methods are presented using simple parameters useful for fundamental design work. Buckling loads for two types of simply supported boundary conditions are calculated using finite element methods for comparison to select cases of the closed form solution. Results indicate that relying on Donnell theory can result in an over-prediction of buckling loads by as much as 40% in isotropic materials.
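For the long-column limit referenced above, the classical Euler load for a thin-walled cylinder is a one-line calculation. The thin-wall second moment of area and the end-condition factors are standard textbook values, and the material numbers in the usage below are arbitrary:

```python
import math

def euler_buckling_load(E, r, t, L, ends="pinned"):
    """Classical Euler column buckling load for a thin-walled cylinder of
    mid-surface radius r, wall thickness t, length L, and modulus E.
    The end-condition factor k is 1 (pinned-pinned), 4 (fixed-fixed),
    or 0.25 (fixed-free)."""
    k = {"pinned": 1.0, "fixed": 4.0, "free": 0.25}[ends]
    I = math.pi * r ** 3 * t                 # thin-wall second moment of area
    return k * math.pi ** 2 * E * I / L ** 2

# e.g. an aluminum tube: E = 70 GPa, r = 50 mm, t = 1 mm, L = 2 m
p_pinned = euler_buckling_load(70e9, 0.05, 0.001, 2.0)
```

Shell theories (Donnell, Fluegge) add circumferential and shear effects this column formula ignores, which is why the record compares against it only in the long-column range.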

  11. Propulsive efficiency of frog swimming with different feet and swimming patterns

    PubMed Central

    Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu

    2017-01-01

ABSTRACT Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed according to computational fluid dynamic calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two foot shapes and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsion, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results show that the swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669
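The efficiency bookkeeping, useful thrust work over total joint input work per stroke cycle, can be sketched as below. The trapezoidal integration and the two-joint torque/rate inputs are an illustrative simplification of the paper's virtual-work computation:

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal integral of samples y over times t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def propulsive_efficiency(t, thrust, body_speed, joint_torques, joint_rates):
    """Propulsive efficiency over one stroke cycle, in percent:
    useful work = integral of thrust * body speed;
    input work  = integral of |joint torque * joint angular rate|,
    summed over the joints of the two-link leg model."""
    useful = _trapz(thrust * body_speed, t)
    inp = sum(_trapz(np.abs(tau * om), t)
              for tau, om in zip(joint_torques, joint_rates))
    return 100.0 * useful / inp
```

With CFD-derived thrust histories and virtual-work joint torques substituted for the toy arrays, this ratio is the quantity reported as 43.11% and 29.58% above.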

  12. Comparing Ultraviolet Spectra against Calculations: Year 2 Results

    NASA Technical Reports Server (NTRS)

    Peterson, Ruth C.

    2004-01-01

The five-year goal of this effort is to calculate high fidelity mid-UV spectra for individual stars and stellar systems for a wide range of ages, abundances, and abundance ratios. In this second year, the comparison of our calculations against observed high-resolution mid-UV spectra was extended to stars as metal-rich as the Sun, and to hotter and cooler stars, further improving the list of atomic line parameters used in the calculations. We also published the application of our calculations based on the earlier list of line parameters to the observed mid-UV and optical spectra of a mildly metal-poor globular cluster in the nearby Andromeda galaxy, Messier 31.

  13. The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor

    ERIC Educational Resources Information Center

    Gordon, James; Chancey, Katherine

    2005-01-01

    The experiment of determination of the percent of oxygen in air is performed in a general chemistry laboratory in which students compare the results calculated from the pressure measurements obtained with the calculator-based systems to those obtained in a water-measurement method. This experiment allows students to explore a fundamental reaction…
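The arithmetic the students perform reduces to Dalton's law: in a closed vessel at fixed temperature and volume, the fractional pressure drop after the oxygen is consumed equals the oxygen fraction of the air. A minimal sketch with illustrative pressure readings:

```python
def percent_oxygen(p_initial, p_final):
    """Percent of oxygen in air from the total-pressure drop when the oxygen
    is consumed in a sealed vessel at constant temperature and volume:
    the removed partial pressure is the oxygen fraction of the initial total."""
    return 100.0 * (p_initial - p_final) / p_initial

# Illustrative sensor readings: 101.3 kPa before, 80.1 kPa after reaction
result = percent_oxygen(101.3, 80.1)   # close to the accepted ~20.9 %
```

The same subtraction underlies the water-measurement method, with the consumed volume fraction standing in for the pressure fraction.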

  14. The vulnerability of electric equipment to carbon fibers of mixed lengths: An analysis

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1980-01-01

The susceptibility of a stereo amplifier to damage from a spectrum of lengths of graphite fibers was calculated. A simple analysis was developed by which such calculations can be based on test results with fibers of uniform lengths. A statistical analysis was applied for the conversion of data for various logical failure criteria.

  15. The Hyperfine Structure of the Ground State in the Muonic Helium Atoms

    NASA Astrophysics Data System (ADS)

    Aznabayev, D. T.; Bekbaev, A. K.; Korobov, V. I.

    2018-05-01

The non-relativistic ionization energies of the muonic helium atoms 3He2+μ-e- and 4He2+μ-e- are calculated for the ground states. The calculations are based on the variational method of exponential expansion. Convergence of the variational energies is studied by increasing the number of basis functions N, which allows us to claim that the obtained energy values have 26 significant digits for the ground states. Using these results, we calculate the hyperfine splitting of the muonic helium atoms.

  16. Dill: an algorithm and a symbolic software package for doing classical supersymmetry calculations

    NASA Astrophysics Data System (ADS)

    Luc̆ić, Vladan

    1995-11-01

An algorithm is presented that formalizes the different steps in a classical Supersymmetric (SUSY) calculation. Based on the algorithm, Dill, a symbolic software package that can perform these calculations, was developed in the Mathematica programming language. While the algorithm is quite general, the package was created for the 4-D, N = 1 model. Nevertheless, with little modification, the package could be used for other SUSY models. The package has been tested and some of the results are presented.

  17. Empfangsleistung in Abhängigkeit von der Zielentfernung bei optischen Kurzstrecken-Radargeräten.

    PubMed

    Riegl, J; Bernhard, M

    1974-04-01

    The dependence of the received optical power on the range in optical short-distance radar range finders is calculated by means of the methods of geometrical optics. The calculations are based on a constant intensity of the transmitter-beam cross section and on an ideal thin lens for the receiver optics. The results are confirmed by measurements. Even measurements using a nonideal thick lens system for the receiver optics are in reasonable agreement with the calculations.

  18. Approximate calculation of multispar cantilever and semicantilever wings with parallel ribs under direct and indirect loading

    NASA Technical Reports Server (NTRS)

    Sanger, Eugen

    1932-01-01

    A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.

  19. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX can build a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4×0.4×2.0 mm(3) per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while maintaining the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. SU-E-T-466: Implementation of An Extension Module for Dose Response Models in the TOPAS Monte Carlo Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Perl, J

    2015-06-15

Purpose: To develop and verify an extension to TOPAS for calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVH's and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population based critical volume (CV), parallel-serials, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmark configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results are 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data was calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. 
This work was supported by National Cancer Institute Grant R01CA140735.
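Two of the implemented models have compact closed forms; a sketch of the LKB NTCP and a common Poisson-based TCP, with illustrative parameter names rather than TG-166's exact notation or the TOPAS implementation:

```python
import math

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: the normal CDF (probit) of
    t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def poisson_tcp(doses, volumes, d50, gamma50):
    """Poisson-based TCP over a DVH given as (dose bin, fractional volume)
    pairs: TCP = prod_i tcp(d_i)**v_i, with the per-bin sigmoid
    tcp(d) = 0.5 ** exp(2 * gamma50 / ln 2 * (1 - d / d50)),
    so that tcp(d50) = 0.5 with normalized slope gamma50 there."""
    tcp = 1.0
    for d, v in zip(doses, volumes):
        e = math.exp(2.0 * gamma50 / math.log(2.0) * (1.0 - d / d50))
        tcp *= (0.5 ** e) ** v
    return tcp
```

Both models take the DVH as input, which is why the record's observed sensitivity to DVH point spacing propagates directly into the computed TCP/NTCP values.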

  1. Optimal combining of ground-based sensors for the purpose of validating satellite-based rainfall estimates

    NASA Technical Reports Server (NTRS)

    Krajewski, Witold F.; Rexroth, David T.; Kiriaki, Kiriakie

    1991-01-01

    Two problems related to radar rainfall estimation are described. The first part is a description of a preliminary data analysis for the purpose of statistical estimation of rainfall from multiple (radar and raingage) sensors. Raingage, radar, and joint radar-raingage estimation is described, and some results are given. Statistical parameters of rainfall spatial dependence are calculated and discussed in the context of optimal estimation. Quality control of radar data is also described. The second part describes radar scattering by ellipsoidal raindrops. An analytical solution is derived for the Rayleigh scattering regime. Single and volume scattering are presented. Comparison calculations with the known results for spheres and oblate spheroids are shown.

  2. The pressure and entropy of a unitary Fermi gas with particle-hole fluctuation

    NASA Astrophysics Data System (ADS)

    Gong, Hao; Ruan, Xiao-Xia; Zong, Hong-Shi

    2018-01-01

    We calculate the pressure and entropy of a unitary Fermi gas based on universal relations combined with our previous prediction of energy which was calculated within the framework of the non-self-consistent T-matrix approximation with particle-hole fluctuation. The resulting entropy and pressure are compared with the experimental data and the theoretical results without induced interaction. For entropy, we find good agreement between our results with particle-hole fluctuation and the experimental measurements reported by ENS group and MIT experiment. For pressure, our results suffer from a systematic upshift compared to MIT data.

  3. Measurement of the Microwave Refractive Index of Materials Based on Parallel Plate Waveguides

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Pei, J.; Kan, J. S.; Zhao, Q.

    2017-12-01

An electrical field scanning apparatus based on a parallel plate waveguide method is constructed, which collects the amplitude and phase matrices as a function of the relative position. On the basis of such data, a method for calculating the refractive index of the measured wedge samples is proposed in this paper. The measurement and calculation results for different PTFE samples reveal that the refractive index measured by the apparatus is substantially consistent with the refractive index inferred from the permittivity of the sample. The refractive index calculation method proposed in this paper is a competitive method for the characterization of the refractive index of materials with positive refractive index. Since the apparatus and method can be used to measure and calculate for an arbitrary direction of microwave propagation, it is believed that both of them can be applied to negative refractive index materials, such as metamaterials or “left-handed” materials.

  4. Plume trajectory formation under stack tip self-enveloping

    NASA Astrophysics Data System (ADS)

    Gribkov, A. M.; Zroichikov, N. A.; Prokhorov, V. B.

    2017-10-01

    The phenomenon of stack tip self-enveloping and its influence upon the conditions of plume formation and on the trajectory of its motion are considered. Processes are described occurring in the initial part of the plume while the interaction between vertically directed flue gases outflowing from the stack and a horizontally directed moving air flow at high wind velocities that lead to the formation of a flag-like plume. Conditions responsible for the origin and evolution of interaction between these flows are demonstrated. For the first time, a plume formed under these conditions without bifurcation is registered. A photo image thereof is presented. A scheme for the calculation of the motion of a plume trajectory is proposed, the quantitative characteristics of which are obtained based on field observations. The wind velocity and direction, air temperature, and atmospheric turbulence at the level of the initial part of the trajectory have been obtained based on data obtained from an automatic meteorological system (mounted on the outer parts of a 250 m high stack no. 1 at the Naberezhnye Chelny TEPP plant) as well as based on the results of photographing and theodolite sighting of smoke puffs' trajectory taking into account their velocity within its initial part. The calculation scheme is supplemented with a new acting force—the force of self-enveloping. Based on the comparison of the new calculation scheme with the previous one, a significant contribution of this force to the development of the trajectory is revealed. A comparison of the natural full-scale data with the results of the calculation according to the proposed new scheme is made. The proposed calculation scheme has allowed us to extend the application of the existing technique to the range of high wind velocities. 
This approach makes it possible to simulate and investigate the plume trajectory and the full rise height above the stack mouth as functions of various operating-mode and meteorological parameters, taking into account the interrelation between the dynamic and thermal components of the rise, and to obtain a universal calculation expression for determining the height of the plume rise for different classes of atmospheric stability.

  5. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interactions are known with appropriate accuracy, but secondary gamma ray production is known much less accurately. The purpose of this work is to study secondary gamma ray production data from neutron induced reactions in iron and lead by using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. Results of the calculations show that all of these nuclear data libraries have different photon production data for neutron induced reactions and have poor agreement with the evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data. It is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data to point-wise calculations with the ROSFOND library shows a good agreement. The calculated-to-experimental (C/E) discrepancies in the neutron spectra are within the limits of the experimental errors; for the photon spectrum they lie outside the experimental errors. Results of calculations using group-wise and point-wise representations of cross sections show good agreement for both photon and neutron spectra.

  6. Patient‐specific CT dosimetry calculation: a feasibility study

    PubMed Central

    Xie, Huchen; Cheng, Jason Y.; Ning, Holly; Zhuge, Ying; Miller, Robert W.

    2011-01-01

    Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of “standard man”. Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient‐specific CT dosimetry. A radiation treatment planning system was modified to calculate patient‐specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose‐volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi‐empirical, measured correction‐based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantom) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLDs) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representation). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point‐by‐point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. 
With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%–20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient‐specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation. PACS numbers: 87.55.D‐, 87.57.Q‐, 87.53.Bn, 87.55.K‐ PMID:22089016

  7. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    NASA Astrophysics Data System (ADS)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
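The 2%/2 mm gamma analysis used for the film comparison can be sketched in 1D as a brute-force search with a global dose criterion. Real film analysis is 2D and interpolates between measurement points; this is only the core of the metric:

```python
import numpy as np

def gamma_index(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """Global 1D gamma index: for each reference point, take the minimum over
    all evaluated points of sqrt((dose diff / (dd * max ref dose))**2
    + (distance / dta)**2). dd is the fractional dose criterion (2%),
    dta the distance-to-agreement criterion (mm)."""
    d_norm = dd * np.max(d_ref)            # global dose normalization
    gammas = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dose_term = ((d_eval - dr) / d_norm) ** 2
        dist_term = ((x_eval - xr) / dta) ** 2
        gammas[i] = np.sqrt(np.min(dose_term + dist_term))
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1 (the usual passing criterion)."""
    return 100.0 * float(np.mean(gammas <= 1.0))
```

The 99.9% and 99.0% figures in the record are exactly such pass rates, computed over the film pixels and the patient CT voxels respectively.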

  8. Calculated mammographic spectra confirmed with attenuation curves for molybdenum, rhodium, and tungsten targets.

    PubMed

    Blough, M M; Waggener, R G; Payne, W H; Terry, J A

    1998-09-01

    A model for calculating mammographic spectra independent of measured data and fitting parameters is presented. This model is based on first principles. Spectra were calculated using various target and filter combinations such as molybdenum/molybdenum, molybdenum/rhodium, rhodium/rhodium, and tungsten/aluminum. Once the spectra were calculated, attenuation curves were calculated and compared to measured attenuation curves. The attenuation curves were calculated and measured using aluminum alloy 1100 or high-purity aluminum filtration. Percent differences were computed between the measured and calculated attenuation curves, resulting in an average difference of 5.21% for tungsten/aluminum, 2.26% for molybdenum/molybdenum, 3.35% for rhodium/rhodium, and 3.18% for molybdenum/rhodium. Calculated spectra were also compared to measured spectra from the Food and Drug Administration [Fewell and Shuping, Handbook of Mammographic X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1979)], and this comparison is also presented.

  9. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. The method was applied to a calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement among Monte Carlo calculations obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.

  10. Probability calculations for three-part mineral resource assessments

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2017-06-27

    Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features both for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely of the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.

  11. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-function-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
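    The conditional probability derived from an Archimedean copula can be turned into a transition probability matrix by differencing the conditional CDF over inflow classes. A minimal sketch using the Clayton copula (one Archimedean member; the paper does not state its parameters, so θ and the class edges below are illustrative):

```python
import numpy as np

def clayton_conditional(u, v, theta):
    """P(V <= v | U = u) for a Clayton copula, obtained as dC/du of
    C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    s = u**(-theta) + v**(-theta) - 1.0
    return u**(-theta - 1.0) * s**(-1.0 / theta - 1.0)

def transition_matrix(edges, theta):
    """Discretise inflow (in marginal-CDF space) into classes given by
    `edges` and build the class-to-class transition probability matrix."""
    n = len(edges) - 1
    mids = 0.5 * (np.asarray(edges[:-1]) + np.asarray(edges[1:]))
    P = np.empty((n, n))
    for i, u in enumerate(mids):
        cdf = np.array([clayton_conditional(u, e, theta) for e in edges])
        P[i] = np.diff(cdf)  # probability mass falling in each class
    return P / P.sum(axis=1, keepdims=True)

edges = [1e-6, 0.25, 0.5, 0.75, 1 - 1e-6]
P = transition_matrix(edges, theta=2.0)
```

With positive dependence (θ > 0), a low-inflow class is most likely to be followed by another low-inflow class, which is the persistence the transition matrix is meant to capture.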

  12. A Numerical Study of the Thermal Characteristics of an Air Cavity Formed by Window Sashes in a Double Window

    NASA Astrophysics Data System (ADS)

    Kang, Jae-sik; Oh, Eun-Joo; Bae, Min-Jung; Song, Doo-Sam

    2017-12-01

    Given that the Korean government is implementing what has been termed the energy standards and labelling program for windows, window companies are required to assign window ratings based on the experimental results of their products. Because this has added to the cost and time required for laboratory tests, a simulation system for the thermal performance of windows has been prepared to relieve these time and cost burdens. In Korea, the thermal performance of a window is usually calculated with WINDOW/THERM, complying with ISO 15099. For a single window, the simulation results are similar to experimental results. A double window is calculated using the same method, but the calculation results for this type of window are unreliable: ISO 15099 does not provide a suitable method for calculating the thermal properties of an air cavity between the window sashes of a double window, which causes a difference between the simulation and experimental results for the thermal performance of a double window. In this paper, the thermal properties of air cavities between window sashes in a double window are analyzed through computational fluid dynamics (CFD) simulations, and the results are compared to calculation results certified by ISO 15099. The surface temperature of the air cavity analyzed by CFD is compared to the experimental temperatures. These results show that an appropriate calculation method for an air cavity between window sashes in a double window should be established to obtain reliable thermal performance results for a double window.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point-kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray-tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve dose calculation relative to the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
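    For reference, the kernel superposition that the CA baseline implements reduces, in a homogeneous medium, to a convolution of the energy released per unit mass (TERMA) with a point deposition kernel; the differential-geometry warping in the abstract goes beyond this. A minimal 1D sketch with a toy kernel and beam (not the paper's data):

```python
import numpy as np

def superpose(terma, kernel):
    """Homogeneous-medium dose as the convolution of TERMA with a
    normalised point deposition kernel."""
    return np.convolve(terma, kernel / kernel.sum(), mode="same")

x = np.arange(-10, 11)
kernel = np.exp(-np.abs(x) / 2.0)          # crude symmetric deposition kernel
terma = np.zeros(101)
terma[40:61] = 1.0                          # 20-voxel-wide uniform beam
dose = superpose(terma, kernel)
```

Because the kernel is normalised and the beam sits well inside the grid, the convolution conserves the total deposited energy while smearing the sharp field edges, which is exactly where interface errors arise once the medium is no longer homogeneous.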

  14. First-Principles Study of Antimony Doping Effects on the Iron-Based Superconductor CaFe(SbxAs1-x)2

    NASA Astrophysics Data System (ADS)

    Nagai, Yuki; Nakamura, Hiroki; Machida, Masahiko; Kuroki, Kazuhiko

    2015-09-01

    We study antimony doping effects on the iron-based superconductor CaFe(SbxAs1-x)2 using first-principles calculations. The calculations reveal that the substitution of a doped antimony atom into As of the chainlike As layers is more stable than that into the FeAs layers. This prediction can be checked by experiments. Our results suggest that doping homologous elements into the chainlike As layers, which only exist in the novel 112 system, is responsible for raising the critical temperature. We discuss antimony doping effects on the electronic structure. It is found that the calculated band structures with and without the antimony doping are similar to each other within our framework.

  15. Theoretical Study on Vibrational Spectra, Detonation Properties and Pyrolysis Mechanism for Cyclic 2-Diazo-4,6-dinitrophenol

    NASA Astrophysics Data System (ADS)

    Li, Xiao-hong; Yin, Geng-xin; Zhang, Xian-zhou

    2012-10-01

    Based on the fully optimized molecular geometry obtained at the DFT-B3LYP/6-311+G** level, an intramolecular hydrogen-bond interaction is found in cyclic 2-diazo-4,6-dinitrophenol. The assigned infrared spectrum is obtained and used to compute the thermodynamic properties. The results show that there are four main characteristic regions in the calculated IR spectrum of the title compound. The detonation velocity and pressure are also evaluated using the Kamlet-Jacobs equations based on the calculated density and condensed-phase heat of formation. The thermal stability and pyrolysis mechanism of 2-diazo-4,6-dinitrophenol are investigated by calculating the bond dissociation energies at the B3LYP/6-311+G** level.
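    The Kamlet-Jacobs equations mentioned above estimate detonation velocity and pressure from the gas-product parameters and the loading density. A minimal sketch of the standard form; the parameter values below are illustrative, not the paper's computed values for the title compound:

```python
def kamlet_jacobs(N, M, Q, rho):
    """Kamlet-Jacobs estimates of detonation velocity D (km/s) and
    detonation pressure P (GPa).  N: moles of gaseous products per gram
    of explosive, M: mean molar mass of those gases (g/mol), Q: heat of
    detonation (cal/g), rho: loading density (g/cm^3)."""
    phi = N * M**0.5 * Q**0.5
    D = 1.01 * phi**0.5 * (1.0 + 1.30 * rho)
    P = 1.558 * rho**2 * phi
    return D, P

# illustrative (not computed) parameters for a dense CHNO explosive
D, P = kamlet_jacobs(N=0.030, M=30.0, Q=1200.0, rho=1.70)
```

The strong ρ-dependence of both expressions is why the abstract stresses that the calculated density feeds directly into the detonation estimates.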

  16. Boltzmann Calculations of Electron Transport in CF4 and CF4/Ar

    NASA Astrophysics Data System (ADS)

    Wang, Yicheng; van Brunt, R. J.

    1996-10-01

    A new set of electron collisional cross sections (L. G. Christophorou, J. K. Olthoff, and M. V. V. S. Rao, J. Phys. Chem. Ref. Data, submitted (May 1996)) for CF4 has been proposed, based primarily upon available experimental measurements. In this paper we present the results of calculations of the drift velocity, ionization coefficient, and attachment coefficient for electrons in CF4 based upon the new cross-section set, using a two-term Boltzmann calculation. Comparisons of the results with experimental determinations of the transport parameters, such as the drift velocity, are presented, along with comparisons of results obtained using two previously published electron-impact cross-section sets for CF4 (M. Hayashi, in Swarm Studies and Inelastic Electron-Molecule Collisions (1987); Y. Nakamura, in Gaseous Electronics and Their Applications (1991)). Additions and adjustments to the cross-section sets required for the model to achieve consistency with transport data are discussed. Research sponsored in part by the U.S. Air Force Wright Laboratory under contract F33615-96-C-2600 with the University of Tennessee. Also, Department of Physics, The University of Tennessee, Knoxville, TN.

  17. Comparison study on the calculation formula of evaporation mass flux through the plane vapour-liquid interface

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Li, Y. R.; Zhou, L. Q.; Wu, C. M.

    2017-11-01

    In order to understand the influence of various factors on the evaporation rate at the vapour-liquid interface, the evaporation process of water in a pure-steam environment was calculated based on statistical rate theory (SRT), and the results were compared with those from the traditional Hertz-Knudsen equation. It is found that the evaporation rate at the vapour-liquid interface increases with increasing evaporation temperature and evaporation temperature difference, and with decreasing vapour pressure. When the steam is in a superheated state, evaporation may occur at the vapour-liquid interface even if the temperature of the liquid phase is lower than that of the vapour phase; in this case, the absolute value of the critical temperature difference for evaporation to occur decreases with increasing vapour pressure. When the evaporation temperature difference is small, the theoretical results based on the SRT are essentially the same as those predicted by the Hertz-Knudsen equation, but the deviation between them increases with increasing temperature difference.
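    The Hertz-Knudsen equation used for comparison gives the net evaporation mass flux from kinetic theory as the difference of two one-way fluxes. A minimal sketch; the accommodation coefficient and state point below are illustrative:

```python
import math

def hertz_knudsen(p_sat_l, T_l, p_v, T_v, alpha=1.0, M=0.018015):
    """Net evaporation mass flux (kg m^-2 s^-1) from the Hertz-Knudsen
    equation.  p_sat_l: saturation pressure at the liquid temperature (Pa),
    p_v: vapour pressure (Pa), temperatures in K, alpha: accommodation
    coefficient, M: molar mass in kg/mol (default: water)."""
    R = 8.314462618  # J/(mol K)
    c = math.sqrt(M / (2.0 * math.pi * R))
    return alpha * c * (p_sat_l / math.sqrt(T_l) - p_v / math.sqrt(T_v))

# water at ~300 K evaporating into slightly under-saturated vapour
J = hertz_knudsen(p_sat_l=3540.0, T_l=300.0, p_v=3000.0, T_v=300.0)
```

The flux is positive (net evaporation) whenever the liquid-side term exceeds the vapour-side term, and vanishes at equilibrium; the SRT treatment in the abstract refines this balance for large temperature differences.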

  18. Considerations on methodological challenges for water footprint calculations.

    PubMed

    Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R

    2012-01-01

    We have investigated how different approaches to water footprint (WF) calculation lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, the results obtained from any WF calculation reflect the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed inadequate estimation of irrigation water when a widely used simple approach was applied: the method overestimated blue water and underestimated green water usage. Depending on the chosen (available) water quality standard, the final grey WF can differ by a factor of 10 or more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving-water quality standards.
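    The sensitivity of the grey WF to the chosen quality standard follows directly from its defining ratio, grey WF = L / (c_max − c_nat). A minimal sketch with illustrative numbers showing how a stricter standard inflates the result tenfold:

```python
def grey_wf(load_kg, c_max, c_nat):
    """Grey water footprint (m^3): pollutant load divided by the
    assimilation capacity of the receiving water body.
    load_kg: pollutant load (kg); c_max, c_nat: maximum-allowed and
    natural background concentrations (kg/m^3)."""
    if c_max <= c_nat:
        raise ValueError("quality standard must exceed natural background")
    return load_kg / (c_max - c_nat)

# the same pollutant load under two quality standards
wf_strict  = grey_wf(load_kg=100.0, c_max=0.011, c_nat=0.010)  # tight margin
wf_lenient = grey_wf(load_kg=100.0, c_max=0.020, c_nat=0.010)  # wide margin
```

Because the denominator is a small difference of two concentrations, modest changes in the standard change the grey WF by an order of magnitude, which is the factor-of-10 spread the abstract reports.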

  19. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors

    PubMed Central

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-01-01

    In this paper, we propose a blood pressure calculation and associated measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015
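    The calibration-curve construction described above is a PLS regression from pulse-wave features to a reference blood pressure. A minimal NIPALS-style PLS1 sketch on synthetic data (not the paper's sensor data):

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal PLS1 (NIPALS): returns regression coefficients mapping
    centred features X to centred responses y, plus the centring terms."""
    Xc, yc = X - X.mean(0), y - y.mean()
    Xr, yr = Xc.copy(), yc.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score
        p = Xr.T @ t / (t @ t)          # loading
        q = (yr @ t) / (t @ t)          # response loading
        Xr = Xr - np.outer(t, p)        # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # overall coefficients
    return B, X.mean(0), y.mean()

# synthetic "pulse wave features" and "reference blood pressure"
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0]) + 0.01 * rng.normal(size=40)
B, xm, ym = pls1_fit(X, y, n_comp=2)
pred = (X - xm) @ B + ym
```

Fitting one curve per subject versus one pooled curve is then just a matter of which rows of X and y enter the fit, which is the comparison the abstract makes.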

  20. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors.

    PubMed

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-12-28

    In this paper, we propose a blood pressure calculation and associated measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H; Guerrero, M; Prado, K

    Purpose: Building up a TG-71-based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements, its calculation accuracy, and the measurement time required. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to the applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSDs of 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set: the PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: The data set measured for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on the various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
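    The "missing" data in the minimum set are filled by linear or polynomial fits over the measured points. A minimal sketch of fitting cutout factors versus cutout size and interpolating an unmeasured size (the factor values below are illustrative, not measured data):

```python
import numpy as np

# measured cutout factors at a subset of cutout sizes (illustrative values)
sizes   = np.array([2.0, 4.0, 10.0, 20.0])   # cutout side length, cm
factors = np.array([0.92, 0.97, 1.00, 1.01])

# low-order polynomial fit over the measured subset
coeffs = np.polyfit(sizes, factors, deg=2)

# approximate the factor for an unmeasured 6 cm cutout
f6 = float(np.polyval(coeffs, 6.0))
```

Each interpolated point replaces one ionisation-chamber measurement, which is where the greater-than-50% time saving in the abstract comes from.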

  2. SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, M; Tobias, R; Pankuch, M

    Purpose: The objective was to develop a method for dose distribution calculation of spatially fractionated GRID radiotherapy (SFGRT) in the Eclipse treatment planning system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the isocenter level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence in the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially fractionated beam was generated in the axial, coronal, and sagittal planes. The physics of GRID fields can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurements were required to calculate the MUs to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create a virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.

  3. Application of ATHLET/DYN3D coupled codes system for fast liquid metal cooled reactor steady state simulation

    NASA Astrophysics Data System (ADS)

    Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.

    2017-01-01

    In this paper the approaches used to develop the BN-800 reactor test model and to validate coupled neutron-physics and thermohydraulic calculations are described. The coupled codes ATHLET 3.0 (a code for thermohydraulic calculations of reactor transients) and DYN3D (a 3-dimensional neutron kinetics code) are used for the calculations. The main calculation results for the reactor steady-state condition are provided. The 3-D model used for the neutron calculations was developed for the initial BN-800 reactor load. A homogeneous approach is used to describe the reactor assemblies. Along with the main simplifications, the main BN-800 reactor core zones are described (LEZ, MEZ, HEZ, MOX, blankets). The 3D neutron-physics calculations were performed with a 28-group library based on the evaluated nuclear data ENDF/B-7.0. The SCALE code was used for the preparation of group constants. The nodalization hydraulic model has boundary conditions on coolant mass flow rate at the core inlet and on pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the reactor nominal state. The profiling of the coolant mass flow rate through the core is based on the reactor power distribution. Test thermohydraulic calculations made with the developed model showed acceptable results for the coolant mass flow rate distribution through the reactor core and for the axial temperature and pressure distributions. The developed model will be upgraded in the future for analyses of different transients in metal-cooled fast reactors of the BN type, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).

  4. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    PubMed

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling of biomedical models of human bone from computerized tomography (CT) images: one is based on a triangular mesh, and the other, which is more popular in practice, is based on a parametric surface model. The outlines and modeling procedures of the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation was conducted. Numerical calculation results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated in relation to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is presented. The parametric-surface-based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular-mesh-based method is more robust and efficient.

  5. Full-Scale Model of Subionospheric VLF Signal Propagation Based on First-Principles Charged Particle Transport Calculations

    NASA Astrophysics Data System (ADS)

    Kouznetsov, A.; Cully, C. M.; Knudsen, D. J.

    2016-12-01

    Changes in D-region ionization caused by energetic particle precipitation are monitored by the Array for Broadband Observations of VLF/ELF Emissions (ABOVE), a network of receivers deployed across Western Canada. The observed amplitudes and phases of subionospherically propagating VLF signals from distant artificial transmitters depend sensitively on the free-electron population created by precipitation of energetic charged particles. Those include both primary (electrons, protons and heavier ions) and secondary (cascades of ionized particles and electromagnetic radiation) components. We have designed and implemented a full-scale model to predict the received VLF signals based on first-principles charged particle transport calculations coupled to the Long Wavelength Propagation Capability (LWPC) software. Calculations of ionization rates and free-electron densities are based on MCNP6 (a general-purpose Monte Carlo N-Particle code), taking advantage of its capability for coupled neutron/photon/electron transport and its novel library of cross-sections for low-energy electron and photon interactions with matter. Cosmic-ray calculations of background ionization are based on source spectra obtained both from PAMELA direct cosmic-ray spectrum measurements and from the recently implemented MCNP6 galactic cosmic-ray source, scaled using our (Calgary) neutron monitor measurement results. Conversion from calculated fluxes (MCNP F4 tallies) to ionization rates for low-energy electrons is based on the total ionization cross-sections for oxygen and nitrogen molecules from the National Institute of Standards and Technology. We use our model to explore the complexity of the physical processes affecting VLF propagation.

  6. SU-F-BRD-07: Fast Monte Carlo-Based Biological Optimization of Proton Therapy Treatment Plans for Thyroid Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan Chan Tseung, H; Ma, J; Ma, D

    2015-06-15

    Purpose: To demonstrate the feasibility of fast Monte Carlo (MC)-based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93 cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and the calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically optimized plans were compared. Results: The mean target physical and biological doses from our biologically optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created an MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target's physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
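    The LETd-to-biological-dose conversion described above is linear in LETd. A minimal sketch of such a weighting; the coefficient c is illustrative, not the paper's fitted value:

```python
import numpy as np

def biological_dose(dose, let_d, c=0.04):
    """Linear LETd-based biological dose, BD = D * (1 + c * LETd).
    c (per keV/um) is an illustrative coefficient, not a fitted value."""
    return dose * (1.0 + c * let_d)

dose  = np.array([2.0, 2.0, 2.0])    # physical dose per voxel, Gy
let_d = np.array([2.0, 5.0, 10.0])   # LETd, keV/um, rising toward end of range
bd = biological_dose(dose, let_d)
```

Under such a model, redistributing spots so that high-LETd regions coincide with the target raises BD faster than physical dose, which is the leverage the biological optimizer exploits.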

  7. Calculation of the force acting on a micro-sized particle with optical vortex array laser beam tweezers

    NASA Astrophysics Data System (ADS)

    Kuo, Chun-Fu; Chu, Shu-Chun

    2013-03-01

    Optical vortices possess several special properties, including carrying orbital angular momentum (OAM) and exhibiting zero on-axis intensity. Vortex array laser beams have attracted much interest due to their special mesh field distributions, which show great potential for applications in multiple optical traps and dark optical traps. A previous study developed an Ince-Gaussian mode (IGM)-based vortex array laser beam [1]. This study develops a simulation model based on the discrete dipole approximation (DDA) method for calculating the resultant force acting on a micro-sized spherical dielectric particle situated at the beam waist of the IGM-based vortex array laser beam [1].

  8. Property evaluations of hydrocarbon fuels under supercritical conditions based on cubic equation of state

    NASA Astrophysics Data System (ADS)

    Li, Haohan; Wu, Yong; Zeng, Xiaojun; Wang, Xiaohan; Zhao, Daiqing

    2017-06-01

    Thermophysical properties such as density, specific heat, viscosity and thermal conductivity vary sharply near the critical point, and evaluating these properties of hydrocarbons accurately is crucial to further research on fuel systems. A comparison was made with a calculation program based on four widely used equations of state (EoS), and the results indicated that calculations based on the Peng-Robinson (PR) equation of state achieve the best prediction accuracy among the four. Due to its small computational cost and high accuracy, the evaluation method proposed in this paper can be implemented in practical applications for the design of fuel systems.
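    For density, a cubic-EoS property evaluation of the kind described reduces to solving the Peng-Robinson cubic for the compressibility factor. A minimal sketch; the critical constants below approximate n-decane and the state point is illustrative:

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def pr_density(T, p, Tc, pc, omega, M):
    """Density (kg/m^3) of the vapour/supercritical root from the
    Peng-Robinson EoS.  T (K), p (Pa), critical constants Tc (K) and
    pc (Pa), acentric factor omega, molar mass M (kg/mol)."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - (T / Tc) ** 0.5)) ** 2
    A = a * alpha * p / (R * T) ** 2
    B = b * p / (R * T)
    # cubic in the compressibility factor Z
    coeffs = [1.0, -(1.0 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    z = max(r.real for r in roots if abs(r.imag) < 1e-9)  # largest real root
    return p * M / (z * R * T)

# n-decane-like surrogate at supercritical conditions (illustrative state)
rho = pr_density(T=700.0, p=3.0e6, Tc=617.7, pc=2.11e6, omega=0.49, M=0.14228)
```

In the low-pressure limit the root approaches Z = 1 and the expression recovers the ideal-gas density, which is a useful sanity check on any EoS implementation.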

  9. Counter sniper: a localization system based on dual thermal imager

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Liu, Feihu; Wu, Zheng; Jin, Weiqi; Du, Benfang

    2010-11-01

    Sniper tactics are widely used in modern warfare, which creates an urgent requirement for counter-sniper detection devices. This paper proposes an anti-sniper detection system based on a dual thermal imager. By combining the infrared characteristics of the muzzle flash and the bullet trajectory in the binocular infrared images obtained by the dual infrared imaging system, the exact location of the sniper is analyzed and calculated. This paper mainly focuses on the system design method, including the structure and parameter selection. It also analyzes the exact location calculation method based on binocular stereo vision and image analysis, and gives the fusion result as the sniper's position.
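    For a rectified parallel-axis camera pair, the binocular localization step reduces to triangulation from disparity. A minimal sketch; the focal length, baseline and image coordinates are illustrative, not the system's actual parameters:

```python
def triangulate(x_l, x_r, y, f, baseline):
    """Locate a point from a rectified binocular pair with parallel
    optical axes.  x_l/x_r: horizontal image coordinates (m) of the same
    flash in the left/right images, y: vertical image coordinate (m),
    f: focal length (m), baseline: camera separation (m)."""
    disparity = x_l - x_r
    if disparity <= 0:
        raise ValueError("point must project with positive disparity")
    Z = f * baseline / disparity          # depth
    return (x_l * Z / f, y * Z / f, Z)    # (X, Y, Z) in the left-camera frame

# muzzle flash seen 1 mm apart on the two sensors, f = 50 mm, 0.5 m baseline
X, Y, Z = triangulate(x_l=0.0030, x_r=0.0020, y=0.0010, f=0.050, baseline=0.5)
```

Since depth scales as 1/disparity, range accuracy at long distances depends critically on the baseline and on sub-pixel matching of the flash between the two thermal images.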

  10. Calculation of the electric field resulting from human body rotation in a magnetic field

    NASA Astrophysics Data System (ADS)

    Cobos Sánchez, Clemente; Glover, Paul; Power, Henry; Bowtell, Richard

    2012-08-01

    A number of recent studies have shown that the electric field and current density induced in the human body by movement in and around magnetic resonance imaging installations can exceed regulatory levels. Although it is possible to measure the induced electric fields at the surface of the body, it is usually more convenient to use numerical models to predict likely exposure under well-defined movement conditions. Whilst the accuracy of these models is not in doubt, this paper shows that modelling of particular rotational movements should be treated with care. In particular, we show that v  ×  B rather than -(v  ·  ∇)A should be used as the driving term in potential-based modelling of induced fields. Although for translational motion the two driving terms are equivalent, specific examples of rotational rigid-body motion are given where incorrect results are obtained when -(v  ·  ∇)A is employed. In addition, we show that it is important to take into account the space charge which can be generated by rotations and we also consider particular cases where neglecting the space charge generates erroneous results. Along with analytic calculations based on simple models, boundary-element-based numerical calculations are used to illustrate these findings.

  11. Effect of acidic aqueous solution on chemical and physical properties of polyamide NF membranes

    NASA Astrophysics Data System (ADS)

    Jun, Byung-Moon; Kim, Su Hwan; Kwak, Sang Kyu; Kwon, Young-Nam

    2018-06-01

    This work systematically investigated the effects of an acidic aqueous solution (15 wt% sulfuric acid as model wastewater from a smelting process) on the physical and chemical properties of commercially available nanofiltration (NF) polyamide membranes: piperazine (PIP)-based NE40/70 membranes and an m-phenylene diamine (MPD)-based NE90 membrane. Surface properties of the membranes were studied before and after exposure to the strong acid using various analytical tools: scanning electron microscopy (SEM), attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), X-ray photoelectron spectroscopy (XPS), time-of-flight secondary ion mass spectrometry (ToF-SIMS), a contact angle analyzer, and an electrophoretic light scattering spectrophotometer. The characterization and permeation results showed that the piperazine-based NE40/70 membranes have lower acid resistance than the MPD-based NE90 membrane. Furthermore, density functional theory (DFT) calculations were conducted to reveal the different acid tolerances of the piperazine-based and MPD-based polyamide membranes. The most facile protonation was found to be that of the oxygen in the piperazine-based monomer, and the N-protonation of the monomer had the lowest energy barrier in the rate-determining step (RDS). The calculations were fully compatible with the surface characterization results. In addition, the energy barrier in the RDS is highly correlated with the twist angle (τD), which determines the delocalization of electrons between the carbonyl πCO bond and the nitrogen lone pair, and the trend in the twist angle was maintained in longer molecules (dimer and trimer). This study explains why the semi-aromatic membrane (NE40/70) is chemically less stable than the aromatic membrane (NE90), given the surface characterizations and DFT calculation results.

  12. [Forced Oscillations of DNA Bases].

    PubMed

    Yakushevich, L V; Krasnobaeva, L A

    2016-01-01

    This paper presents the results of a study of forced angular oscillations of DNA bases using a mathematical model consisting of two coupled nonlinear differential equations that take into account the effects of dissipation and the influence of an external periodic field. The calculation results are illustrated for the sequence of the gene encoding interferon alpha 17 (IFNA 17).
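    The abstract does not give the model's exact potential, so the following is a toy stand-in: two coupled, damped, periodically driven angular oscillators integrated with classical RK4. All parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def forced_base_oscillations(beta=0.5, k=1.0, c=0.3, F=0.2, w=0.8,
                             t_end=100.0, dt=0.01):
    """RK4 integration of a toy pair of driven, damped angular oscillators:
       phi_i'' = -beta*phi_i' - k*sin(phi_i) + c*(phi_j - phi_i) + F*cos(w t)
    (a sketch of the kind of system the abstract describes; parameters are
    illustrative). Returns times and the [phi1, phi2, v1, v2] trajectory."""
    def rhs(t, y):
        p1, p2, v1, v2 = y
        a1 = -beta*v1 - k*np.sin(p1) + c*(p2 - p1) + F*np.cos(w*t)
        a2 = -beta*v2 - k*np.sin(p2) + c*(p1 - p2) + F*np.cos(w*t)
        return np.array([v1, v2, a1, a2])

    n = int(t_end / dt)
    ts = np.linspace(0.0, t_end, n, endpoint=False)
    y = np.zeros(4)                    # start from rest
    traj = np.empty((n, 4))
    for i, t in enumerate(ts):
        traj[i] = y
        k1 = rhs(t, y)
        k2 = rhs(t + dt/2, y + dt/2*k1)
        k3 = rhs(t + dt/2, y + dt/2*k2)
        k4 = rhs(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return ts, traj

ts, traj = forced_base_oscillations()
```

    With dissipation present, the transient decays and the bases settle into bounded oscillations at the driving frequency, which is the qualitative behavior such models exhibit below the escape threshold.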

  13. Complexity metric based on fraction of penumbra dose - initial study

    NASA Astrophysics Data System (ADS)

    Bäck, A.; Nordström, F.; Gustafsson, M.; Götstedt, J.; Karlsson Hauer, A.

    2017-05-01

    Volumetric modulated arc therapy improves radiotherapy outcomes for many patients compared to conventional three-dimensional conformal radiotherapy, but requires a more extensive, most often measurement-based, quality assurance. Multi-leaf collimator (MLC) aperture-based complexity metrics have been suggested as a means of distinguishing complex treatment plans unsuitable for treatment without time-consuming measurements. This study introduces a spatially resolved complexity score that correlates with the fraction of penumbra dose and gives information on the spatial distribution and the clinical relevance of the calculated complexity. The complexity metric is described, and an initial study on the correlation between the complexity score and the difference between measured and calculated dose for 30 MLC openings is presented. The complexity scores were found to correlate with the differences between measurements and calculations with a Pearson's r-value of 0.97.
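    The reported r-value is a plain Pearson correlation between per-opening scores and dose differences; a self-contained sketch of that statistic (the data here are made up for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx)**2 for a in x))
    sy = math.sqrt(sum((b - my)**2 for b in y))
    return sxy / (sx * sy)
```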

  14. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
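    The mechanism studied here, error in the data field propagating through a Poisson solve, can be illustrated with a minimal 2D finite-difference example. Grid size, boundary conditions, and the perturbation below are illustrative choices, not the paper's setup:

```python
import numpy as np

def solve_poisson(f, n_iter=5000):
    """Jacobi solve of  laplacian(p) = f  on the unit square with p = 0 on
    the boundary (a minimal stand-in for a pressure Poisson solver)."""
    n = f.shape[0]
    h = 1.0 / (n - 1)
    p = np.zeros_like(f)
    for _ in range(n_iter):
        p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                p[1:-1, :-2] + p[1:-1, 2:] -
                                h * h * f[1:-1, 1:-1])
    return p

n = 33
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
p_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * p_exact            # source consistent with p_exact

p_clean = solve_poisson(f)
# perturb the data (as PIV noise would) and re-solve
p_noisy = solve_poisson(f + 0.1 * np.pi**2 *
                        np.sin(3 * np.pi * X) * np.sin(3 * np.pi * Y))
```

    The perturbation in the recovered pressure stays bounded in proportion to the data perturbation, with the constant depending on the domain and boundary conditions, which is the kind of error bound the paper quantifies analytically.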

  15. Design and simulation of GaN based Schottky betavoltaic nuclear micro-battery.

    PubMed

    San, Haisheng; Yao, Shulin; Wang, Xiang; Cheng, Zaijun; Chen, Xuyuan

    2013-10-01

    The current paper presents a theoretical analysis of Ni-63 nuclear micro-battery based on a wide-band gap semiconductor GaN thin-film covered with thin Ni/Au films to form Schottky barrier for carrier separation. The total energy deposition in GaN was calculated using Monte Carlo methods by taking into account the full beta spectral energy, which provided an optimal design on Schottky barrier width. The calculated results show that an 8 μm thick Schottky barrier can collect about 95% of the incident beta particle energy. Considering the actual limitations of current GaN growth technique, a Fe-doped compensation technique by MOCVD method can be used to realize the n-type GaN with a carrier concentration of 1×10(15) cm(-3), by which a GaN based Schottky betavoltaic micro-battery can achieve an energy conversion efficiency of 2.25% based on the theoretical calculations of semiconductor device physics. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.

    2016-04-01

    This paper analyzes the performance of A loss coefficients in evaluating transmission losses in a dynamic economic dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients against the load flow solution obtained by the standard Newton-Raphson (NR) method. A density-based clustering method based on connected regions with sufficiently high density (DBSCAN) is employed to identify the best regions of the A and B loss coefficients. Based on the results obtained through cluster analysis, a novel approach to improving the accuracy of network loss calculation is proposed: the loss coefficients are updated according to the change in per-unit load values between load intervals before the transmission losses are calculated. The proposed algorithm is tested and validated on the IEEE 6-bus, IEEE 14-bus, IEEE 30-bus, and IEEE 118-bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), an open-source software package.
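    B loss coefficients enter through Kron's quadratic loss formula. A minimal sketch of that evaluation; the two-generator coefficient values are invented for illustration and are not from the paper:

```python
import numpy as np

def kron_loss(P, B, B0, B00):
    """Transmission loss from Kron's B-coefficient formula (per-unit):
       P_loss = P^T B P + B0^T P + B00
    where P is the vector of generator outputs."""
    P = np.asarray(P, dtype=float)
    return float(P @ B @ P + B0 @ P + B00)

# illustrative 2-generator coefficients (not from the paper)
B = np.array([[0.00010, 0.00002],
              [0.00002, 0.00013]])
B0 = np.array([0.0003, 0.0002])
B00 = 0.00005
loss = kron_loss([1.2, 0.8], B, B0, B00)
```

    The paper's point is that coefficients fitted at one operating point lose accuracy as the load moves away from it, hence the interval-by-interval update.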

  17. Experimental determination of the effective strong coupling constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandre Deur; Volker Burkert; Jian-Ping Chen

    2007-07-01

    We extract an effective strong coupling constant from low Q{sup 2} data on the Bjorken sum. Using sum rules, we establish its Q{sup 2}-behavior over the complete Q{sup 2}-range. The result is compared to effective coupling constants extracted from different processes and to calculations based on Schwinger-Dyson equations, hadron spectroscopy or lattice QCD. Although the connection between the experimentally extracted effective coupling constant and the calculations is not clear, the results agree surprisingly well.

  18. Parametric Design and Mechanical Analysis of Beams based on SINOVATION

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Shen, W. D.; Yang, D. Y.; Liu, W. M.

    2017-07-01

    In engineering practice, engineers need to carry out complicated calculations when the loads on a beam are complex. These analysis and calculation processes take a lot of time, and their results are unreliable. Therefore, VS2005 and ADK were used to develop software for beam design based on the 3D CAD software SINOVATION, using the C++ programming language. The software can perform the mechanical analysis and parameterized design of various types of beams and output a design report in HTML format. The efficiency and reliability of beam design are thereby improved.
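    The mechanical analysis such a tool automates rests on classical Euler-Bernoulli beam results. One standard case as a sketch (the section and load values are illustrative, not from the paper):

```python
def beam_udl(w, L, E, I):
    """Maximum bending moment and mid-span deflection of a simply supported
    beam under a uniformly distributed load w (N/m), span L (m), Young's
    modulus E (Pa), and second moment of area I (m^4):
       M_max = w L^2 / 8,   delta_max = 5 w L^4 / (384 E I)."""
    M_max = w * L**2 / 8.0
    delta_max = 5.0 * w * L**4 / (384.0 * E * I)
    return M_max, delta_max

# illustrative steel beam: w = 10 kN/m, L = 6 m, E = 210 GPa, I = 8.36e-5 m^4
M, d = beam_udl(10e3, 6.0, 210e9, 8.36e-5)
```

    A parametric design tool tabulates such formulas per load case and section type, then checks the results against allowable stress and deflection limits.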

  19. Cumulus cloud model estimates of trace gas transports

    NASA Technical Reports Server (NTRS)

    Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.

    1989-01-01

    Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.

  20. Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.

    2017-10-01

    There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on the finite-difference solutions of the Boltzmann nonlinear kinetic equation and on the solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of the method depends on the problem to be solved and on parameters of calculated flows. Qualitative theoretical arguments help to determine the range of parameters of effectively solved problems for each method; however, it is advisable to perform comparative tests of calculations of the classical problems performed by different methods and with different parameters to have quantitative confirmation of this reasoning. The paper provides the results of the calculations performed by the authors with the help of the Direct Simulation Monte Carlo method and finite-difference methods of solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are made on selecting a particular method for flow simulations in various ranges of flow parameters.

  1. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    NASA Astrophysics Data System (ADS)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.

  2. Cd hyperfine interactions in DNA bases and DNA of mouse strains infected with Trypanosoma cruzi investigated by perturbed angular correlation spectroscopy and ab initio calculations.

    PubMed

    Petersen, Philippe A D; Silva, Andreia S; Gonçalves, Marcos B; Lapolli, André L; Ferreira, Ana Maria C; Carbonari, Artur W; Petrilli, Helena M

    2014-06-03

    In this work, perturbed angular correlation (PAC) spectroscopy is used to study differences in the nuclear quadrupole interactions of Cd probes in DNA molecules of mice infected with the Y-strain of Trypanosoma cruzi. The possibility of investigating the local genetic alterations in DNA, which occur along generations of mice infected with T. cruzi, using hyperfine interactions obtained from PAC measurements and density functional theory (DFT) calculations in DNA bases is discussed. A comparison of DFT calculations with PAC measurements could determine the type of Cd coordination in the studied molecules. To the best of our knowledge, this is the first attempt to use DFT calculations and PAC measurements to investigate the local environment of Cd ions bound to DNA bases in mice infected with Chagas disease. The obtained results also allowed the detection of local changes occurring in the DNA molecules of different generations of mice infected with T. cruzi, opening the possibility of using this technique as a complementary tool in the characterization of complicated biological systems.

  3. Dosimetric evaluation of intrafractional tumor motion by means of a robot driven phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richter, Anne; Wilbert, Juergen; Flentje, Michael

    2011-10-15

    Purpose: The aim of the work was to investigate the influence of intrafractional tumor motion to the accumulated (absorbed) dose. The accumulated dose was determined by means of calculations and measurements with a robot driven motion phantom. Methods: Different motion scenarios and compensation techniques were realized in a phantom study to investigate the influence of motion on image acquisition, dose calculation, and dose measurement. The influence of motion on the accumulated dose was calculated by employing two methods (a model based and a voxel based method). Results: Tumor motion resulted in a blurring of steep dose gradients and a reductionmore » of dose at the periphery of the target. A systematic variation of motion parameters allowed the determination of the main influence parameters on the accumulated dose. The key parameters with the greatest influence on dose were the mean amplitude and the pattern of motion. Investigations on necessary safety margins to compensate for dose reduction have shown that smaller safety margins are sufficient, if the developed concept with optimized margins (OPT concept) was used instead of the standard internal target volume (ITV) concept. Both calculation methods were a reasonable approximation of the measured dose with the voxel based method being in better agreement with the measurements. Conclusions: Further evaluation of available systems and algorithms for dose accumulation are needed to create guidelines for the verification of the accumulated dose.« less

  4. Modeling and calculation of impact friction caused by corner contact in gear transmission

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Chen, Siyu

    2014-09-01

    Corner contact in a gear pair causes vibration and noise, which has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process of corner contact is divided into two stages, impact and scratch, and a calculation model including gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash, and tooth profile modification on the line of action. The combined tooth compliance of the first point of corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. By combining the equivalent error with the combined deflection, a criterion for the position of the point of corner contact is established. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, a backlash model during corner contact is founded, and the impact force and friction coefficient are quantified. A numerical example is performed, and the averaged impact friction coefficient based on the presented calculation method is validated. This research provides results that can be used to understand the complex mechanism of tooth impact friction, to quantitatively calculate the friction force and coefficient, and to support exact gear design for tribology.

  5. Assessment of the 3He pressure inside the CABRI transient rods - Development of a surrogate model based on measurements and complementary CFD calculations

    NASA Astrophysics Data System (ADS)

    Clamens, Olivier; Lecerf, Johann; Hudelot, Jean-Pascal; Duc, Bertrand; Cadiou, Thierry; Blaise, Patrick; Biard, Bruno

    2018-01-01

    CABRI is an experimental pulse reactor, funded by the French Nuclear Safety and Radioprotection Institute (IRSN) and operated by CEA at the Cadarache research center. It is designed to study fuel behavior under RIA conditions. In order to produce the power transients, reactivity is injected by depressurization of a neutron absorber (3He) contained in transient rods inside the reactor core. The shapes of the power transients depend on the total amount of reactivity injected and on the injection speed. The injected reactivity can be calculated by converting the 3He gas density into units of reactivity, so it is of utmost importance to properly master the gas density evolution in the transient rods during a power transient. The 3He depressurization was studied by CFD calculations and complemented with measurements using pressure transducers. The CFD calculations show that the density evolution is slower than the pressure drop. Surrogate models were built based on the CFD calculations and validated against preliminary tests in the CABRI transient system. The studies also show that it is harder to predict the depressurization during the power transients because of neutron/3He capture reactions that induce gas heating. This phenomenon can be studied with a multiphysics approach, calculating the reaction rates with a Monte Carlo code and studying the resulting heating effect with the validated CFD simulation.

  6. Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT

    NASA Astrophysics Data System (ADS)

    Maspero, Matteo; Seevinck, Peter R.; Schubert, Gerald; Hoesl, Michaela A. U.; van Asselen, Bram; Viergever, Max A.; Lagendijk, Jan J. W.; Meijer, Gert J.; van den Berg, Cornelis A. T.

    2017-02-01

    Magnetic resonance (MR)-only radiotherapy treatment planning requires pseudo-CT (pCT) images to enable MR-based dose calculations. To verify the accuracy of MR-based dose calculations, institutions interested in introducing MR-only planning will have to compare pCT-based and computed tomography (CT)-based dose calculations. However, interpreting such comparison studies may be challenging, since potential differences arise from a range of confounding factors which are not necessarily specific to MR-only planning. Therefore, the aim of this study is to identify and quantify the contribution of factors confounding dosimetric accuracy estimation in comparison studies between CT and pCT. The following factors were distinguished: set-up and positioning differences between imaging sessions, MR-related geometric inaccuracy, pCT generation, use of specific calibration curves to convert pCT into electron density information, and registration errors. The study comprised fourteen prostate cancer patients who underwent CT/MRI-based treatment planning. To enable pCT generation, a commercial solution (MRCAT, Philips Healthcare, Vantaa, Finland) was adopted. IMRT plans were calculated on CT (gold standard) and pCTs. Dose difference maps in a high-dose region (CTV) and in the body volume were evaluated, and the contribution to dose errors of the possible confounding factors was individually quantified. We found that the largest confounding factor leading to dose differences was the use of different calibration curves to convert pCT and CT into electron density (0.7%). The second largest factor was pCT generation, which resulted in a pCT stratified into a fixed number of tissue classes (0.16%). Inter-scan differences due to patient repositioning, MR-related geometric inaccuracy, and registration errors did not significantly contribute to dose differences (0.01%).
The proposed approach successfully identified and quantified the factors confounding accurate MRI-based dose calculation in the prostate. This study will be valuable for institutions interested in introducing MR-only dose planning in their clinical practice.

  7. Assessment of Spanish Panel Reactive Antibody Calculator and Potential Usefulness.

    PubMed

    Asensio, Esther; López-Hoyos, Marcos; Romón, Íñigo; Ontañón, Jesús; San Segundo, David

    2017-01-01

    The calculated panel reactive antibody (cPRA) values necessary for kidney donor-pair exchange and highly sensitized programs are estimated using different panel reactive antibody (PRA) calculators based on sufficiently large samples on the Eurotransplant (EUTR), United Network for Organ Sharing (UNOS), and Canadian Transplant Registry (CTR) websites. However, those calculators can vary depending on the ethnic group to which they are applied. Here, we develop the PRA calculator used in the Spanish Program of Transplant Access for Highly Sensitized Patients (PATHI) and validate it against the EUTR, UNOS, and CTR calculators. The anti-human leukocyte antigen (HLA) antibody profiles of 42 sensitized patients on the waiting list were defined, and cPRA was calculated with the different PRA calculators. Despite the different allelic frequencies derived from population differences in the donor panel of each calculator, no differences in cPRA between the four calculators were observed. The PATHI calculator includes anti-DQA1 antibody profiles in the cPRA calculation; however, no improvement in the total cPRA of highly sensitized patients was demonstrated. The PATHI calculator provides cPRA results comparable to those from the EUTR, UNOS, and CTR calculators and serves as a tool for developing valid calculators in geographical and ethnic areas other than Europe, the USA, and Canada.
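    Conceptually, a cPRA calculator estimates the fraction of a representative donor panel carrying at least one antigen unacceptable to the patient, which is why the panel's allelic frequencies matter. A toy sketch of that principle; the panel typings and antigen names below are invented for illustration and real calculators work from population haplotype frequencies rather than an explicit donor list:

```python
def cpra(donor_panel, unacceptable):
    """Fraction of a donor HLA typing panel carrying at least one
    unacceptable antigen (the principle behind cPRA calculators)."""
    unacceptable = set(unacceptable)
    hits = sum(1 for donor in donor_panel if unacceptable & set(donor))
    return hits / len(donor_panel)

# illustrative 4-donor panel of HLA typings (not real frequencies)
panel = [{"A1", "A2", "B8"}, {"A2", "A3", "B7"},
         {"A1", "A24", "B44"}, {"A11", "A24", "B35"}]
```

    Two panels with different allelic frequencies can therefore return different cPRA values for the same antibody profile, which is the population effect the study set out to test.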

  8. First principles investigation of structural, mechanical, dynamical and thermodynamic properties of AgMg under pressure

    NASA Astrophysics Data System (ADS)

    Cui, Rong Hua; Chao Dong, Zheng; Gui Zhong, Chong

    2017-12-01

    The effects of pressure on the structural, mechanical, dynamical, and thermodynamic properties of AgMg have been investigated using first-principles calculations based on density functional theory. The optimized lattice constants agree well with previous experimental and theoretical results. The bulk modulus, shear modulus, Young's modulus, Poisson's ratio, and Debye temperature under pressure were calculated. The calculated Cauchy pressure and B/G ratio indicate that AgMg is ductile in nature. Phonon dispersion curves confirm the dynamical stability of AgMg. The pressure-dependent behavior of the thermodynamic properties was also calculated: the Helmholtz free energy and internal energy increase with increasing pressure, while the entropy and heat capacity decrease.

  9. Gravimetric surveys for assessing rock mass condition around a mine shaft

    NASA Astrophysics Data System (ADS)

    Madej, Janusz

    2017-06-01

    The fundamentals of the vertical gravimetric surveying method in mine shafts are presented in this paper. The methods of gravimetric measurement and the calculation of interval and complex density are discussed in detail. The density calculations are based on an original method that accounts for the gravity influence of the mine shaft itself, thus guaranteeing closeness of the calculated and real density values of the rocks beyond the shaft lining. The results of many gravimetric surveys performed in shafts are presented and interpreted. As a result, information about the location of heterogeneous zones in the rock mass beyond the shaft lining is obtained. In many cases, such zones have threatened the safe operation of machines and utilities in the shaft.

  10. The consideration of atmospheric stability within wind farm AEP calculations

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard

    2016-09-01

    The annual energy production of an existing wind farm, including thermal stratification, is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady-state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake-modelling calculation, in which a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.
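    The Jensen (Park) model referenced here reduces a wake to a linearly expanding top-hat deficit. A minimal sketch of the neutral-stratification form, before any of the paper's transformation is applied (the numerical inputs are illustrative):

```python
import math

def jensen_deficit(Ct, k, x, D):
    """Fractional velocity deficit a distance x downstream of a turbine with
    rotor diameter D, thrust coefficient Ct, and wake decay constant k, from
    the classic Jensen/Park model:
       1 - u/u0 = (1 - sqrt(1 - Ct)) * (r0 / (r0 + k*x))^2."""
    r0 = D / 2.0
    return (1.0 - math.sqrt(1.0 - Ct)) * (r0 / (r0 + k * x))**2

# e.g. Ct = 0.8, k = 0.05, 5 diameters downstream of a D = 100 m rotor
deficit = jensen_deficit(0.8, 0.05, 500.0, 100.0)
```

    Stratification enters through the wake decay constant k (smaller in stable conditions, so wakes persist farther), which is one way a transformation model can act on this baseline.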

  11. Truncated Sum Rules and Their Use in Calculating Fundamental Limits of Nonlinear Susceptibilities

    NASA Astrophysics Data System (ADS)

    Kuzyk, Mark G.

    Truncated sum rules have been used to calculate the fundamental limits of the nonlinear susceptibilities and the results have been consistent with all measured molecules. However, given that finite-state models appear to result in inconsistencies in the sum rules, it may seem unclear why the method works. In this paper, the assumptions inherent in the truncation process are discussed and arguments based on physical grounds are presented in support of using truncated sum rules in calculating fundamental limits. The clipped harmonic oscillator is used as an illustration of how the validity of truncation can be tested and several limiting cases are discussed as examples of the nuances inherent in the method.

  12. Improvement of fire-tube boilers calculation methods by the numerical modeling of combustion processes and heat transfer in the combustion chamber

    NASA Astrophysics Data System (ADS)

    Komarov, I. I.; Rostova, D. M.; Vegera, A. N.

    2017-11-01

    This paper presents the results of a study determining the degree and nature of the influence of burner unit operating conditions and flame geometric parameters on the heat transfer in the combustion chamber of fire-tube boilers. The changes in the outlet gas temperature and in the radiant and convective specific heat flow rates with variation of the flame expansion angle and length were determined using the Ansys CFX software package. The differences between the total heat flow and the bulk gas temperature at the flue tube outlet calculated using the known thermal calculation methods and the values obtained from the mathematical simulation were determined. Based on the results of the study, shortcomings of the calculation methods in use were identified and areas for their improvement were outlined.

  13. A new method for calculating ecological flow: Distribution flow method

    NASA Astrophysics Data System (ADS)

    Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei

    2018-04-01

    A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed to study the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM and its ecological flow index and evaluation grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological ecological flow calculation methods, a flow evaluation method, and the calculated fish ecological flow. Results show that the DFM accounts for the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distribution during the year. The method also satisfies the actual runoff demand of river ecosystems, outperforms the traditional hydrological methods, and shows high space-time applicability and application value.

  14. Calculation and experimental validation of spectral properties of microsize grains surrounded by nanoparticles.

    PubMed

    Yu, Haitong; Liu, Dong; Duan, Yuanyuan; Wang, Xiaodong

    2014-04-07

    Opacified aerogels are particulate thermal insulating materials in which micrometric opacifier mineral grains are surrounded by silica aerogel nanoparticles. A geometric model was developed to characterize the spectral properties of such microsize grains surrounded by much smaller particles. The model represents the material's microstructure with the spherical opacifier's spectral properties calculated using the multi-sphere T-matrix (MSTM) algorithm. The results are validated by comparing the measured reflectance of an opacified aerogel slab against the value predicted using the discrete ordinate method (DOM) based on calculated optical properties. The results suggest that the large particles embedded in the nanoparticle matrices show different scattering and absorption properties from the single scattering condition and that the MSTM and DOM algorithms are both useful for calculating the spectral and radiative properties of this particulate system.

  15. LabKey Server NAb: A tool for analyzing, visualizing and sharing results from neutralizing antibody assays

    PubMed Central

    2011-01-01

    Background Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. 
Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server. PMID:21619655

  16. Scaling Atomic Partial Charges of Carbonate Solvents for Lithium Ion Solvation and Diffusion

    DOE PAGES

    Chaudhari, Mangesh I.; Nair, Jijeesh R.; Pratt, Lawrence R.; ...

    2016-10-21

Lithium-ion solvation and diffusion properties in ethylene carbonate (EC) and propylene carbonate (PC) were studied by molecular simulation, experiments, and electronic structure calculations. Studies carried out in water provide a reference for interpretation. Classical molecular dynamics simulation results are compared to ab initio molecular dynamics to assess nonpolarizable force field parameters for solvation structure of the carbonate solvents. Quasi-chemical theory (QCT) was adapted to take advantage of fourfold occupancy of the near-neighbor solvation structure observed in simulations and used to calculate solvation free energies. The computed free energy for transfer of Li+ to PC from water, based on electronic structure calculations with cluster-QCT, agrees with the experimental value. The simulation-based direct-QCT results with scaled partial charges agree with the electronic structure-based QCT values. The computed Li+/PF6- transference numbers of 0.35/0.65 (EC) and 0.31/0.69 (PC) agree well with NMR experimental values of 0.31/0.69 (EC) and 0.34/0.66 (PC) and similar values obtained here with impedance spectroscopy. These combined results demonstrate that solvent partial charges can be scaled in systems dominated by strong electrostatic interactions to achieve trends in ion solvation and transport properties that are comparable to ab initio and experimental results. Thus, the results support the use of scaled partial charges in simple, nonpolarizable force fields in future studies of these electrolyte solutions.
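The ideal relation between self-diffusion coefficients and transference numbers implicit in the abstract can be sketched as follows. This is a minimal Nernst-Einstein estimate with illustrative values, not the paper's actual simulation or NMR data:

```python
def transference_numbers(d_cation, d_anion):
    """Ideal (Nernst-Einstein) transference numbers of a 1:1 electrolyte
    from self-diffusion coefficients; by construction t+ + t- = 1."""
    total = d_cation + d_anion
    return d_cation / total, d_anion / total
```

For example, a cation diffusivity roughly half that of the anion gives t+ about 0.33, the same regime as the 0.31-0.35 values reported for Li+ in EC and PC.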

  17. Development of a Knowledge Base of Ti-Alloys From First-Principles and Thermodynamic Modeling

    NASA Astrophysics Data System (ADS)

    Marker, Cassie

An aging population with an active lifestyle requires the development of better load-bearing implants, which have high levels of biocompatibility and a low elastic modulus. Titanium alloys, in the body centered cubic phase, are great implant candidates, due to their mechanical properties and biocompatibility. The present work aims at investigating the thermodynamic and elastic properties of bcc Ti-alloys, using integrated first-principles calculations based on Density Functional Theory (DFT) and the CALculation of PHAse Diagrams (CALPHAD) method. The use of integrated first-principles calculations based on DFT and CALPHAD modeling has greatly reduced the need for trial and error metallurgy, which is ineffective and costly. The phase stability of Ti-alloys has been shown to greatly affect their elastic properties. Traditionally, CALPHAD modeling has been used to predict the equilibrium phase formation, but in the case of Ti-alloys, predicting the formation of the two metastable phases, ω and α″, is of great importance as these phases also drastically affect the elastic properties. To build a knowledge base of Ti-alloys, for biomedical load-bearing implants, the Ti-Mo-Nb-Sn-Ta-Zr system was studied because of the biocompatibility and the bcc stabilizing effects of some of the elements. With the focus on bcc Ti-rich alloys, a database of thermodynamic descriptions of each phase for the pure elements, binary and Ti-rich ternary alloys was developed in the present work. Previous thermodynamic descriptions for the pure elements were adopted from the widely used SGTE database for global compatibility. The previous binary and ternary models from the literature were evaluated for accuracy and new thermodynamic descriptions were developed when necessary. The models were evaluated using available experimental data, as well as the enthalpy of formation of the bcc phase obtained from first-principles calculations based on DFT. 
The thermodynamic descriptions were combined into a database ensuring that the sublattice models are compatible with each other. For subsystems, such as the Sn-Ta system, where no thermodynamic description had been evaluated and minimal experimental data was available, first-principles calculations based on DFT were used. The Sn-Ta system has two intermetallic phases, TaSn2 and Ta3Sn, with three solution phases: bcc, body centered tetragonal (bct) and diamond. First-principles calculations were completed on the intermetallic and solution phases. Special quasirandom structures (SQS) were used to obtain information about the solution phases across the entire composition range. The Debye-Grüneisen approach, as well as the quasiharmonic phonon method, were used to obtain the finite-temperature data. Results from the first-principles calculations and experiments were used to complete the thermodynamic description. The resulting phase diagram reproduced the first-principles calculations and experimental data accurately. In order to determine the effect of alloying on the elastic properties, first-principles calculations based on DFT were systematically done on the pure elements, five Ti-X binary systems and Ti-X-Y ternary systems (X ≠ Y = Mo, Nb, Sn, Ta, Zr) in the bcc phase. The first-principles calculations predicted the single crystal elastic stiffness constants cij's. Correspondingly, the polycrystalline aggregate properties were also estimated from the cij's, including bulk modulus B, shear modulus G and Young's modulus E. The calculated results showed good agreement with experimental results. The CALPHAD method was then adapted to assist in the database development of the elastic properties as a function of composition. On average, the database predicted the elastic properties of higher order Ti-alloys within 5 GPa of the experimental results. Finally, the formation of the metastable ω and α″ phases was studied in the Ti-Ta and Ti-Nb systems. 
The formation energy of these phases, calculated from first-principles at 0 K, showed that the phases have formation energies similar to those of the bcc and hcp phases. Inelastic neutron scattering was completed on four different Ti-Nb compositions to study the entropy of the phases as well as the transformations occurring when the phases form and the phase fractions. Ongoing work is being done to use the experimental information to introduce thermodynamic descriptions for these two phases in the Ti-Nb system in order to be able to predict the formation and phase fractions. DFT-based first-principles calculations were used to predict the effect these phases have on the elastic properties, and a rule of mixtures was used to determine the elastic properties of multi-phase alloys. The results were compared with experiments and showed that if the ongoing modeling can predict the phase fraction, the elastic database can accurately predict the elastic properties of the ω and α″ phases. This thesis provides a knowledge base of the thermodynamic and elastic properties of Ti-alloys from computational thermodynamics. The databases created will impact research activities on Ti-alloys and specifically efforts focused on Ti-alloys for biomedical applications.

  18. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs.

    PubMed

    Liu, Kuan-Yu; Herbert, John M

    2017-10-28

Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
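The truncated many-body expansion at the heart of this record can be sketched as follows. The `subsystem_energy` callback and the toy pairwise energies in the test are illustrative stand-ins, not the quantum-chemistry calculations of the paper:

```python
from itertools import combinations

def mbe_energy(n_monomers, subsystem_energy, max_order):
    """Truncated many-body expansion: the total energy is approximated
    as the sum of 1-body energies plus 2-body, 3-body, ... interaction
    corrections up to max_order. subsystem_energy maps a tuple of
    monomer indices to that subsystem's energy."""
    def interaction(subset):
        # n-body interaction term via inclusion-exclusion over proper subsets
        e = subsystem_energy(subset)
        for k in range(1, len(subset)):
            for sub in combinations(subset, k):
                e -= interaction(sub)
        return e
    total = 0.0
    for order in range(1, max_order + 1):
        for subset in combinations(range(n_monomers), order):
            total += interaction(subset)
    return total
```

For a strictly pairwise-additive toy energy the 2-body truncation is already exact; in real clusters such as (H2O)37, where higher-order polarization matters, the abstract's point is that the 4-body truncation (plus CP corrections and cutoffs) becomes necessary.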

  19. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs

    NASA Astrophysics Data System (ADS)

    Liu, Kuan-Yu; Herbert, John M.

    2017-10-01

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.

  20. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on the structured grid that is maximally parallelizable, with the discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. 
Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  1. Microscopic Study of the 6Li(p, α)3He Reaction at Low Energies

    NASA Astrophysics Data System (ADS)

    Solovyev, Alexander; Igashov, Sergey

    2018-01-01

    The 6Li(p, α)3He reaction important for nuclear astrophysics is studied in the framework of a microscopic approach based on a multichannel algebraic version of the resonating group model. Astrophysical S-factor for the reaction is calculated at low energies. The obtained result is compared with experimental data and other theoretical calculations.

  2. Accuracy of Standing-Tree Volume Estimates Based on McClure Mirror Caliper Measurements

    Treesearch

    Noel D. Cost

    1971-01-01

    The accuracy of standing-tree volume estimates, calculated from diameter measurements taken by a mirror caliper and with sectional aluminum poles for height control, was compared with volume estimates calculated from felled-tree measurements. Twenty-five trees which varied in species, size, and form were used in the test. The results showed that two estimates of total...

  3. On the vertical resolution for near-nadir looking spaceborne rain radar

    NASA Astrophysics Data System (ADS)

    Kozu, Toshiaki

    A definition of radar resolution for an arbitrary direction is proposed and used to calculate the vertical resolution for a near-nadir looking spaceborne rain radar. Based on the calculation result, a scanning strategy is proposed which efficiently distributes the measurement time to each angle bin and thus increases the number of independent samples compared with a simple linear scanning.

  4. Task constraints and minimization of muscle effort result in a small number of muscle synergies during gait.

    PubMed

    De Groote, Friedl; Jonkers, Ilse; Duysens, Jacques

    2014-01-01

Finding muscle activity generating a given motion is a redundant problem, since there are many more muscles than degrees of freedom. The control strategies determining muscle recruitment from a redundant set are still poorly understood. One theory of motor control suggests that motion is produced through activating a small number of muscle synergies, i.e., muscle groups that are activated in a fixed ratio by a single input signal. Because of the reduced number of input signals, synergy-based control is low dimensional. A major criticism of the theory of synergy-based muscle control, however, is that muscle synergies might reflect task constraints rather than a neural control strategy. Another theory of motor control suggests that muscles are recruited by optimizing performance. Optimization of performance has been widely used to calculate muscle recruitment underlying a given motion while assuming independent recruitment of muscles. If synergies indeed determine muscle recruitment underlying a given motion, optimization approaches that do not model synergy-based control could result in muscle activations that do not show the synergistic muscle action observed through electromyography (EMG). If, however, synergistic muscle action results from performance optimization and task constraints (joint kinematics and external forces), such optimization approaches are expected to result in low-dimensional synergistic muscle activations that are similar to EMG-based synergies. We calculated muscle recruitment underlying experimentally measured gait patterns by optimizing performance assuming independent recruitment of muscles. We found that the muscle activations calculated without any reference to synergies can be accurately explained by, on average, four synergies. These synergies are similar to EMG-based synergies. We therefore conclude that task constraints and performance optimization explain synergistic muscle recruitment from a redundant set of muscles.
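Muscle synergies of the kind discussed above are commonly extracted with non-negative matrix factorization. A minimal NumPy sketch using Lee-Seung multiplicative updates is shown below; it is a generic illustration, not the authors' optimization pipeline:

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=1000, seed=0):
    """Factor a nonnegative EMG matrix (muscles x time samples) into
    synergy weights W (muscles x synergies) and activation signals H
    (synergies x time) with Lee-Seung multiplicative updates, so that
    emg ~= W @ H under a Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_synergies)) + 1e-3
    H = rng.random((n_synergies, n_samples)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + eps)
        W *= (emg @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Applied to rectified, low-pass-filtered EMG, the number of synergies (four, on average, in the abstract) is typically chosen by the variance accounted for by the reconstruction.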

  5. Determination of the distribution constants of aromatic compounds and steroids in biphasic micellar phosphonium ionic liquid/aqueous buffer systems by capillary electrokinetic chromatography.

    PubMed

    Lokajová, Jana; Railila, Annika; King, Alistair W T; Wiedmer, Susanne K

    2013-09-20

The distribution constants of some analytes closely connected to the petrochemical industry, between an aqueous phase and a phosphonium ionic liquid phase, were determined by ionic liquid micellar electrokinetic chromatography (MEKC). The phosphonium ionic liquids studied were the water-soluble tributyl(tetradecyl)phosphonium with chloride or acetate as the counter ion. The retention factors were calculated and used for determination of the distribution constants. Calculating the retention factors required the electrophoretic mobilities of the ionic liquids; these were obtained by an iterative process based on a homologous series of alkyl benzoates. Calculation of the distribution constants required information on the phase ratio of the systems, and hence the critical micelle concentrations (CMC) of the ionic liquids. The CMCs were calculated using a method based on PeakMaster simulations, using the electrophoretic mobilities of system peaks. The resulting distribution constants for the neutral analytes between the ionic liquid and the aqueous (buffer) phase were compared with octanol-water partitioning coefficients. The results indicate that factors other than simple hydrophobic interactions affect the distribution of analytes between the phases. Copyright © 2013 Elsevier B.V. All rights reserved.
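The chain of quantities in this abstract (migration times → retention factor → phase ratio → distribution constant) follows standard MEKC relations, sketched below with illustrative numbers. The variable names and the simple phase-ratio model are assumptions for illustration, not taken from the paper:

```python
def retention_factor(t_r, t_0, t_mc):
    """Standard MEKC retention factor from the analyte migration time
    t_r, the EOF-marker time t_0, and the micelle-marker time t_mc."""
    return (t_r - t_0) / (t_0 * (1.0 - t_r / t_mc))

def distribution_constant(k, c_surfactant, cmc, v_molar):
    """Distribution constant K = k / beta, with the phase ratio beta
    estimated from the surfactant (here: ionic liquid) concentration
    above the CMC and its partial molar volume v_molar."""
    micellar_fraction = v_molar * (c_surfactant - cmc)  # pseudophase volume fraction
    beta = micellar_fraction / (1.0 - micellar_fraction)
    return k / beta
```

This makes the abstract's dependency explicit: without the CMC (from the PeakMaster-based system-peak method) the phase ratio, and hence K, cannot be computed.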

  6. Hyperspherical close-coupling calculations for charge-transfer cross sections in He2++H(1s) collisions at low energies

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Nan; Le, Anh-Thu; Morishita, Toru; Esry, B. D.; Lin, C. D.

    2003-05-01

    A theory for ion-atom collisions at low energies based on the hyperspherical close-coupling (HSCC) method is presented. In hyperspherical coordinates the wave function is expanded in analogy to the Born-Oppenheimer approximation where the adiabatic channel functions are calculated with B-spline basis functions while the coupled hyperradial equations are solved by a combination of R-matrix propagation and the slow/smooth variable discretization method. The HSCC method is applied to calculate charge-transfer cross sections for He2++H(1s)→He+(n=2)+H+ reactions at center-of-mass energies from 10 eV to 4 keV. The results are shown to be in general good agreement with calculations based on the molecular orbital (MO) expansion method where electron translation factors (ETF’s) or switching functions have been incorporated in each MO. However, discrepancies were found at very low energies. It is shown that the HSCC method can be used to study low-energy ion-atom collisions without the need to introduce the ad hoc ETF’s, and the results are free from ambiguities associated with the traditional MO expansion approach.

  7. Implementation of Online Promethee Method for Poor Family Change Rate Calculation

    NASA Astrophysics Data System (ADS)

    Aji, Dhady Lukito; Suryono; Widodo, Catur Edi

    2018-02-01

This research implemented an online calculation of the poor-family change rate using the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE). The system is useful for monitoring poverty in a region as well as for administrative services related to the poverty rate. It consists of client computers and servers connected via the internet. Poor-family residence data were obtained from the government. In addition, survey data are entered through the client computer in each administrative village, along with the 23 input criteria established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weight is used to determine poverty status. The PROMETHEE output can also be used to rank the poverty of the population registered on the server based on the net flow value. The poverty change rate is calculated by comparing the current poverty rate with the previous poverty rate. The results can be viewed online and in real time on the server through numbers and graphs. The test results show that the system can classify poverty status, calculate the poverty change rate, and determine the poverty value and ranking of each resident.
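PROMETHEE ranking by net outranking flow can be sketched as follows. This minimal version uses the "usual" (strict) preference function and made-up scores and weights, whereas the deployed system uses the 23 government-defined criteria:

```python
import numpy as np

def promethee_net_flows(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function: on each criterion, alternative a is fully preferred to b
    if its score is strictly higher, otherwise not at all.
    scores: (n_alternatives, n_criteria); weights should sum to 1."""
    n = scores.shape[0]
    phi_plus = np.zeros(n)   # how strongly each alternative outranks the rest
    phi_minus = np.zeros(n)  # how strongly it is outranked
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = float(np.sum(weights * (scores[a] > scores[b])))
            phi_plus[a] += pref_ab
            phi_minus[b] += pref_ab
    return (phi_plus - phi_minus) / (n - 1)  # net flow, in [-1, 1]
```

Sorting by descending net flow yields the complete PROMETHEE II ranking; a threshold on the net flow could then be used to assign a poverty status, as the abstract describes.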

  8. Prediction and Analysis of CO2 Emission in Chongqing for the Protection of Environment and Public Health

    PubMed Central

    Yang, Shuai; Wang, Yu; Ao, Wengang; Bai, Yun; Li, Chuan

    2018-01-01

Based on the consumption of fossil energy, the CO2 emissions of Chongqing are calculated and analyzed from 1997 to 2015 in this paper. Based on the calculation results, the consumption of fossil fuels and the corresponding CO2 emissions of Chongqing in 2020 are predicted, and the supporting data and corresponding policies are provided for the government of Chongqing to reach its goal as the economic unit of low-carbon emission in the ‘13th Five-Year Plan’. The results of the analysis show that there is a rapid decreasing trend of CO2 emissions in Chongqing during the ‘12th Five-Year Plan’, which is caused by the adjustment policy of the energy structure in Chongqing. Therefore, the analysis and prediction in this paper are primarily based on the adjustment of Chongqing’s coal energy consumption. At the initial stage, the support vector regression (SVR) method is applied to predict the non-coal fossil energy consumption and the corresponding CO2 emissions of Chongqing in 2020. Then, with the energy intensity of 2015 and the official target of CO2 intensity in 2020, the total fossil energy consumption and CO2 emissions of Chongqing in 2020 are predicted respectively. From the above calculation results, the coal consumption and its corresponding CO2 emissions of Chongqing in 2020 are determined. To achieve the goal of CO2 emissions of Chongqing in 2020, the coal consumption level and energy intensity of Chongqing are calculated, and the adjustment strategies for the energy consumption structure in Chongqing are proposed. PMID:29547505

  9. Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis

    NASA Astrophysics Data System (ADS)

    Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro

    2017-04-01

The 2011 Great East Japan Earthquake (GEJE) has shown that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating standing land structures and damage to them requires highly precise evaluation of three-dimensional fluid motion - an expensive process. Our research goals were thus to develop a STOC-CADMAS system (Arikawa and Tomita, 2016) coupled with structure analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure as hydrostatic and calculates the wave profiles from an equation of continuity, thereby lowering calculation cost; it primarily covers the region from the epicenter to shallow water. STOC-IC solves for pressure using a Poisson equation to account for the shallower, more complex topography near a port, while still limiting computation cost by setting the water surface from an equation of continuity. CS3D solves a Navier-Stokes equation and sets the water surface by VOF to handle the runup area, with its complex surfaces of overflows and bores. STR performs the structure analysis, including the geotechnical analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay: almost all breakwaters were washed away in the simulation, consistent with the actual damage at Kamaishi Bay.
REFERENCES: T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of J.S.C.E., Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.

  10. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, the concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced to 86.95%, 86.53% and 34.99%, 32.30%, respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  11. Neutron skyshine calculations for the PDX tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, F.J.; Nigg, D.W.

    1979-01-01

The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce air scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.

  12. Search for promising compositions for developing new multiphase casting alloys based on Al-Cu-Mg matrix using thermodynamic calculations and mathematic simulation

    NASA Astrophysics Data System (ADS)

    Zolotorevskii, V. S.; Pozdnyakov, A. V.; Churyumov, A. Yu.

    2012-11-01

A calculation-experimental study is carried out to improve the approach to searching for new alloying systems in order to develop new casting alloys, using mathematical simulation methods in combination with thermodynamic calculations. The results show the high effectiveness of the applied methods. The real possibility of selecting promising compositions with the required set of casting and mechanical properties is exemplified by alloys with thermally hardened Al-Cu and Al-Cu-Mg matrices and poorly soluble additives that form eutectic components, using mainly calculation methods and a minimum number of experiments.

  13. Understanding the photoluminescence characteristics of Eu3+-doped double-perovskite by electronic structure calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Binita; Halder, Saswata; Sinha, T. P.

    2016-05-23

    Europium-doped luminescent barium samarium tantalum oxide Ba{sub 2}SmTaO{sub 6} (BST) has been investigated by first-principles calculation, and the crystal structure, electronic structure, and optical properties of pure BST and Eu-doped BST have been examined and compared. Based on the calculated results, the luminescence properties and mechanism of Eu-doped BST have been discussed. In Eu-doped BST there is an impurity energy band at the Fermi level, formed by seven spin-up energy levels of Eu, which acts as the luminescent centre, as is evident from the band structure calculations.

  14. Optical properties of B12P2 crystals: Ab initio calculation and EELS

    NASA Astrophysics Data System (ADS)

    Reshetniak, V. V.; Mavrin, B. N.; Medvedev, V. V.; Perezhogin, I. A.; Kulnitskiy, B. A.

    2018-05-01

    We report an experimental and theoretical investigation of the electronic structure and optical properties of B12P2 crystals in the energy range up to 60 eV. Experimental studies are performed by electron energy loss spectroscopy, and theoretical studies are carried out using density functional theory and the GW approximation. The calculated energy dependence of the loss function is in agreement with experiment. Based on the results of the calculations, we determine the optical properties of B12P2 crystals and investigate their anisotropy. The dispersion and density of electronic states are calculated and analyzed.

  15. Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study.

    PubMed

    Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald

    2015-03-21

    Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC systematically underestimated the target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC.
Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms with accuracy comparable to state-of-the-art CPU-based MC codes.
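
The timing figures quoted above imply a very large speedup; as a quick sanity check (an illustrative calculation, not part of the original study), the ratio follows directly from the per-million-particle figures:

```python
# Quick sanity check on the timings quoted above: speedup of gPMC (GPU)
# over TOPAS (CPU) for the same number of simulated particles.
def speedup(cpu_hours_per_million: float, gpu_seconds_per_million: float) -> float:
    """Ratio of CPU time to GPU time per million particles."""
    return cpu_hours_per_million * 3600.0 / gpu_seconds_per_million

# 4 CPU hours vs 2.4 s per million particles -> roughly a 6000x speedup
print(round(speedup(4.0, 2.4)))  # → 6000
```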

  16. Assessment of Some Atomization Models Used in Spray Calculations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Bulzin, Dan

    2011-01-01

    The paper presents the results of a validation study undertaken as part of NASA's fundamental aeronautics initiative on high-altitude emissions, in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As part of this investigation we have undertaken validation based on four different cases to investigate the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from a cylindrical nozzle, (2) a liquid jet atomizing in a subsonic cross flow, (3) a Parker-Hannifin pressure-swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in aerospace applications. The validation is based on 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results provide reasonable agreement for both mean droplet sizes (D32) and average droplet velocities, but mostly underestimate the droplet sizes in the inner radial region of a cylindrical jet.

  17. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth of calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables approaching their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-scale ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.

  18. Estimation of Critical Gap Based on Raff's Definition

    PubMed Central

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that vehicle arrivals in the major stream and in the minor stream are independent, and that the headways of the major stream follow an M3 distribution. Based on Raff's definition of the critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model; both models use the total rejected coefficient. The different calculation models are compared by simulation and the new models are found to be valid. The conclusions reveal that the M3 definition model is simple and valid. The revised Raff's model strictly obeys Raff's definition of the critical gap, its field of application is more extensive than that of the original Raff's model, and it yields more accurate results. The M3 definition model and the revised Raff's model give consistent results. PMID:25574160

  19. Estimation of critical gap based on Raff's definition.

    PubMed

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that vehicle arrivals in the major stream and in the minor stream are independent, and that the headways of the major stream follow an M3 distribution. Based on Raff's definition of the critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model; both models use the total rejected coefficient. The different calculation models are compared by simulation and the new models are found to be valid. The conclusions reveal that the M3 definition model is simple and valid. The revised Raff's model strictly obeys Raff's definition of the critical gap, its field of application is more extensive than that of the original Raff's model, and it yields more accurate results. The M3 definition model and the revised Raff's model give consistent results.
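
Raff's graphical definition underlying both abstracts above can be sketched numerically: the critical gap is the headway at which the count of accepted gaps shorter than t equals the count of rejected gaps longer than t. This is an illustrative implementation only, not the authors' M3 or revised models:

```python
import numpy as np

def raff_critical_gap(accepted, rejected, n_grid=2000):
    """Raff's graphical definition: the critical gap is the headway t at
    which the number of accepted gaps shorter than t equals the number of
    rejected gaps longer than t (intersection of the two curves)."""
    accepted = np.sort(np.asarray(accepted, dtype=float))
    rejected = np.sort(np.asarray(rejected, dtype=float))
    grid = np.linspace(0.0, max(accepted[-1], rejected[-1]), n_grid)
    n_acc_shorter = np.searchsorted(accepted, grid)                  # accepted gaps < t
    n_rej_longer = rejected.size - np.searchsorted(rejected, grid)   # rejected gaps > t
    return float(grid[np.argmin(np.abs(n_acc_shorter - n_rej_longer))])
```

With field data the two step curves cross once, and the returned grid point approximates that intersection.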

  20. A colored petri nets based workload evaluation model and its validation through Multi-Attribute Task Battery-II.

    PubMed

    Wang, Peng; Fang, Weining; Guo, Beiyuan

    2017-04-01

    This paper proposes a colored-Petri-nets-based workload evaluation model. A formal interpretation of workload is first introduced, based on the mapping of Petri net components to task elements. A Petri-net-based description of Multiple Resources theory is given from a new perspective. A new application of the VACP rating scales, named the V/A-C-P unit, and a definition of colored transitions are proposed to build a model of the task process. The calculation of workload has four main steps: determine the tokens' initial positions and values; calculate the weights of the directed arcs on the basis of the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out with the Multi-Attribute Task Battery-II software. Our results show a strong correlation between the model values and NASA Task Load Index scores (r=0.9513). In addition, the method can also distinguish behavioral characteristics between different people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(Nlog N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
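
The charge-charge part of a GB calculation that the treecode accelerates is, in the widely used Still form, a pairwise sum over an effective interaction distance f_GB. Below is a direct O(N^2) reference sketch of that sum (illustrative only; the paper's tGB replaces this summation with a treecode and obtains the Born radii via the GBr6 R6 descreening, which is not reproduced here):

```python
import numpy as np

def gb_energy(q, born_radii, coords, eps_in=1.0, eps_out=78.5):
    """Generalized Born polarization energy in the common Still form,
    f_GB = sqrt(r_ij**2 + R_i*R_j*exp(-r_ij**2 / (4*R_i*R_j))).
    The double sum includes i == j (Born self terms).  Result is in units
    of charge**2/length; multiply by ~332.06 for kcal/mol with charges in e
    and distances in Angstroms."""
    q = np.asarray(q, dtype=float)
    R = np.asarray(born_radii, dtype=float)
    x = np.asarray(coords, dtype=float)
    r2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    RiRj = R[:, None] * R[None, :]
    f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
    return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * np.sum(np.outer(q, q) / f_gb)
```

For a single charge this reduces to the Born self energy with f_GB equal to the Born radius, which is a convenient correctness check.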

  2. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straightforward and computationally inexpensive: given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions: the position and orientation of the moving platform with respect to the base are calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is also derived: given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
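
The inexpensive inverse position solution described in the abstract can be sketched in a few lines: each leg length is the distance between a base attachment point and the rigidly transformed platform attachment point. The attachment geometry below is hypothetical, not the VES values:

```python
import numpy as np

def leg_lengths(base_pts, platform_pts, R, t):
    """Inverse position solution for a Stewart platform: for each leg i the
    prismatic actuator length is |R @ p_i + t - b_i|, where p_i is the
    attachment point in the moving-platform frame and b_i the attachment
    point in the base frame."""
    base_pts = np.asarray(base_pts, dtype=float)
    platform_pts = np.asarray(platform_pts, dtype=float)
    world_pts = platform_pts @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)
    return np.linalg.norm(world_pts - base_pts, axis=1)
```

With identical base and platform attachment layouts, the identity rotation, and a pure vertical translation, every leg length equals the translation distance, which makes a simple sanity check.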

  3. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth-surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration obtains a high-precision global DEM by estimating the interferometric parameters using ground control points (GCPs). However, the interferometric parameters are always estimated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method achieves DEM product accuracies better than 2.43 m in flat areas and 6.97 m in mountainous areas, demonstrating the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  4. Accelerating calculations of RNA secondary structure partition functions using GPUs

    PubMed Central

    2013-01-01

    Background RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu.
PMID:24180434

  5. Relativistic impulse approximation analysis of unstable calcium isotopes: {sup 60-74}Ca

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaki, K.

    2009-06-15

    Recent relativistic mean-field calculations have provided nuclear distributions of Ca isotopes with mass numbers 60 through 74. We calculate observables of proton elastic scattering from these unstable isotopes and discuss relations between the observables and the nuclear distributions of such unstable nuclei. The calculations are based on the relativistic impulse approximation (RIA) at incident proton energies from 100 through 500 MeV, where predictions of the RIA have been shown to provide good agreement with experimental data. To validate the use of optimal factorization and first-order calculations at these energies, contributions from the Fermi motion of the target nuclei and multiple scattering are estimated and compared with results calculated without these effects.

  6. Discovery of a diamond-based photonic crystal structure in beetle scales.

    PubMed

    Galusha, Jeremy W; Richey, Lauren R; Gardner, John S; Cha, Jennifer N; Bartl, Michael H

    2008-05-01

    We investigated the photonic crystal structure inside iridescent scales of the weevil Lamprocyphus augustus. By combining a high-resolution structure analysis technique based on sequential focused ion beam milling and scanning electron microscopy imaging with theoretical modeling and photonic band-structure calculations, we discovered a natural three-dimensional photonic structure with a diamond-based crystal lattice operating at visible wavelengths. Moreover, we found that within individual scales, the diamond-based structure is assembled in the form of differently oriented single-crystalline micrometer-sized pixels with only selected lattice planes facing the scales' top surface. A comparison of results obtained from optical microreflectance measurements with photonic band-structure calculations reveals that it is this sophisticated microassembly of the diamond-based crystal lattice that lends Lamprocyphus augustus its macroscopically near angle-independent green coloration.

  7. Forecasting burden of long-term disability from neonatal conditions: results from the Projahnmo I trial, Sylhet, Bangladesh.

    PubMed

    Shillcutt, Samuel D; Lefevre, Amnesty E; Lee, Anne C C; Baqui, Abdullah H; Black, Robert E; Darmstadt, Gary L

    2013-07-01

    The burden of disease resulting from neonatal conditions is substantial in developing countries. From 2003 to 2005, the Projahnmo I programme delivered community-based interventions for maternal and newborn health in Sylhet, Bangladesh. This analysis quantifies burden of disability and incorporates non-fatal outcomes into cost-effectiveness analysis of interventions delivered in the Projahnmo I programme. A decision tree model was created to predict disability resulting from preterm birth, neonatal meningitis and intrapartum-related hypoxia ('birth asphyxia'). Outcomes were defined as the years lost to disability (YLD) component of disability-adjusted life years (DALYs). Calculations were based on data from the Projahnmo I trial, supplemented with values from published literature and expert opinion where data were absent. 195 YLD per 1000 neonates [95% confidence interval (CI): 157-241] were predicted in the main calculation, sensitive to different DALY assumptions, disability weights and alternative model structures. The Projahnmo I home care intervention may have averted 2.0 (1.3-2.8) YLD per 1000 neonates. Compared with calculations based on reductions in mortality alone, the cost-effectiveness ratio decreased by only 0.6% from $105.23 to $104.62 ($65.15-$266.60) when YLD were included, with 0.6% more DALYs averted [total 338/1000 (95% CI: 131-542)]. A significant burden of disability results from neonatal conditions in Sylhet, Bangladesh. Adding YLD has very little impact on recommendations based on cost-effectiveness, even at the margin of programme adoption. This model provides guidance for collecting data on disabilities in new settings.

  8. Bayesian pretest probability estimation for primary malignant bone tumors based on the Surveillance, Epidemiology and End Results Program (SEER) database.

    PubMed

    Benndorf, Matthias; Neubauer, Jakob; Langer, Mathias; Kotter, Elmar

    2017-03-01

    In the diagnostic process of primary bone tumors, patient age, tumor localization and to a lesser extent sex affect the differential diagnosis. We therefore aim to develop a pretest probability calculator for primary malignant bone tumors based on population data taking these variables into account. We access the SEER (Surveillance, Epidemiology and End Results Program of the National Cancer Institute, 2015 release) database and analyze data of all primary malignant bone tumors diagnosed between 1973 and 2012. We record age at diagnosis, tumor localization according to the International Classification of Diseases (ICD-O-3) and sex. We take relative probability of the single tumor entity as a surrogate parameter for unadjusted pretest probability. We build a probabilistic (naïve Bayes) classifier to calculate pretest probabilities adjusted for age, tumor localization and sex. We analyze data from 12,931 patients (647 chondroblastic osteosarcomas, 3659 chondrosarcomas, 1080 chordomas, 185 dedifferentiated chondrosarcomas, 2006 Ewing's sarcomas, 281 fibroblastic osteosarcomas, 129 fibrosarcomas, 291 fibrous malignant histiocytomas, 289 malignant giant cell tumors, 238 myxoid chondrosarcomas, 3730 osteosarcomas, 252 parosteal osteosarcomas, 144 telangiectatic osteosarcomas). We make our probability calculator accessible at http://ebm-radiology.com/bayesbone/index.html . We provide exhaustive tables for age and localization data. Results from tenfold cross-validation show that in 79.8 % of cases the pretest probability is correctly raised. Our approach employs population data to calculate relative pretest probabilities for primary malignant bone tumors. The calculator is not diagnostic in nature. However, resulting probabilities might serve as an initial evaluation of probabilities of tumors on the differential diagnosis list.
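
The naïve Bayes pretest calculation described above can be sketched as follows. The counts and conditional probabilities in this example are made up purely for illustration; the published calculator derives its frequencies from the SEER database:

```python
# Illustrative naive-Bayes pretest probability calculation.  The conditional
# probabilities below are HYPOTHETICAL; the tumor counts echo the abstract.
def pretest_probs(priors, cond, observed):
    """P(tumor | features) proportional to P(tumor) * prod_f P(f | tumor),
    normalized over all candidate tumors."""
    scores = {}
    for tumor, prior in priors.items():
        p = float(prior)
        for f in observed:
            p *= cond[tumor].get(f, 1e-6)  # small floor for unseen feature values
        scores[tumor] = p
    total = sum(scores.values())
    return {tumor: s / total for tumor, s in scores.items()}

priors = {"osteosarcoma": 3730, "chordoma": 1080}          # counts from the abstract
cond = {"osteosarcoma": {"age 10-19": 0.40, "female": 0.45},
        "chordoma": {"age 10-19": 0.02, "female": 0.40}}   # hypothetical values
post = pretest_probs(priors, cond, ["age 10-19"])
```

With these toy numbers, conditioning on a young age band shifts nearly all of the pretest probability toward osteosarcoma, mirroring how the calculator adjusts the unadjusted relative frequencies.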

  9. Self-organized ferromagnetic nanowires in MgO-based magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Seike, Masayoshi; Fukushima, Tetsuya; Sato, Kazunori; Katayama-Yoshida, Hiroshi

    2013-08-01

    The focus of this study is to examine the distribution of defects and defect-induced properties in MgO-based magnetic tunnel junctions (MTJs). To this end, first-principles calculations were performed to estimate the electronic structures and total energies of MgO with various defects by using the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional. From connections drawn between the calculated results and previously reported experimental data, we propose that self-organized ferromagnetic nanowires of magnesium vacancies can be formed in MgO-based MTJs. This self-organization may provide the foundation for a comprehensive understanding of the conductivity, tunnel barriers and quantum oscillations of MgO-based MTJs. Further experimental verification is needed before firm conclusions can be drawn.

  10. Evaluation of steam sterilization processes: comparing calculations using temperature data and biointegrator reduction data and calculation of theoretical temperature difference.

    PubMed

    Lundahl, Gunnel

    2007-01-01

    When the physical F(121.1 °C)-value is calculated by the equation F(121.1 °C) = t x 10^((T - 121.1)/z), the temperature (T), in combination with the z-value, influences the F(121.1 °C)-value exponentially. Because the z-value for spores of Geobacillus stearothermophilus often varies between 6 and 9, the biological F-value (F_Bio) will not always correspond to the F0-value based on temperature records from the sterilization process calculated with a z-value of 10, even if the calibration of both is correct. Consequently, errors in thermocouple calibration and differences in z-values influence the F(121.1 °C)-value exponentially. The paper describes how results from measurements with different z-values can be compared. The first part describes the mathematics of a calculation program, which makes it easy to compare F0-values based on temperature records with F_Bio-values based on analysis of bioindicators such as glycerin-water-suspension sensors. For biological measurements, a suitable bioindicator with a high D121-value can be used (such a bioindicator can be manufactured as described in the article "A Method of Increasing Test Range and Accuracy of Bioindicators-Geobacillus stearothermophilus Spores"). With the mathematics and calculations described in this macro program it is possible to calculate, for every position, the theoretical temperature difference (ΔT_th) needed to explain the difference in results between the thermocouple and the biointegrator. Since the temperature difference is a linear function and constant over the whole process, this value indicates the magnitude of an error. A graph and table from these calculations give a picture of the run. The second part deals with product characteristics, the sterilization processes, and loading patterns. Appropriate safety margins have to be chosen in the development phase of a sterilization process to achieve acceptable safety limits.
Case studies are discussed and experiences are shared.
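
The core F0 computation described in the first part follows directly from the stated equation; a minimal sketch (the paper's macro program additionally handles the bioindicator comparison and the ΔT_th calculation, which are not reproduced here):

```python
def f0_value(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Physical lethality F0 = sum(dt * 10**((T - T_ref) / z)) over a
    temperature record sampled every dt_min minutes.  z = 10 C is the
    convention for temperature-based F0; bioindicator z-values of 6-9
    are why F_Bio can deviate from F0, as discussed above."""
    return sum(dt_min * 10.0 ** ((T - t_ref) / z) for T in temps_c)

print(f0_value([121.1] * 15, 1.0))  # 15 min held at 121.1 C → 15.0
```

A hold 10 °C below the reference contributes only one tenth of the lethality per minute, which is the exponential temperature dependence the abstract emphasizes.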

  11. Real-time acquisition and preprocessing system of transient electromagnetic data based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Zhao, Huinan; Zhang, Shuang; Gu, Lingjia; Sun, Jian

    2014-09-01

    The transient electromagnetic method (TEM) is a long-standing technique for geological exploration. It is widely used in many research fields, such as mineral exploration, hydrogeological survey, engineering exploration, and unexploded-ordnance detection. Traditional measurement systems are often based on ARM, DSP, or FPGA hardware and lack real-time display, data preprocessing, and data playback functions. To overcome these shortcomings, a real-time data acquisition and preprocessing system based on the LabVIEW virtual instrument development platform is proposed in this paper; moreover, a calibration model is established for the TEM system based on a conductivity loop. The test results demonstrate that the system can perform real-time data acquisition and system calibration. For the Transmit-Loop-Receive (TLR) response, the correlation coefficient between the measured and calculated results is 0.987, so the measured results are essentially consistent with the calculated ones. Through subsequent inversion of the TLR response, the signal of an underground conductor was obtained. In complex test environments, abnormal values often appear in the measured data. To solve this problem, an algorithm for detecting and correcting abnormal values is proposed. The test results show that the proposed algorithm can effectively remove severe disturbance signals from the measured transient electromagnetic data.

  12. Determination of the Optimal Fourier Number on the Dynamic Thermal Transmission

    NASA Astrophysics Data System (ADS)

    Bruzgevičius, P.; Burlingis, A.; Norvaišienė, R.

    2016-12-01

    This article presents the results of experimental research on transient heat transfer in a multilayered (heterogeneous) wall. Our non-steady thermal transmission simulation is based on a finite-difference calculation method. The value of the Fourier number reflects the similarity of thermal variation in the conditional layers of an enclosure. Most authors recommend a Fourier number of no more than 0.5 when performing calculations of dynamic (transient) heat transfer. Here, the value of the Fourier number is determined so as to obtain reliable calculation results with optimal accuracy. To compare the simulation with the experimental research, a transient heat transfer calculation spreadsheet was created. Our research has shown that a Fourier number of around 0.5, or even 0.32, is not sufficient (≈17% of the oscillation amplitude) for calculations of transient heat transfer in a multilayered wall. The least distorted calculation results were obtained when the multilayered enclosure was divided into conditional layers with almost equal Fourier number values and when the value of the Fourier number was around 1/6, i.e., approximately 0.17. Statistical deviation analysis using the Statistical Analysis System was applied to assess the accuracy of the spreadsheet calculation, developed on the basis of our established methodology. The mean and median absolute errors, as well as their confidence intervals, have been estimated for the two methods with optimal accuracy (Fo,MDF = 0.177 and Fo,EPS = 0.1633).
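
The grid Fourier number at issue is Fo = αΔt/Δx². A minimal sketch of choosing a time step that hits the recommended Fo ≈ 1/6 (the diffusivity and layer thickness below are illustrative values, not the paper's materials):

```python
def fourier_number(alpha, dt, dx):
    """Grid Fourier number Fo = alpha * dt / dx**2 of an explicit
    finite-difference conduction scheme (alpha: thermal diffusivity)."""
    return alpha * dt / dx ** 2

def time_step_for_fo(alpha, dx, fo_target=1.0 / 6.0):
    """Time step that yields a target Fourier number; the article finds
    Fo around 1/6 optimal, versus the classical stability bound Fo <= 0.5."""
    return fo_target * dx ** 2 / alpha

# hypothetical layer: alpha = 1e-6 m^2/s, dx = 1 cm -> dt of about 16.7 s
dt = time_step_for_fo(1e-6, 0.01)
```

Dividing the wall into conditional layers with nearly equal Fo then amounts to choosing each Δx so that αΔt/Δx² matches across layers for the shared Δt.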

  13. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R

    PubMed Central

    Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis. PMID:27792763

  14. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    PubMed

    Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.
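
As an example of the kind of statistic PopSc computes directly from allele-frequency metadata, gene diversity (expected heterozygosity) can be sketched as follows (illustrative only; PopSc's own implementations and sample-size corrections may differ):

```python
def gene_diversity(freqs):
    """Uncorrected gene diversity (expected heterozygosity) H = 1 - sum(p**2)
    computed directly from allele frequencies -- the kind of intermediate
    metadata PopSc accepts instead of raw sequences.  Sample-size-corrected
    estimators additionally scale by n / (n - 1)."""
    assert abs(sum(freqs) - 1.0) < 1e-9, "allele frequencies must sum to 1"
    return 1.0 - sum(p * p for p in freqs)

print(gene_diversity([0.5, 0.5]))  # two equally common alleles → 0.5
```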

  15. Improving the accuracy of Density Functional Theory (DFT) calculation for homolysis bond dissociation energies of Y-NO bond: generalized regression neural network based on grey relational analysis and principal component analysis.

    PubMed

    Li, Hong Zhi; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2011-01-01

    We propose a generalized regression neural network (GRNN) approach based on grey relational analysis (GRA) and principal component analysis (PCA), denoted GP-GRNN, to improve the accuracy of density functional theory (DFT) calculations of homolysis bond dissociation energies (BDE) of the Y-NO bond. As a demonstration, this combination of quantum chemistry calculation with the GP-GRNN approach has been applied to evaluate the homolysis BDE of 92 Y-NO organic molecules. The results show that the full-descriptor GRNN without GRA and PCA (F-GRNN) and with GRA (G-GRNN) reduce the root-mean-square (RMS) error of the calculated homolysis BDE of the 92 organic molecules from 5.31 to 0.49 and 0.39 kcal mol(-1), respectively, for the B3LYP/6-31G(d) calculation. The newly developed GP-GRNN approach further reduces the RMS error to 0.31 kcal mol(-1). Thus, the GP-GRNN correction on top of B3LYP/6-31G(d) can improve the accuracy of calculated homolysis BDE in quantum chemistry and can predict homolysis BDE that cannot be obtained experimentally.
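    The GRNN at the core of such an approach is essentially a Gaussian-kernel smoother over molecular descriptors. A minimal sketch follows, with hypothetical one-dimensional descriptor inputs; the authors' full GP-GRNN pipeline additionally applies GRA and PCA to select and compress the descriptors:

    ```python
    import numpy as np

    def grnn_predict(x_train, y_train, x_query, sigma=1.0):
        """Generalized regression neural network (a Nadaraya-Watson kernel
        smoother): each prediction is the Gaussian-weighted average of the
        training targets. Trained on DFT-minus-reference errors, it yields
        a correction to add to raw B3LYP/6-31G(d) values."""
        d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    # Hypothetical 1-D descriptor values and homolysis-BDE errors (kcal/mol):
    x = np.array([[0.0], [1.0], [2.0]])
    err = np.array([1.0, 2.0, 3.0])
    correction = grnn_predict(x, err, np.array([[1.0]]), sigma=0.5)
    ```

    The single smoothing parameter sigma is what makes GRNN training cheap compared with back-propagation networks.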

  16. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. 
The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
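    The tolerance scheme proposed above can be expressed as a simple pass/fail check per verification point; the thresholds follow the abstract, while the function itself is an illustrative sketch:

    ```python
    def imrt_point_passes(dev_percent, dev_cgy, off_axis_cm=0.0, low_dose=False):
        """Pass/fail check for one verification point: within 3% (of the
        prescribed dose) or 6 cGy in standard regions; relaxed to 5% or
        10 cGy for off-axis points beyond 5 cm and for low-dose regions."""
        if off_axis_cm > 5.0 or low_dose:
            return abs(dev_percent) <= 5.0 or abs(dev_cgy) <= 10.0
        return abs(dev_percent) <= 3.0 or abs(dev_cgy) <= 6.0
    ```

    The OR between the percentage and absolute criteria reflects the observation that large percentage deviations often correspond to only a few cGy in absolute terms.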

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H; Guerrero, M; Chen, S

    Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical Systems, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of the calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator at SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between the TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements for the five cutouts: <1% difference at 100 cm SSD and 0.5-2.7% at 110 cm SSD. Conclusions: Based on comparisons with measurements, the TG-71-based computation method and the Mobius3D program both produce reasonably accurate MU calculations for electron-beam therapy.
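    A highly simplified sketch of a TG-71-style electron MU calculation at extended SSD is shown below. The parameter names and the exact factorization are illustrative assumptions, not the report's full formalism; f_air stands for the measured air-gap correction factor mentioned above:

    ```python
    def electron_mu(dose_cgy, ref_dose_rate=1.0, output_factor=1.0,
                    pdd_percent=100.0, f_air=1.0, ssd_ref=100.0,
                    d_max_cm=2.0, gap_cm=0.0):
        """Simplified TG-71-style electron MU calculation:
        MU = D / (D'_ref * OF * PDD/100 * f_air * inverse-square),
        where f_air is the measured air-gap correction factor and the
        inverse-square term accounts for the extended SSD. All names and
        the exact factorization are illustrative."""
        inv_sq = ((ssd_ref + d_max_cm) / (ssd_ref + d_max_cm + gap_cm)) ** 2
        return dose_cgy / (ref_dose_rate * output_factor
                           * (pdd_percent / 100.0) * f_air * inv_sq)
    ```

    At the nominal 100 cm SSD (gap_cm=0) the inverse-square term is unity, and the MU reduce to dose divided by the calibrated output chain; a 10 cm air gap increases the required MU.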

  18. Child-Level Predictors of Responsiveness to Evidence-Based Mathematics Intervention.

    PubMed

    Powell, Sarah R; Cirino, Paul T; Malone, Amelia S

    2017-07-01

    We identified child-level predictors of responsiveness to 2 types of mathematics intervention (calculation and word-problem) among 2nd-grade children with mathematics difficulty. Participants were 250 children in 107 classrooms in 23 schools, pretested on mathematics and general cognitive measures and posttested on mathematics measures. Classrooms were randomly assigned to calculation intervention, word-problem intervention, or business-as-usual control. Intervention lasted 17 weeks. Path analyses indicated that scores on working memory and language comprehension assessments moderated responsiveness to calculation intervention. No moderators were identified for responsiveness to word-problem intervention. Across both intervention groups and the control group, attentive behavior predicted both outcomes. Initial calculation skill predicted the calculation outcome, and initial language comprehension predicted word-problem outcomes. These results indicate that screening for calculation intervention should include a focus on working memory, language comprehension, attentive behavior, and calculations. Screening for word-problem intervention should focus on attentive behavior and word problems.

  19. Rapid Parallel Calculation of shell Element Based On GPU

    NASA Astrophysics Data System (ADS)

    Wang, Jian Hua; Li, Guang Yao; Li, Sheng

    2010-06-01

    Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculations using a modern graphics processing unit (GPU) and programmable rendering tools is put forward. The method devises a representation of element information suited to the GPU, converts all element calculations into a rendering process, carries out the internal-force calculation for all elements in this way, and overcomes the low level of parallelism seen previously when running on a single computer. Studies show that this method can greatly improve efficiency and shorten calculation time. Results of simulation calculations on the elasticity problem of a large number of shell elements in sheet metal showed that GPU-based parallel calculation is faster than its CPU-based counterpart. The approach is useful and efficient for solving practical engineering problems.

  20. A novel method for calculating and measuring the second-order buoyancy experienced by a magnet immersed in magnetic fluid

    NASA Astrophysics Data System (ADS)

    Yu, Jun; Hao, Du; Li, Decai

    2018-01-01

    The phenomenon whereby an object whose density is greater than that of a magnetic fluid can be suspended stably in the fluid under a magnetic field is one of the peculiar properties of magnetic fluids. Applications based on this property include sensors, actuators, dampers, and positioning systems. The calculation and measurement of the magnetic levitation force of magnetic fluid is therefore of vital importance. This paper concerns the peculiar second-order buoyancy experienced by a magnet immersed in magnetic fluid. The expression for calculating the second-order buoyancy was derived, and a novel method for calculating and measuring it was proposed based on this expression. The second-order buoyancy was calculated by ANSYS and measured experimentally using the novel method. To verify the method, the second-order buoyancy was also measured experimentally with a nonmagnetic rod attached to the top surface of the magnet. The results of calculations and experiments show that the novel method for calculating the second-order buoyancy is correct, with high accuracy. In addition, the main sources of error were studied, including magnetic shielding by the magnetic fluid and the movement of magnetic fluid in a nonuniform magnetic field.

  1. Explicit area-based accuracy assessment for mangrove tree crown delineation using Geographic Object-Based Image Analysis (GEOBIA)

    NASA Astrophysics Data System (ADS)

    Kamal, Muhammad; Johansen, Kasper

    2017-10-01

    Effective mangrove management requires spatially explicit information in the form of a mangrove tree crown map as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity, measuring the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry, and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from several aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA), and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high-spatial-resolution aerial photograph (7.5 cm pixel size). Ten random points, each with a 10 m radius circular buffer, were created for the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparison. In this case, the area-based accuracy assessment yielded 64% OQ and 68% OA. The overall quality expresses the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. The overall accuracy of 68%, on the other hand, was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
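    Given the overlap areas between classified and reference crowns, the metrics above reduce to simple area ratios. A minimal sketch (variable names are illustrative):

    ```python
    def area_accuracy(tp_area, fp_area, fn_area):
        """Area-based accuracy metrics from overlap areas between the
        classified tree crowns and the reference map:
          OQ = TP / (TP + FP + FN)   (overall quality)
          UA = TP / (TP + FP)        (user's accuracy; commission error)
          PA = TP / (TP + FN)        (producer's accuracy; omission error)
        tp_area: area classified as crown AND reference crown;
        fp_area: classified crown but reference gap (commission);
        fn_area: reference crown missed by the classification (omission)."""
        oq = tp_area / (tp_area + fp_area + fn_area)
        ua = tp_area / (tp_area + fp_area)
        pa = tp_area / (tp_area + fn_area)
        return oq, ua, pa
    ```

    Because OQ penalizes both commission and omission in one number, it is always less than or equal to UA and PA.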

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil-kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can be used directly for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose-volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high-dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20%, were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
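    The voxel-based evaluation described above (global deviations over the >80% region, local deviations over the >20% region) can be sketched as follows, assuming both dose arrays live on a common grid; the thresholds follow the paper, the code is illustrative:

    ```python
    import numpy as np

    def dose_deviation_stats(d_indep, d_tps, d_max, high=0.8, low=0.2):
        """Mean global deviation (percent of d_max) over voxels above
        high*d_max, and mean local deviation (percent of the local TPS
        dose) over voxels above low*d_max, comparing an independent dose
        calculation with the TPS dose on a common grid."""
        d_indep = np.asarray(d_indep, dtype=float)
        d_tps = np.asarray(d_tps, dtype=float)
        hi = d_tps > high * d_max
        lo = d_tps > low * d_max
        global_dev = 100.0 * (d_indep[hi] - d_tps[hi]).mean() / d_max
        local_dev = 100.0 * ((d_indep[lo] - d_tps[lo]) / d_tps[lo]).mean()
        return global_dev, local_dev
    ```

    The distinction matters: a 1 cGy difference is negligible globally but can be a large local percentage in a low-dose voxel, which is why the local statistic is restricted to doses above 20%.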

  3. An analytical model for calculating microdosimetric distributions from heavy ions in nanometer site targets.

    PubMed

    Czopyk, L; Olko, P

    2006-01-01

    The analytical model of Xapsos used for calculating microdosimetric spectra is based on the observation that straggling of energy loss can be approximated by a log-normal distribution of energy deposition. The model was applied to calculate microdosimetric spectra in spherical targets of nanometer dimensions from heavy ions at energies between 0.3 and 500 MeV amu(-1). We recalculated the originally assumed 1/E(2) initial delta-electron spectrum by applying the continuous slowing down approximation for secondary electrons. We also modified the energy deposition from electrons of energy below 100 keV, taking into account the effective path length of the scattered electrons. The results of our model calculations agree favourably with the results of Monte Carlo track structure simulations using MOCA-14 for light ions (Z = 1-8) of energies ranging from E = 0.3 to 10.0 MeV amu(-1), as well as with the results of Nikjoo for a wall-less proportional counter (Z = 18).
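    The central ingredient of the Xapsos model, log-normal straggling of energy deposition, can be sampled directly once a mean and relative variance are specified; the parameterization below is a generic illustration, not the paper's fitted values:

    ```python
    import numpy as np

    def lognormal_deposits(mean_ev, rel_var, n, rng=None):
        """Sample n energy depositions (eV) from a log-normal straggling
        distribution with the given mean and relative variance
        (variance / mean^2), the central approximation of the Xapsos
        model. Moment matching: if X ~ LogNormal(mu, sigma), then
        E[X] = exp(mu + sigma^2/2) and Var[X]/E[X]^2 = exp(sigma^2) - 1."""
        if rng is None:
            rng = np.random.default_rng(0)
        sigma2 = np.log(1.0 + rel_var)          # log-space variance
        mu = np.log(mean_ev) - 0.5 * sigma2     # log-space mean
        return rng.lognormal(mu, np.sqrt(sigma2), n)
    ```

    Binning such samples by deposited energy yields the single-event microdosimetric spectrum for the chosen target size.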

  4. Development of methods for calculating basic features of the nuclear contribution to single event upsets under the effect of protons of moderately high energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chechenin, N. G., E-mail: chechenin@sinp.msu.ru; Chuvilskaya, T. V.; Shirokova, A. A.

    2015-10-15

    As a continuation and development of previous studies by our group devoted to the investigation of nuclear reactions induced by protons of moderately high energy (between 10 and 400 MeV) in silicon, aluminum, and tungsten atoms, this article reports results obtained by exploring nuclear reactions on atoms of copper, one of the most important components of contact pads and pathways in modern and future ultralarge-scale integration circuits, especially in three-dimensional topology. The nuclear reactions in question lead to the formation of mass and charge spectra of recoil nuclei ranging from heavy target nuclei down to helium and hydrogen. The kinetic-energy spectra of reaction products are calculated. The results of the calculations based on the procedure developed by our group are compared with the results of calculations and experiments performed by other authors.

  5. Mapping the conduction band edge density of states of γ-In2Se3 by diffuse reflectance spectra

    NASA Astrophysics Data System (ADS)

    Kumar, Pradeep; Vedeshwar, Agnikumar G.

    2018-03-01

    It is demonstrated that the measured diffuse reflectance spectra of γ-In2Se3 can be used to map the conduction band edge density of states through Kubelka-Munk analysis. The Kubelka-Munk function derived from the measured spectra closely mimics the calculated density of states in the vicinity of the conduction band edge. The density of states was calculated using a first-principles approach yielding the structural, electronic, and optical properties. Various functionals were tested, and only the Tran-Blaha modified Becke-Johnson (TB-mBJ) functional gave a band gap closest to the experimental result. The electronic and optical properties were therefore calculated using the FP-LAPW + lo approach within the density functional theory formalism, implementing only the TB-mBJ functional. The electron and hole effective masses were calculated as me* = 0.25 m0 and mh* = 1.11 m0, respectively. The optical properties clearly indicate the anisotropic nature of γ-In2Se3.

  6. Morphology of the winter anomaly in NmF2 and Total Electron Content

    NASA Astrophysics Data System (ADS)

    Yasyukevich, Yury; Ratovsky, Konstantin; Yasyukevich, Anna; Klimenko, Maksim; Klimenko, Vladimir; Chirik, Nikolay

    2017-04-01

    We analyzed the manifestation of the winter anomaly in the F2 peak electron density (NmF2) and Total Electron Content (TEC) based on observation data and model calculation results. For the analysis we used 1998-2015 TEC Global Ionospheric Maps (GIM), NmF2 data from ground-based ionosondes, and radio occultation data from COSMIC, CHAMP, and GRACE. We used the Global Self-consistent Model of the Thermosphere, Ionosphere, and Protonosphere (GSM TIP) and the International Reference Ionosphere model (IRI-2012). Based on the observation data and model calculation results, we constructed maps of the winter anomaly intensity in TEC and NmF2 for different solar and geomagnetic activity levels. The winter anomaly intensity was found to be higher in NmF2 than in TEC according to both observation and modeling. In this report we show the similarities and differences in the winter anomaly as revealed in experimental data and model results.

  7. An improvement in the calculation of the efficiency of oxidative phosphorylation and rate of energy dissipation in mitochondria

    NASA Astrophysics Data System (ADS)

    Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman

    2014-12-01

    The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several works, there are only a few brief treatments of this subject based on mathematical models. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new findings on the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.

  8. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for multiuser chaos-based DS-CDMA systems is presented in this paper. For a more realistic communication system, a slowly fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit-energy distribution, this approach gives accurate results at a low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit-energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which demonstrate the accuracy of our approach.
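    The key idea, averaging the Gaussian error probability over the bit-energy distribution of the chaotic spreading sequence, can be sketched for a single user in AWGN (a simplification of the paper's multiuser multipath setting):

    ```python
    import math

    def q_func(x):
        """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def chaos_ber(bit_energies, n0):
        """Bit error rate with a chaotic spreading sequence: because the
        bit energy Eb varies from bit to bit, the BER is the average of
        Q(sqrt(2*Eb/N0)) over the empirical bit-energy distribution,
        rather than a single Q value at the mean energy."""
        return sum(q_func(math.sqrt(2.0 * eb / n0))
                   for eb in bit_energies) / len(bit_energies)
    ```

    With a constant bit energy this collapses to the classical BPSK expression Q(sqrt(2*Eb/N0)); the spread of the chaotic bit energies is exactly what the Gaussian-approximation methods the paper improves on fail to capture.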

  9. Phase gradient algorithm based on co-axis two-step phase-shifting interferometry and its application

    NASA Astrophysics Data System (ADS)

    Wang, Yawei; Zhu, Qiong; Xu, Yuanyuan; Xin, Zhiduo; Liu, Jingye

    2017-12-01

    A phase gradient method based on co-axis two-step phase-shifting interferometry is used to reveal detailed information about a specimen. In this method, the phase gradient distribution is obtained by calculating only the first-order derivative and the radial Hilbert transform of the intensity difference between the two phase-shifted interferograms. The feasibility and accuracy of this method were verified by simulation results for a polystyrene sphere and a red blood cell. The results demonstrate that the phase gradient is sensitive to changes in refractive index and morphology. Because phase retrieval and tedious phase unwrapping are not required, the calculation is fast. In addition, co-axis interferometry has high spatial resolution.

  10. Research on flow stress model and dynamic recrystallization model of X12CrMoWVNbN10-1-1 steel

    NASA Astrophysics Data System (ADS)

    Sui, Da-shan; Wang, Wei; Fu, Bo; Cui, Zhen-shan

    2013-05-01

    The plastic deformation behavior of X12CrMoWVNbN10-1-1 ferritic heat-resistant steel was studied systematically at high temperature. Stress-strain curves were measured at temperatures of 950-1250°C and strain rates of 0.0005-0.1 s-1 on a Gleeble thermo-mechanical simulator. A flow stress model and a dynamic recrystallization model were established based on the Laasraoui two-stage model. The activation energy was calculated and the model parameters were determined from the experimental results and the Sellars creep equation. Verification showed that the calculated results agree well with the experimental data.
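    The Sellars creep equation used in such work relates strain rate, temperature, and flow stress through the Zener-Hollomon parameter. A sketch with illustrative material constants (not the paper's fitted values for this steel):

    ```python
    import math

    R_GAS = 8.314  # gas constant, J/(mol*K)

    def zener_hollomon(strain_rate, temp_k, q_act):
        """Temperature-compensated strain rate Z = eps_dot * exp(Q/(R*T)),
        with activation energy Q in J/mol."""
        return strain_rate * math.exp(q_act / (R_GAS * temp_k))

    def sellars_stress(z, a_const, alpha, n):
        """Flow stress from the Sellars creep equation
        eps_dot = A * sinh(alpha*sigma)^n * exp(-Q/(R*T)), inverted as
        sigma = (1/alpha) * asinh((Z/A)^(1/n))."""
        return math.asinh((z / a_const) ** (1.0 / n)) / alpha

    # Illustrative constants (NOT fitted values for X12CrMoWVNbN10-1-1):
    z = zener_hollomon(0.01, 1273.0, 400e3)      # 0.01 1/s at 1000 °C
    sigma = sellars_stress(z, 1e15, 0.012, 5.0)  # flow stress, MPa scale
    ```

    Fitting A, alpha, n, and Q against the measured curves at all temperature/strain-rate combinations is exactly the parameter-determination step the abstract describes.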

  11. Theoretical and experimental research on laser-beam homogenization based on metal gauze

    NASA Astrophysics Data System (ADS)

    Liu, Libao; Zhang, Shanshan; Wang, Ling; Zhang, Yanchao; Tian, Zhaoshuo

    2018-03-01

    The homogenization of CO2 laser heating by means of a metal gauze is studied theoretically and experimentally. The light-field distribution of an expanded beam passing through the metal gauze was numerically calculated with diffractive optical theory, and comparison of the results with the situation without the gauze shows that the method is effective. Experimentally, using a 30 W DC-discharge laser as the source and expanding the beam with a concave lens, beam intensity distributions recorded on thermal paper with and without the metal gauze were compared, and complementary experiments with a thermal imager were performed. The experimental results were consistent with the theoretical calculation, and all of these show that the homogeneity of CO2 laser heating can be enhanced by a metal gauze.

  12. Algebraic model checking for Boolean gene regulatory networks.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular arithmetic and Groebner basis (GB) computation in Boolean rings are used for solving problems in Boolean gene regulatory networks (BNs). In contrast to other known algebraic approaches, the degree of the intermediate polynomials during the calculation of Groebner bases with our method never grows, resulting in a significant improvement in running time and memory consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present experimental results on finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising: our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
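    For comparison, the attractors of a small Boolean network can be found by exhaustive state-space search; this brute-force reference (not the Groebner-basis method itself, which avoids enumerating all 2^n states) illustrates what the algebraic approach computes:

    ```python
    from itertools import product

    def attractors(update, n):
        """Exhaustive attractor search for an n-gene Boolean network under
        synchronous update: follow the trajectory from every state until it
        revisits a state, then record the cycle it fell into."""
        found = set()
        for state in product((0, 1), repeat=n):
            seen = {}
            while state not in seen:
                seen[state] = len(seen)
                state = update(state)
            start = seen[state]                 # index of the cycle's first state
            cycle = tuple(sorted(s for s, i in seen.items() if i >= start))
            found.add(cycle)
        return found

    # Toy 2-gene network x1' = x2, x2' = x1:
    # two fixed points (0,0), (1,1) and one 2-cycle {(0,1), (1,0)}.
    swap = lambda s: (s[1], s[0])
    print(sorted(attractors(swap, 2)))
    ```

    The exponential cost of this enumeration is precisely why a symbolic method whose intermediate polynomial degrees never grow is attractive for larger networks.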

  13. Trajectory And Heating Of A Hypervelocity Projectile

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.

    1992-01-01

    Technical paper presents derivation of approximate, closed-form equation for relationship between velocity of projectile and density of atmosphere. Results of calculations based on approximate equation agree well with results from numerical integrations of exact equations of motion. Comparisons of results presented in series of graphs.

  14. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (k eff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of the depletion and criticality calculations used in the safety evaluation, in particular the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (k eff) evaluations based on best-available data and methods and applies the approach to representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides, but also calculated sensitivities, nuclear data uncertainties, and the limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate k eff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used to generate conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach to representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.

  15. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms on the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied its effect on noise reduction and image registration accuracy. To quantify the noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To study the improvement in image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI, and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum improvement in PSNR of 10 dB with respect to the noisy image was calculated. The improvements in MI and PCC were 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing the noise in low-dose CBCT images and improving the similarity metrics in terms of MI and PCC.
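    A minimal gradient-descent sketch of total-variation denoising (with a Charbonnier-smoothed TV term) together with the PSNR metric; this is a generic illustration of the algorithm class described above, not the authors' implementation:

    ```python
    import numpy as np

    def tv_denoise(img, weight=0.1, n_iter=100, step=0.1, eps=0.05):
        """Gradient descent on 0.5*||u - img||^2 + weight * sum |grad u|,
        with the TV term smoothed as sqrt(|grad u|^2 + eps^2) so its
        gradient is well defined; step and eps are chosen small enough
        for a stable explicit scheme."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
            gy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
            px, py = gx / mag, gy / mag
            div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
            u -= step * ((u - img) - weight * div)
        return u

    def psnr(ref, test, data_range=1.0):
        """Peak signal-to-noise ratio in dB."""
        mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
        return 10.0 * np.log10(data_range ** 2 / mse)
    ```

    TV regularization is attractive here because it suppresses quantum noise in flat regions while preserving the sharp anatomical edges that drive MI- and correlation-based registration.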

  16. Measurement and simulation of thermal neutron flux distribution in the RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.

    2018-01-01

    The in-core thermal neutron flux distribution was determined by measurement and simulation for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurements using Self Powered Neutron Detectors (SPNDs) were performed to verify and validate the computational methods for neutron flux calculation in the RTP. The experimental results were used to validate the calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Both the measurement and the simulation show that the thermal flux profile peaks at the centre of the core and gradually decreases towards its outer side. The results show relatively good agreement between calculation and measurement, with both exhibiting the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% compared to the SPND measurements. Since the model also predicts the in-core neutron flux distribution well, it can be used for the characterization of the full core, that is, neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.

  17. Comparison of Dorris-Gray and Schultz methods for the calculation of surface dispersive free energy by inverse gas chromatography.

    PubMed

    Shi, Baoli; Wang, Yue; Jia, Lina

    2011-02-11

    Inverse gas chromatography (IGC) is an important technique for the characterization of the surface properties of solid materials. In a standard surface characterization, the surface dispersive free energy of the solid stationary phase is first determined using a series of linear alkanes as molecular probes, and the acid-base parameters are then calculated from the dispersive parameters. For the calculation of the surface dispersive free energy, however, two different methods are generally used: the Dorris-Gray method and the Schultz method. In this paper, the results of the Dorris-Gray and Schultz methods are compared by calculating their ratio from the basic equations and parameters of each. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and that the ratio grows larger as the measurement temperature increases. Compared with the parameters in solvents handbooks, it appears that the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method are not sufficiently accurate, as supported by a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.
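The Dorris-Gray method referred to above derives the dispersive surface free energy from the free-energy increment per CH2 group between consecutive n-alkane probes, ΔG(CH2) = RT ln(V(n+1)/V(n)). A hedged Python sketch (the CH2 cross-sectional area and CH2 surface tension below are typical literature values, assumed here only for illustration):

```python
import math

# Physical constants and CH2-group parameters; the last two are typical
# literature figures and should be treated as assumptions of this sketch.
R = 8.314            # gas constant, J/(mol K)
N_A = 6.022e23       # Avogadro's number, 1/mol
A_CH2 = 6.0e-20      # cross-sectional area of a CH2 group, m^2
GAMMA_CH2 = 35.6e-3  # surface tension of a CH2 (polyethylene-like) surface, J/m^2

def dorris_gray(vn, vn1, temperature):
    """Dispersive surface free energy (J/m^2) from net retention volumes of
    two consecutive n-alkane probes, via the Dorris-Gray relation
    gamma_d = dG_CH2^2 / (4 N_A^2 a_CH2^2 gamma_CH2)."""
    dg_ch2 = R * temperature * math.log(vn1 / vn)  # J/mol per CH2 group
    return dg_ch2 ** 2 / (4.0 * N_A ** 2 * A_CH2 ** 2 * GAMMA_CH2)
```

With a retention-volume ratio of 2 at 300 K this yields roughly 16 mJ/m², a plausible order of magnitude for a low-energy solid surface.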

  18. [Martin Heidegger, beneficence, health, and evidence based medicine--contemplations regarding ethics and complementary and alternative medicine].

    PubMed

    Oberbaum, Menachem; Gropp, Cornelius

    2015-03-01

    Beneficence is considered a core principle of medical ethics. Evidence Based Medicine (EBM) is used almost synonymously with beneficence and has become the gold standard of efficacy in conventional medicine. Conventional modern medicine, and EBM in particular, are based on what Heidegger called calculative thinking, whereas complementary medicine (CM) is often based on contemplative thinking, according to Heidegger's distinction between different thinking processes. A central issue of beneficence is the striving for health and wellbeing. EBM is directly concerned with wellbeing only to a limited extent, though it does claim to improve quality of life by correcting pathological processes and conditions such as infectious diseases and ischemic heart disease, but also hypertension and hyperlipidemia. On the other hand, wellbeing is central to the therapeutic efforts of CM. Scientific methods to gauge the results of EBM are quantitative and based on calculative thinking, while the results of treatments with CM are expressed qualitatively and based on meditative thinking. In order to maximize beneficence, it seems important and feasible to use both approaches, by combining EBM and CM in the best interest of the individual patient.

  19. Thermodynamic description of multicomponent nickel-base superalloys containing aluminum, chromium, ruthenium and platinum: A computational thermodynamic approach coupled with experiments

    NASA Astrophysics Data System (ADS)

    Zhu, Jun

    Ru and Pt are candidate alloying additions for improving the high temperature properties of Ni-base superalloys. A thermodynamic description of the Ni-Al-Cr-Ru-Pt system, serving as an essential knowledge base for better alloy design and processing control, was developed in the present study by means of thermodynamic modeling coupled with experimental investigations of phase equilibria. To deal with the order/disorder transition occurring in Ni-base superalloys, a physically sound model, the Cluster/Site Approximation (CSA), was used to describe the fcc phases. The CSA offers computational advantages, without loss of accuracy, over the Cluster Variation Method (CVM) in the calculation of multicomponent phase diagrams, and it has been successfully applied to fcc phases in calculating the technologically important Ni-Al-Cr phase diagrams. Our effort in this study focused on the two key ternary systems, Ni-Al-Ru and Ni-Al-Pt. The CSA-calculated Ni-Al-Ru ternary phase diagrams are in good agreement with the experimental results in the literature and from the current study. A thermodynamic description of the quaternary Ni-Al-Cr-Ru system was obtained based on the descriptions of the lower order systems, and the calculated results agree with the experimental data available in the literature and in the current study. The Ni-Al-Pt system was thermodynamically modeled based on the limited experimental data available in the literature and obtained from the current study. With the help of this preliminary description, a number of alloy compositions were selected for further investigation, and the information obtained was used to improve the modeling. A thermodynamic description of the Ni-Al-Cr-Pt quaternary was then obtained via extrapolation from its constituent lower order systems. Finally, the thermodynamic description for Ni-base superalloys containing Al, Cr, Ru and Pt was obtained via extrapolation. It is believed to be reliable and useful for guiding alloy design and further experimental investigation.

  20. Calculating permittivity of semi-conductor fillers in composites based on simplified effective medium approximation models

    NASA Astrophysics Data System (ADS)

    Feng, Yefeng; Wu, Qin; Hu, Jianbing; Xu, Zhichao; Peng, Cheng; Xia, Zexu

    2018-03-01

    Interface-induced polarization has a significant impact on the permittivity of 0–3 type polymer composites with Si based semi-conducting fillers. The polarity of the Si based filler, the polarity of the polymer matrix and the grain size of the filler are closely connected with the induced polarization and the permittivity of the composites. However, unlike in 2–2 type composites, the real permittivity of Si based fillers in 0–3 type composites cannot be directly measured. Therefore, deriving the theoretical permittivity of fillers in 0–3 composites through effective medium approximation (EMA) models is necessary. In this work, the real permittivity of Si based semi-conducting fillers in ten different 0–3 polymer composite systems was calculated by linear fitting of simplified EMA models, based on the particular parameters reported for those composites. The results further confirmed the proposed interface-induced polarization and verified the significant influence of filler polarity, polymer polarity and filler size on the induced polarization and permittivity of the composites. High self-consistency was obtained between the present modelling and prior measurements. This work may offer a facile and effective route to obtaining dielectric properties of a discrete filler phase that are difficult to measure in some polymer based composite systems.
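The abstract does not spell out which simplified EMA models were fitted; the Maxwell-Garnett mixing rule is one common EMA for 0–3 particulate composites and serves here only as an illustrative stand-in for the forward model that such a fit inverts:

```python
def maxwell_garnett(eps_matrix, eps_filler, volume_fraction):
    """Effective permittivity of a dilute 0-3 composite from the
    Maxwell-Garnett mixing rule (one common EMA; the paper's simplified
    models may differ). All permittivities are relative (dimensionless)."""
    beta = (eps_filler - eps_matrix) / (eps_filler + 2.0 * eps_matrix)
    f = volume_fraction
    return eps_matrix * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)
```

Fitting measured composite permittivities against such a rule at known volume fractions is one way to back out the otherwise unmeasurable filler permittivity.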

  1. Dynamical scales for multi-TeV top-pair production at the LHC

    NASA Astrophysics Data System (ADS)

    Czakon, Michał; Heymes, David; Mitov, Alexander

    2017-04-01

    We calculate all major differential distributions with stable top-quarks at the LHC. The calculation covers the multi-TeV range that will be explored during LHC Run II and beyond. Our results are in the form of high-quality binned distributions. We offer predictions based on three different parton distribution function (pdf) sets. In the near future we will make our results available also in the more flexible fastNLO format that allows fast re-computation with any other pdf set. In order to be able to extend our calculation into the multi-TeV range we have had to derive a set of dynamic scales. Such scales are selected based on the principle of fastest perturbative convergence applied to the differential and inclusive cross-section. Many observations from our study are likely to be applicable and useful to other precision processes at the LHC. With scale uncertainty now under good control, pdfs arise as the leading source of uncertainty for TeV top production. Based on our findings, true precision in the boosted regime will likely only be possible after new and improved pdf sets appear. We expect that LHC top-quark data will play an important role in this process.

  2. Justification of the estimation technique for the technical condition of the tank with inadmissible imperfections in the wall shape

    NASA Astrophysics Data System (ADS)

    Chepur, Petr; Tarasenko, Alexander; Gruchenkova, Alesya

    2017-10-01

    The paper focuses on the problem of estimating the stress-strain state of vertical steel tanks with inadmissible geometric imperfections in the wall shape. The authors refer to an actual tank to demonstrate that the use of certain design schemes can lead to gross errors and, accordingly, to unreliable results; obviously, such design schemes cannot be relied upon when choosing real repair technologies. For that reason, the authors performed calculations for a tank removed from service for repair, based on a finite-element model of the VST-5000 tank with a conical roof developed for this purpose. The proposed approach was developed for the analysis of the stress-strain state (SSS) of a tank with geometric imperfections of the wall shape. Based on the results, it was proposed to amend the Annex A methodology "Method for calculating the stress-strain state of the tank wall during repair by lifting the tank and replacing the wall metal structures" by adding a requirement to consider the actual stiffness of the entire VST structure, including its roof, when calculating the structural stress-strain state.

  3. A novel hazard assessment method for biomass gasification stations based on extended set pair analysis

    PubMed Central

    Yan, Fang; Xu, Kaili; Li, Deshun; Cui, Zhikai

    2017-01-01

    Biomass gasification stations face many hazard factors, and hazard assessment is therefore necessary for them. In this study, a novel hazard assessment method called extended set pair analysis (ESPA) is proposed based on set pair analysis (SPA). In SPA, however, calculating the connection degree (CD) requires hazard grades and their corresponding thresholds to be classified in advance. For hazard assessment with ESPA, a novel algorithm for calculating the CD is worked out for the case where hazard grades and their corresponding thresholds are unknown. The CD can then be converted into a Euclidean distance (ED) by a simple and concise calculation, and the hazard of each sample is ranked based on the value of the ED. In this paper, six biomass gasification stations are assessed using ESPA and general set pair analysis (GSPA), respectively. Comparison of the hazard assessment results obtained from ESPA and GSPA demonstrates the applicability and validity of ESPA for the hazard assessment of biomass gasification stations. The reasonableness of ESPA is also justified by a sensitivity analysis of the hazard assessment results obtained by ESPA and GSPA. PMID:28938011
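The abstract does not give the exact CD-to-ED conversion used by ESPA. One plausible reading, sketched below purely as an assumption, treats the connection degree as its (identity, discrepancy, contrary) components, measures each sample's Euclidean distance from the ideal state (1, 0, 0), and ranks samples by that distance:

```python
import math

def euclidean_distance_to_ideal(cd):
    """Distance of a connection degree (a, b, c) from the ideal state (1, 0, 0).

    In set pair analysis, a, b and c are the identity, discrepancy and
    contrary components with a + b + c = 1. The actual ESPA conversion is not
    stated in the abstract; this ideal-point distance is only an assumption.
    """
    a, b, c = cd
    return math.sqrt((1.0 - a) ** 2 + b ** 2 + c ** 2)

def rank_by_hazard(samples):
    """Rank sample names from most to least hazardous (largest ED first),
    given a dict mapping name -> (a, b, c)."""
    return sorted(samples,
                  key=lambda s: euclidean_distance_to_ideal(samples[s]),
                  reverse=True)
```

A sample far from the ideal identity state then ranks as more hazardous.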

  4. Induced drag of multiplanes

    NASA Technical Reports Server (NTRS)

    Prandtl, L

    1924-01-01

    The most important part of the resistance or drag of a wing system, the induced drag, can be calculated theoretically when the distribution of lift on the individual wings is known. The calculation is based upon the assumption that the lift on each wing is distributed along the span in proportion to the ordinates of a semi-ellipse. Formulas and numerical tables are given for calculating the drag. In this connection, the most favorable arrangements of biplanes and triplanes are discussed and the results are further elucidated by means of numerical examples.
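For a single wing with the elliptic lift distribution assumed above, the classical result is D_i = L² / (π q b²), where q is the dynamic pressure and b the span. A small sketch (the function name is an illustrative choice):

```python
import math

def induced_drag(lift, dynamic_pressure, span):
    """Induced drag of a monoplane with elliptic lift distribution:
    D_i = L^2 / (pi * q * b^2). Consistent units, e.g. N, Pa, m -> N."""
    return lift ** 2 / (math.pi * dynamic_pressure * span ** 2)
```

The multiplane results in the report generalize this by accounting for the mutual interference between the wings.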

  5. First-principles study of length dependence of conductance in alkanedithiols

    NASA Astrophysics Data System (ADS)

    Zhou, Y. X.; Jiang, F.; Chen, H.; Note, R.; Mizuseki, H.; Kawazoe, Y.

    2008-01-01

    Electronic transport properties of alkanedithiols are calculated by a first-principles method based on density functional theory and the nonequilibrium Green's function formalism. At small bias, the I-V characteristics are linear and the resistances conform to Magoga's exponential law. The calculated length-dependent decay constant γ, which reflects the effect of the internal molecular structure, is in quantitative accordance with most experiments. The calculated effective contact resistance R0 is also in good agreement with the results of repeated measurements of molecule-electrode junctions [B. Xu and N. Tao, Science 301, 1221 (2003)].
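The exponential law referred to above states that the low-bias resistance grows as R = R0 exp(γN) with molecular length N (e.g. number of CH2 units), so γ and the contact resistance R0 can be extracted by a linear least-squares fit of ln R against N. A sketch with that fit (the data shape is illustrative, not from the paper):

```python
import math

def fit_exponential_law(lengths, resistances):
    """Least-squares fit of R = R0 * exp(beta * N) via linear regression on
    ln(R) against N; returns (R0, beta)."""
    n = len(lengths)
    logs = [math.log(r) for r in resistances]
    mean_x = sum(lengths) / n
    mean_y = sum(logs) / n
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, logs))
            / sum((x - mean_x) ** 2 for x in lengths))
    r0 = math.exp(mean_y - beta * mean_x)
    return r0, beta
```

The slope of the fit gives the decay constant and the intercept the effective contact resistance.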

  6. Calculation of NMR chemical shifts. 7. Gauge-invariant INDO method

    NASA Astrophysics Data System (ADS)

    Fukui, H.; Miura, K.; Hirai, A.

    A gauge-invariant INDO method based on coupled Hartree-Fock perturbation theory is presented and applied to the calculation of 1H and 13C chemical shifts of hydrocarbons, including ring compounds. The invariance of the diamagnetic and paramagnetic shieldings with respect to displacement of the coordinate origin is discussed. Comparison between calculated and experimental results exhibits fairly good agreement, provided that the INDO parameters of Ellis et al. (J. Am. Chem. Soc. 94, 4069 (1972)) are used with the inclusion of all multicenter one-electron integrals.

  7. Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow

    NASA Astrophysics Data System (ADS)

    Kemerink, G. J.; Pleiter, F.

    1986-08-01

    The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.

  8. Finite element validation of stress intensity factor calculation models for thru-thickness and thumb-nail cracks in double edge notch specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beres, W.; Koul, A.K.

    1994-09-01

    Stress intensity factors for thru-thickness and thumb-nail cracks in the double edge notch specimens, containing two different notch radius (R) to specimen width (W) ratios (R/W = 1/8 and 1/16), are calculated through finite element analysis. The finite element results are compared with predictions based on existing empirical models for SIF calculations. The effects of a change in R/W ratio on SIF of thru-thickness and thumb-nail cracks are also discussed. 34 refs.

  9. Accuracy Test of the OPLS-AA Force Field for Calculating Free Energies of Mixing and Comparison with PAC-MAC

    PubMed Central

    2017-01-01

    We have calculated the excess free energy of mixing of 1053 binary mixtures with the OPLS-AA force field using two different methods: thermodynamic integration (TI) of molecular dynamics simulations and the Pair Configuration to Molecular Activity Coefficient (PAC-MAC) method. PAC-MAC is a force-field-based quasi-chemical method for predicting the miscibility properties of binary mixtures. The TI calculations yield a root mean squared error (RMSE) compared to experimental data of 0.132 kBT (0.37 kJ/mol). PAC-MAC shows an RMSE of 0.151 kBT with a calculation speed potentially 1.0 × 10⁴ times greater than TI. OPLS-AA force field parameters are optimized using PAC-MAC based on vapor–liquid equilibrium data, instead of enthalpies of vaporization or densities. The RMSE of PAC-MAC is reduced to 0.099 kBT by optimizing 50 force field parameters. The resulting OPLS-PM force field has an accuracy comparable to that of the OPLS-AA force field in the calculation of mixing free energies using TI. PMID:28418655

  10. AtomicChargeCalculator: interactive web-based calculation of atomic charges in large biomolecular complexes and drug-like molecules.

    PubMed

    Ionescu, Crina-Maria; Sehnal, David; Falginella, Francesco L; Pant, Purbaj; Pravda, Lukáš; Bouchal, Tomáš; Svobodová Vařeková, Radka; Geidl, Stanislav; Koča, Jaroslav

    2015-01-01

    Partial atomic charges are a well-established concept, useful in understanding and modeling the chemical behavior of molecules, from simple compounds, to large biomolecular complexes with many reactive sites. This paper introduces AtomicChargeCalculator (ACC), a web-based application for the calculation and analysis of atomic charges which respond to changes in molecular conformation and chemical environment. ACC relies on an empirical method to rapidly compute atomic charges with accuracy comparable to quantum mechanical approaches. Due to its efficient implementation, ACC can handle any type of molecular system, regardless of size and chemical complexity, from drug-like molecules to biomacromolecular complexes with hundreds of thousands of atoms. ACC writes out atomic charges into common molecular structure files, and offers interactive facilities for statistical analysis and comparison of the results, in both tabular and graphical form. Due to high customizability and speed, easy streamlining and the unified platform for calculation and analysis, ACC caters to all fields of life sciences, from drug design to nanocarriers. ACC is freely available via the Internet at http://ncbr.muni.cz/ACC.

  11. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate Lyapunov exponents are obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow, become unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if no NaN or Inf appears, then as the number of iterations increases, all computed Lyapunov exponents approach the largest one, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are obviously also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper develops two improved algorithms, based on QR and SVD orthogonal decompositions, to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
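The QR approach mentioned above sidesteps overflow by re-orthonormalizing the propagated tangent vectors at every step and accumulating the logarithms of the diagonal of R, instead of multiplying Jacobians directly. A self-contained sketch for the Hénon map (the map, initial point and iteration count are illustrative choices, not from the paper):

```python
import math

def henon_jacobian(x, a=1.4, b=0.3):
    """Jacobian of the Henon map (x, y) -> (1 - a*x^2 + y, b*x)."""
    return [[-2.0 * a * x, 1.0], [b, 0.0]]

def qr_2x2(m):
    """Gram-Schmidt QR decomposition of a 2x2 matrix given as rows."""
    (a, b), (c, d) = m
    norm1 = math.hypot(a, c)
    q1 = (a / norm1, c / norm1)
    proj = q1[0] * b + q1[1] * d          # component of column 2 along q1
    u0, u1 = b - proj * q1[0], d - proj * q1[1]
    norm2 = math.hypot(u0, u1)
    q = [[q1[0], u0 / norm2], [q1[1], u1 / norm2]]
    r = [[norm1, proj], [0.0, norm2]]
    return q, r

def lyapunov_exponents(n_iter=5000, a=1.4, b=0.3):
    """QR-based Lyapunov exponents of the Henon map: at each step propagate
    an orthonormal frame through the Jacobian, re-orthonormalize with QR,
    and accumulate log|R_ii|, avoiding overflow of the raw product."""
    x, y = 0.1, 0.1
    q = [[1.0, 0.0], [0.0, 1.0]]
    sums = [0.0, 0.0]
    for _ in range(n_iter):
        j = henon_jacobian(x, a, b)
        m = [[j[0][0] * q[0][0] + j[0][1] * q[1][0],
              j[0][0] * q[0][1] + j[0][1] * q[1][1]],
             [j[1][0] * q[0][0] + j[1][1] * q[1][0],
              j[1][0] * q[0][1] + j[1][1] * q[1][1]]]   # J @ Q
        q, r = qr_2x2(m)
        sums[0] += math.log(abs(r[0][0]))
        sums[1] += math.log(abs(r[1][1]))
        x, y = 1.0 - a * x * x + y, b * x
    return [s / n_iter for s in sums]
```

For the standard parameters the largest exponent converges toward the literature value of about 0.42, and the two exponents sum to ln b, the log-determinant of the Jacobian.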

  12. Relativistic effects on the NMR parameters of Si, Ge, Sn, and Pb alkynyl compounds: Scalar versus spin-orbit effects

    NASA Astrophysics Data System (ADS)

    Demissie, Taye B.

    2017-11-01

    The NMR chemical shifts and indirect spin-spin coupling constants of 12 molecules containing 29Si, 73Ge, 119Sn, and 207Pb [X(CCMe)4, Me2X(CCMe)2, and Me3XCCH] are presented. The results are obtained from non-relativistic as well as two- and four-component relativistic density functional theory (DFT) calculations. The scalar and spin-orbit relativistic contributions as well as the total relativistic corrections are determined. The main relativistic effect in these molecules is not due to spin-orbit coupling but rather to the scalar relativistic contraction of the s-shells. The correlation between the calculated and experimental indirect spin-spin coupling constants showed that the four-component relativistic DFT approach using Perdew's hybrid-scheme exchange-correlation functional (PBE0, based on the Perdew-Burke-Ernzerhof exchange and correlation functionals) gives results in good agreement with experimental values. The indirect spin-spin coupling constants calculated using the spin-orbit zeroth order regular approximation together with the hybrid PBE0 functional and the specially designed J-coupling (JCPL) basis sets are in good agreement with the results obtained from the four-component relativistic calculations. For the coupling constants involving the heavy atoms, the relativistic corrections are of the same order of magnitude as the non-relativistically calculated values. Based on comparisons of the calculated results with available experimental values, the best results for all the chemical shifts and for the experimentally unavailable indirect spin-spin coupling constants of all the molecules are reported, in the hope that these accurate results will be used to benchmark future DFT calculations. The present study also demonstrates that the four-component relativistic DFT method has reached a level of maturity that makes it a convenient and accurate tool for calculating indirect spin-spin coupling constants of "large" molecular systems involving heavy atoms.

  13. Numerical study of vortex rope during load rejection of a prototype pump-turbine

    NASA Astrophysics Data System (ADS)

    Liu, J. T.; Liu, S. H.; Sun, Y. K.; Wu, Y. L.; Wang, L. Q.

    2012-11-01

    A transient load-rejection process of a prototype pump-turbine was studied by three-dimensional unsteady simulations, as well as steady calculations. The dynamic mesh (DM) and remeshing methods were used to simulate the rotation of the guide vanes and runner, and the rotational speed of the runner was predicted by a fluid coupling method. Both the transient and steady calculations were performed based on a turbulence model. Results show that the steady calculation results have large errors in predicting the external characteristics of the transient process. The runaway speed can reach 1.15 times the initial rotational speed during the transient process. The vortex rope occurs before the pump-turbine reaches the zero-torque point, and it rotates in the same direction as the runner. The vortex rope separates into two parts as the flow rate decreases to zero, and the pressure level decreases during the whole transient process. The transient simulation results were also compared with and verified against experimental results. This computational method could be used in the fault diagnosis of transient operation, as well as in the optimization of a transient process.

  14. Vicinage effect in the energy loss of H2 dimers: Experiment and calculations based on time-dependent density-functional theory

    NASA Astrophysics Data System (ADS)

    Koval, N. E.; Borisov, A. G.; Rosa, L. F. S.; Stori, E. M.; Dias, J. F.; Grande, P. L.; Sánchez-Portal, D.; Muiño, R. Díez

    2017-06-01

    We present a combined theoretical and experimental study of the energy loss of H2+ molecular ions interacting with thin oxide and carbon films. As a result of quantum mechanical interference in the response of the target electrons, the energy loss of a molecular projectile differs from the sum of the energy losses of individual atomic projectiles; this difference is known as the vicinage effect. Calculations based on time-dependent density functional theory allow a first-principles description of the dynamics of the target excitations produced by the correlated motion of the nuclei forming the molecule. We investigate in detail the dependence of the vicinage effect on the speed and charge state of the projectile and find excellent agreement between calculated and measured data.

  15. Purcell effect in triangular plasmonic nanopatch antennas with three-layer colloidal quantum dots

    NASA Astrophysics Data System (ADS)

    Eliseev, S. P.; Kurochkin, N. S.; Vergeles, S. S.; Sychev, V. V.; Chubich, D. A.; Argyrakis, P.; Kolymagin, D. A.; Vitukhnovskii, A. G.

    2017-05-01

    A model describing a plasmonic nanopatch antenna based on triangular silver nanoprisms and multilayer cadmium chalcogenide quantum dots is introduced. Electromagnetic-field distributions in nanopatch antennas with different orientations of the quantum-dot dipoles are calculated for the first time with the finite element method for numerical electrodynamics simulations. The energy flux through the surface of an emitting quantum dot is calculated for configurations with the dot in free space, on an aluminum substrate, and in a nanopatch antenna. It is shown that the radiative part of the Purcell factor is as large as 1.7 × 10². The calculated photoluminescence lifetimes of a CdSe/CdS/ZnS colloidal quantum dot in a nanopatch antenna based on a silver nanoprism agree well with the experimental results.
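For orientation, the textbook cavity-QED expression for the Purcell factor, F_P = (3/4π²)(λ/n)³(Q/V), relates the emission enhancement to the cavity quality factor Q and mode volume V. The paper's FEM calculation is far more detailed; this closed form is only a rough companion for order-of-magnitude checks:

```python
import math

def purcell_factor(wavelength, refractive_index, quality_factor, mode_volume):
    """Textbook Purcell factor F_P = (3 / 4 pi^2) * (lambda / n)^3 * (Q / V).
    wavelength and mode_volume must use consistent length units."""
    return (3.0 / (4.0 * math.pi ** 2)) * \
           (wavelength / refractive_index) ** 3 * \
           (quality_factor / mode_volume)
```

Radiative enhancements of order 10², as reported above, require the tiny mode volumes characteristic of plasmonic gap antennas.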

  16. Load Carrying Capacity of Metal Dowel Type Connections of Timber Structures

    NASA Astrophysics Data System (ADS)

    Gocál, Jozef

    2014-12-01

    This paper deals with the calculation of the load-carrying capacity of laterally loaded metal dowel-type connections according to Eurocode 5. The calculation is based on analytically derived, relatively complicated mathematical relationships, and it can therefore be quite laborious in practical use. The aim is to propose a possible simplification of the calculation. Owing to the great variability of fastener types and connection arrangements, attention is paid to the most commonly used nailed connections. A quite extensive parametric study was performed on the load-carrying capacity of single-shear and double-shear nailed connections joining two or three timber members of softwood or hardwood. Based on the study results, simplifying recommendations for practical design are presented in the conclusion.

  17. Sources of Individual Differences in Emerging Competence With Numeration Understanding Versus Multidigit Calculation Skill

    PubMed Central

    Fuchs, Lynn S.; Geary, David C.; Fuchs, Douglas; Compton, Donald L.; Hamlett, Carol L.

    2014-01-01

    This study investigated contributions of general cognitive abilities and foundational mathematical competencies to numeration understanding (i.e., base-10 structure) versus multidigit calculation skill. Children (n = 394, M = 6.5 years) were assessed on general cognitive abilities and foundational numerical competencies at start of 1st grade; on the same numerical competencies, multidigit calculation skill, and numeration understanding at end of 2nd grade; and on multidigit calculation skill and numeration understanding at end of 3rd grade. Path-analytic mediation analysis revealed that general cognitive predictors exerted more direct and more substantial effects on numeration understanding than on multidigit calculations. Foundational mathematics competencies contributed to both outcomes, but largely via 2nd-grade mathematics achievement, and results suggest a mutually supportive role between numeration understanding and multidigit calculations. PMID:25284885

  18. Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)

    NASA Astrophysics Data System (ADS)

    Li, L.; Wu, Y.

    2017-12-01

    Gravity and magnetic fields are potential fields, whose interpretation is inherently non-unique. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to determine homologous gravity and magnetic anomalies and reduce this ambiguity. The traditional combined analysis uses a linear regression of the reduction-to-pole (RTP) magnetic anomaly against the first order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation by calculating the correlation coefficient, slope and intercept. In this process, due to the effect of remanent magnetization, the RTP anomaly still contains the effect of oblique magnetization, and homologous gravity and magnetic anomalies can then appear uncorrelated in the linear regression. The normalized source strength (NSS), which can be computed from the magnetic tensor matrix, is insensitive to remanence. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed into the pseudomagnetic tensor matrix for magnetization along the geomagnetic field direction under the homologous condition. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated and a linear regression analysis is carried out. The calculated correlation coefficient, slope and intercept indicate the homology level, the Poisson's ratio and the distribution of remanence, respectively. We test the approach using a synthetic model under complex magnetization; the results show that it can still identify a common source under strong remanence and establish the Poisson's ratio. Finally, the approach is applied to data from China, and the results demonstrate that it is feasible.
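The NSS is commonly computed from the eigenvalues λ1 ≥ λ2 ≥ λ3 of the (pseudo)magnetic gradient tensor as μ = sqrt(−λ2² − λ1λ3). The abstract does not spell out its formulation, so the sketch below assumes that standard one:

```python
import numpy as np

def normalized_source_strength(tensor):
    """NSS of a symmetric 3x3 magnetic gradient tensor:
    mu = sqrt(-lambda2^2 - lambda1 * lambda3), with eigenvalues sorted
    lambda1 >= lambda2 >= lambda3. Assumes the standard NSS formulation,
    which is insensitive to the magnetization (remanence) direction."""
    lam = np.sort(np.linalg.eigvalsh(np.asarray(tensor, dtype=float)))[::-1]
    return float(np.sqrt(-lam[1] ** 2 - lam[0] * lam[2]))
```

Because the tensor is traceless for a potential field, the expression under the square root is non-negative for physically realizable gradient tensors.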

  19. Evaluation of students' knowledge about paediatric dosage calculations.

    PubMed

    Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak

    2018-01-01

    Medication errors are common and may jeopardize the patient safety. As paediatric dosages are calculated based on the child's age and weight, risk of error in dosage calculations is increasing. In paediatric patients, overdose drug prescribed regardless of the child's weight, age and clinical picture may lead to excessive toxicity and mortalities while low doses may delay the treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This research, which is of retrospective type, covers a population consisting of all the 3rd grade students at the bachelor's degree in May, 2015 (148 students). Drug dose calculation questions in exam papers including 3 open ended questions on dosage calculation problems, addressing 5 variables were distributed to the students and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distribution were calculated and Spearman correlation analysis was applied. Exam question on the dosage calculation based on child's age, which is the most common method in paediatrics, and which ensures right dosages and drug dilution was answered correctly by 87.1% of the students while 9.5% answered it wrong and 3.4% left it blank. 69.6% of the students was successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to Ml/dzy calculation were correct. Moreover, student's four operation skills were assessed and 68.2% of the students were determined to have found the correct answer. When the relation among the questions on medication was examined, a significant relation (correlation) was determined between them. It is seen that in dosage calculations, the students failed mostly in calculating ml/dzy (decimal). This result means that as dosage calculations are based on decimal values, calculations may be ten times erroneous when the decimal point is placed wrongly. 
Moreover, the students were also seen to lack mathematical knowledge of the four basic operations and of calculating the safe dose range. The correlations among the medication questions suggest that a student who miscalculates one dosage is likely to make other errors as well. Additional courses, exercises or different teaching techniques may be suggested to remedy the students' deficiencies in basic mathematics, problem-solving skills and correct dosage calculation. Copyright © 2017 Elsevier Ltd. All rights reserved.
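    The weight-based calculations the exam assessed can be sketched as follows. This is a minimal illustration of the arithmetic only; the drug figures (mg/kg, safe range, concentration) are made-up assumptions, not clinical guidance.

```python
def daily_dose_mg(weight_kg, mg_per_kg_per_day):
    """Total daily dose from body weight (weight-based dosing)."""
    return weight_kg * mg_per_kg_per_day

def within_safe_range(dose_mg, low_mg, high_mg):
    """Check a prescribed dose against the safe dose range."""
    return low_mg <= dose_mg <= high_mg

def volume_ml(dose_mg, concentration_mg_per_ml):
    """Convert a dose in mg to a volume in mL; a misplaced decimal
    point in this step produces a tenfold error."""
    return dose_mg / concentration_mg_per_ml

# hypothetical example: 12 kg child, 15 mg/kg/day, 40 mg/mL suspension
dose = daily_dose_mg(weight_kg=12.0, mg_per_kg_per_day=15.0)  # 180.0 mg/day
print(dose, within_safe_range(dose, 120.0, 240.0), volume_ml(dose, 40.0))
```

    A single shifted decimal in `volume_ml` (e.g. entering 4.0 mg/mL instead of 40.0) yields ten times the intended volume, which is the failure mode the study highlights.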

  20. Abacus Training Modulates the Neural Correlates of Exact and Approximate Calculations in Chinese Children: An fMRI Study

    PubMed Central

    Du, Fenglei; Chen, Feiyan; Li, Yongxin; Hu, Yuzheng; Tian, Mei; Zhang, Hong

    2013-01-01

    Exact (EX) and approximate (AP) calculations rely on distinct neural circuits. However, the effect of training on the neural correlates of EX and AP calculations is largely unknown, especially for AP calculation. Abacus-based mental calculation (AMC) is a particular arithmetic skill that can be acquired through long-term abacus training. The present study investigated whether and how abacus training modulates the neural correlates of EX and AP calculations using functional magnetic resonance imaging (fMRI). Neural activations were measured in 20 abacus-trained and 19 nontrained Chinese children during AP and EX calculation tasks. Our results demonstrated that: (1) in nontrained children, similar neural regions were activated in both tasks, while the activated regions were larger in the AP task than in the EX task; (2) in abacus-trained children, no significant difference was found between the two tasks; (3) more visuospatial areas were activated in abacus-trained children during the EX task than in nontrained children. These results suggest that nontrained children used more visuospatial strategies in the AP task than in the EX task; that abacus-trained children adopted a similar strategy in both tasks; and that, after long-term abacus training, children were more inclined to apply a visuospatial strategy when processing EX calculations. PMID:24288683

  1. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the principal algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms produced by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation for the three methods are demonstrated. To suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
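    The point-based idea can be sketched in a few lines: each object point contributes a spherical wave, approximated in the paraxial regime by a Fresnel zone-plate phase on the hologram plane, and the phase-only hologram is the argument of the summed field. This is a generic illustration, not the paper's implementation; the wavelength, pixel pitch, resolution and object points are all assumed values.

```python
import numpy as np

wavelength = 532e-9          # m (assumed green laser)
pitch = 8e-6                 # hologram pixel pitch, m (assumed)
N = 256                      # hologram resolution N x N (assumed)
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

# object points: (x0, y0, distance z0 to hologram plane, amplitude)
points = [(0.0, 0.0, 0.1, 1.0), (2e-4, -1e-4, 0.12, 0.8)]

field = np.zeros((N, N), dtype=complex)
for x0, y0, z0, a in points:
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    # paraxial (Fresnel) phase of a spherical wave from the point,
    # i.e. a Fresnel zone-plate pattern centered on (x0, y0)
    phase = np.pi * r2 / (wavelength * z0)
    field += a * np.exp(1j * phase)

hologram = np.angle(field)   # phase-only hologram in [-pi, pi]
print(hologram.shape)
```

    The slice-based method differs in that object points are grouped into depth slices and each slice is propagated as a whole (e.g. by an FFT-based Fresnel transform), trading per-point accuracy for speed.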

  2. Mapping Base Modifications in DNA by Transverse-Current Sequencing

    NASA Astrophysics Data System (ADS)

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2018-02-01

    Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.

  3. On the predictability of outliers in ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Siegert, S.; Bröcker, J.; Kantz, H.

    2012-03-01

    In numerical weather prediction, ensembles are used to retrieve probabilistic forecasts of future weather conditions. We consider events where the verification is smaller than the smallest, or larger than the largest ensemble member of a scalar ensemble forecast. These events are called outliers. In a statistically consistent K-member ensemble, outliers should occur with a base rate of 2/(K+1). In operational ensembles this base rate tends to be higher. We study the predictability of outlier events in terms of the Brier Skill Score and find that forecast probabilities can be calculated which are more skillful than the unconditional base rate. This is shown analytically for statistically consistent ensembles. Using logistic regression, forecast probabilities for outlier events in an operational ensemble are calculated. These probabilities exhibit positive skill which is quantitatively similar to the analytical results. Possible causes of these results as well as their consequences for ensemble interpretation are discussed.
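    The 2/(K+1) base rate follows from exchangeability: if the verification and the K members are i.i.d. draws from the same distribution, each of the K+1 values is equally likely to be the smallest or the largest. A small simulation (distribution and sample sizes are arbitrary choices) reproduces this:

```python
import numpy as np

# In a statistically consistent K-member ensemble, the verification is
# exchangeable with the members, so it falls below the ensemble minimum
# or above the maximum with probability 2/(K+1).
rng = np.random.default_rng(0)
K, n_cases = 10, 200_000
ens = rng.normal(size=(n_cases, K))      # ensemble forecasts
verif = rng.normal(size=n_cases)         # verifying observations

outlier = (verif < ens.min(axis=1)) | (verif > ens.max(axis=1))
print(outlier.mean(), 2 / (K + 1))       # empirical rate vs. theoretical 2/11
```

    Operational ensembles are typically underdispersive, which is why their observed outlier rate exceeds this theoretical value.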

  4. Development of computer-aided design system of elastic sensitive elements of automatic metering devices

    NASA Astrophysics Data System (ADS)

    Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.

    2018-05-01

    The object of this research is the element base of control and automation devices, including annular elastic sensitive elements, methods for modeling them, calculation algorithms, and software complexes for automating their design. The article is devoted to the development of a computer-aided design system for the elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, together with the results of static and dynamic analysis, the elastic elements are calculated using modern software systems based on numerical simulation. In the simulation, the model was discretized with a hexahedral finite-element mesh with a maximum element size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.

  5. Comparison of analytical and numerical approaches for CT-based aberration correction in transcranial passive acoustic imaging

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; Hynynen, Kullervo

    2016-01-01

    Computed tomography (CT)-based aberration corrections are employed in transcranial ultrasound both for therapy and imaging. In this study, analytical and numerical approaches for calculating aberration corrections based on CT data were compared, with a particular focus on their application to transcranial passive imaging. Two models were investigated: a three-dimensional full-wave numerical model (Connor and Hynynen 2004 IEEE Trans. Biomed. Eng. 51 1693-706) based on the Westervelt equation, and an analytical method (Clement and Hynynen 2002 Ultrasound Med. Biol. 28 617-24) similar to that currently employed by commercial brain therapy systems. Trans-skull time delay corrections calculated from each model were applied to data acquired by a sparse hemispherical (30 cm diameter) receiver array (128 piezoceramic discs: 2.5 mm diameter, 612 kHz center frequency) passively listening through ex vivo human skullcaps (n  =  4) to emissions from a narrow-band, fixed source emitter (1 mm diameter, 516 kHz center frequency). Measurements were taken at various locations within the cranial cavity by moving the source around the field using a three-axis positioning system. Images generated through passive beamforming using CT-based skull corrections were compared with those obtained through an invasive source-based approach, as well as images formed without skull corrections, using the main lobe volume, positional shift, peak sidelobe ratio, and image signal-to-noise ratio as metrics for image quality. For each CT-based model, corrections achieved by allowing for heterogeneous skull acoustical parameters in simulation outperformed the corresponding case where homogeneous parameters were assumed. Of the CT-based methods investigated, the full-wave model provided the best imaging results at the cost of computational complexity. These results highlight the importance of accurately modeling trans-skull propagation when calculating CT-based aberration corrections. 
Although presented in an imaging context, our results may also be applicable to the problem of transmit focusing through the skull.
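    Corrections of either kind enter the passive beamforming step as per-element time delays. The following one-dimensional delay-and-sum sketch is only a toy illustration of that step: the array geometry, source frequency and the aberrating "skull" delays are all assumed values, and the correction applied is the (here, known) true delay rather than one derived from CT.

```python
import numpy as np

rng = np.random.default_rng(1)
c, f0 = 1500.0, 5.0e5                      # sound speed (m/s), source freq (Hz)
elems = np.linspace(-0.1, 0.1, 16)         # 1-D receiver array positions (m)
src, depth = 0.02, 0.08                    # source position (m)

dist = np.hypot(elems - src, depth)        # element-to-source distances
skull_delay = rng.uniform(0, 0.5e-6, 16)   # unknown aberrating delays (s)

dt = 1e-8
t = np.arange(0, 2e-4, dt)
# each channel records the tone delayed by propagation plus aberration
sig = [np.sin(2 * np.pi * f0 * (t - d / c - s)) for d, s in zip(dist, skull_delay)]

def beam_power(corrections):
    """Delay-and-sum: advance each channel by its geometric delay plus the
    supplied correction, sum coherently, and return the mean power."""
    total = np.zeros_like(t)
    for s_i, d, corr in zip(sig, dist, corrections):
        shift = int(round((d / c + corr) / dt))
        total[:t.size - shift] += s_i[shift:]
    return np.mean(total ** 2)

# applying the true aberrating delays focuses better than no correction
print(beam_power(skull_delay) > beam_power(np.zeros(16)))
```

    The quality metrics in the study (main lobe volume, positional shift, sidelobe ratio) quantify how much of this coherent gain a CT-derived correction recovers relative to the invasive source-based gold standard.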

  6. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z

    2015-06-15

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems because it can compute quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme that incorporates GPU-based MC into IMPT. Methods: A conventional approach to using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. This is not optimal, because computation is wasted on spots that turn out to have very small weights after the optimization problem is solved. GPU-memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, particles were sampled from all spots together with a Metropolis algorithm, such that the particle number per spot is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We validated the proposed MC-based optimization scheme on one prostate case. The total computation time of our method was ∼5-6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical use.
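    The coupling between simulation effort and spot weights can be illustrated with a toy model: at each iteration the per-spot particle budget is proportional to the latest weights, so low-weight spots receive little simulation effort, and the noisy dose estimates feed a simple projected-gradient re-optimization. Everything here (the dose matrix, the noise model that shrinks as 1/sqrt of particle count, and the optimizer) is an illustrative assumption, not the authors' MC engine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spots, n_vox = 8, 20
D_true = rng.uniform(0.0, 1.0, (n_spots, n_vox))     # true spot dose kernels
target = D_true.T @ rng.uniform(0.2, 1.0, n_spots)   # achievable target dose

w = np.ones(n_spots)
budget = 100_000                                     # particles per iteration
for it in range(500):
    # "MC" dose estimate: statistical noise shrinks with the particle
    # count allocated to each spot, proportional to its current weight
    n_part = np.maximum(1, (budget * w / w.sum()).astype(int))
    noise = rng.normal(0, 1, (n_spots, n_vox)) / np.sqrt(n_part)[:, None]
    D_est = D_true + noise
    # projected-gradient step on || D^T w - target ||^2 subject to w >= 0
    grad = D_est @ (D_est.T @ w - target)
    w = np.maximum(0.0, w - 5e-3 * grad)

residual = np.linalg.norm(D_true.T @ w - target) / np.linalg.norm(target)
print(residual)
```

    The point of the allocation rule is that spots headed toward zero weight stop consuming particles, which is the waste the conventional spot-by-spot approach cannot avoid.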

  7. DFT analysis on the molecular structure, vibrational and electronic spectra of 2-(cyclohexylamino)ethanesulfonic acid

    NASA Astrophysics Data System (ADS)

    Renuga Devi, T. S.; Sharmi kumar, J.; Ramkumaar, G. R.

    2015-02-01

    The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm-1 and 4000-50 cm-1, respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation-consistent polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized, and the structural and vibrational parameters were determined based on it. The complete assignments were performed based on the potential energy distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated with both the Hartree-Fock and density functional methods using the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. 1H and 13C NMR chemical shifts of the molecule were calculated using the Gauge Including Atomic Orbital (GIAO) method and compared with experimental results. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using Natural Bond Orbital (NBO) analysis. The first-order hyperpolarizability (β) and molecular electrostatic potential (MEP) of the molecule were computed using DFT calculations. Electron-density-based local reactivity descriptors such as the Fukui functions were calculated to explain the chemically reactive sites in the molecule.

  8. Blood-Banking Techniques for Plateletpheresis in Swine

    DTIC Science & Technology

    2014-05-01

    automatically calculates the blood volume for that patient according to an internal formula . However, to circumvent the human-based algorithm, for a 60-kg...blood volume (µL) in the percentage recovery formula given earlier. The maximal recovery percent- age was calculated by setting the 3-min results to 100...Control Resuscitation Department, the Veterinary Support Department, and the Laboratory Support Department for their expert technical assistance

  9. Glare effect for three types of street lamps based on White LEDs

    NASA Astrophysics Data System (ADS)

    Sun, Ching-Cherng; Jiang, Chong-Jhih; Chen, Yi-Chun; Yang, Tsung-Hsun

    2014-05-01

    This study assesses the glare effect of LED-based street lamps with three common optical designs: clustered LEDs with a single lens, an LED array combined with a lens array, and a tilted LED array. Observation conditions were simulated for various locations and viewing axes. Equivalent luminance calculations were used to reveal the glare levels of the three designs. The age dependence of the calculated equivalent luminance was also examined for observers aged 40 and 60. The results demonstrate that, among the three design types, an LED array combined with a lens array causes relatively less glare for most viewing conditions.

  10. Specific interactions between mycobacterial FtsZ protein and curcumin derivatives: Molecular docking and ab initio molecular simulations

    NASA Astrophysics Data System (ADS)

    Fujimori, Mitsuki; Sogawa, Haruki; Ota, Shintaro; Karpov, Pavel; Shulga, Sergey; Blume, Yaroslav; Kurita, Noriyuki

    2018-01-01

    Filamentous temperature-sensitive Z (FtsZ) protein plays an essential role in bacterial cell division, and its inhibition prevents mycobacterial reproduction. Here we adopted curcumin derivatives as candidate novel inhibitors and investigated their specific interactions with FtsZ using molecular simulations based on protein-ligand docking, classical molecular mechanics and ab initio fragment molecular orbital (FMO) calculations. Based on the FMO calculations, we identified the most favourable curcumin-binding site on FtsZ and highlighted the key amino acid residues for curcumin binding at an electronic level. These results will be useful for proposing novel FtsZ inhibitors based on curcumin derivatives.

  11. A VaR Algorithm for Warrants Portfolio

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Ni, Liyun; Wang, Xiangrong; Chen, Weizhong

    Based on the Gamma-Vega-Cornish-Fisher methodology, this paper proposes an algorithm for calculating VaR by adjusting the quantile at a given confidence level using the four moments (mean, variance, skewness and kurtosis) of the warrants portfolio return, with the portfolio variance estimated by the EWMA methodology. The proposed algorithm also accounts for the decaying influence of historical returns on future portfolio returns. An empirical study shows that, compared with the Gamma-Cornish-Fisher method and the standard normal method, the VaR calculated by the Gamma-Vega-Cornish-Fisher approach improves the forecasting of portfolio risk by taking into account both the Gamma risk and the Vega risk of the warrants. The calculation results were tested for significance with Kupiec's two-tailed test; the calculated VaRs of the warrants portfolio all pass the test at the 5% significance level.
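    The two standard ingredients of such a calculation, the Cornish-Fisher quantile adjustment from the four moments and a RiskMetrics-style EWMA variance, can be sketched as follows. This is a generic sketch, not the paper's algorithm: it omits the Gamma/Vega treatment specific to warrants, and the decay factor and toy return series are assumed values.

```python
from statistics import NormalDist
import numpy as np

def ewma_variance(returns, lam=0.94):
    """Exponentially weighted variance: recent returns dominate, and the
    influence of older returns decays geometrically (decay factor assumed)."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var

def cornish_fisher_var(returns, alpha=0.95, lam=0.94):
    """One-day VaR (positive loss fraction) with the Cornish-Fisher
    adjustment of the normal quantile for skewness and excess kurtosis."""
    r = np.asarray(returns, dtype=float)
    mu, sd = r.mean(), r.std()
    skew = ((r - mu) ** 3).mean() / sd ** 3
    exkurt = ((r - mu) ** 4).mean() / sd ** 4 - 3.0
    z = NormalDist().inv_cdf(1 - alpha)          # e.g. -1.645 at 95%
    z_cf = (z + (z ** 2 - 1) * skew / 6
              + (z ** 3 - 3 * z) * exkurt / 24
              - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    sigma = np.sqrt(ewma_variance(r, lam))
    return -(mu + z_cf * sigma)

rng = np.random.default_rng(0)
rets = rng.standard_t(df=5, size=500) * 0.01     # fat-tailed toy returns
print(cornish_fisher_var(rets, alpha=0.95))
```

    Kupiec's test then compares the realized number of VaR exceedances against the binomial count implied by the chosen confidence level.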

  12. Using time-dependent density functional theory in real time for calculating electronic transport

    NASA Astrophysics Data System (ADS)

    Schaffhauser, Philipp; Kümmel, Stephan

    2016-01-01

    We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.

  13. Vibrational properties, phonon spectrum and related thermal parameters of β-octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine: a theoretical study.

    PubMed

    Qian, Wen; Zhang, Weibin; Zong, Hehou; Gao, Guofang; Zhou, Yang; Zhang, Chaoyang

    2016-01-01

    The vibrational spectrum, phonon dispersion curve, and phonon density of states (DOS) of β-octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (β-HMX) crystal were obtained by molecular simulation and calculation. The peaks at low frequency (0-2.5 THz) are comparable with the experimental terahertz absorption, and the molecular vibrational modes are in agreement with previous reports. Thermodynamic properties including the Gibbs free energy, enthalpy, and heat capacity as functions of temperature were obtained from the calculated phonon spectrum. The heat capacity at normal temperature was calculated using a linear fitting method, with a result consistent with experiment. Graphical Abstract Phonon spectrum and heat capacity of β-octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine from DFT calculation.
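    The step from a phonon spectrum to thermodynamic functions uses the standard harmonic-oscillator formulas; for the heat capacity, each mode of frequency ω contributes kB x² eˣ/(eˣ−1)² with x = ħω/kBT, integrated against the DOS g(ω). The sketch below uses a Debye-like toy DOS (g ∝ ω² up to an assumed cutoff) in place of the computed β-HMX spectrum:

```python
import numpy as np

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J*s

# toy DOS standing in for the computed g(omega): Debye-like w^2 up to an
# assumed cutoff, normalized so that the integral of g dw equals 3N modes
N = 1                                        # formula units
w = np.linspace(1e11, 4e13, 2000)            # angular frequency grid (rad/s)
dw = w[1] - w[0]
g = w ** 2
g *= 3 * N / (g.sum() * dw)

def heat_capacity(T):
    """Harmonic C_v(T) in J/K: per-mode Einstein contribution weighted
    by the phonon density of states."""
    x = hbar * w / (kB * T)
    mode_cv = kB * x ** 2 * np.exp(x) / np.expm1(x) ** 2
    return (mode_cv * g).sum() * dw

# approaches the classical Dulong-Petit limit 3*N*kB at high temperature
print(heat_capacity(2000.0) / (3 * N * kB))
```

    The Gibbs free energy and enthalpy follow from the same mode sum with the corresponding harmonic expressions (zero-point plus ln-occupation terms).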

  14. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  15. Sixth-order wave aberration theory of ultrawide-angle optical systems.

    PubMed

    Lu, Lijun; Cao, Yiqing

    2017-10-20

    In this paper, we develop a sixth-order wave aberration theory of ultrawide-angle optical systems such as fisheye lenses. Based on the concepts and approach used to develop the wave aberration theory of plane-symmetric optical systems, we first derive the sixth-order intrinsic wave aberrations and the fifth-order ray aberrations; second, we present a method to calculate the pupil aberration of this kind of optical system to develop the extrinsic aberrations; third, the relation of aperture-ray coordinates between adjacent optical surfaces is fitted with a second-order polynomial to improve the calculation accuracy of the wave aberrations of a fisheye lens with a large acceptance aperture. Finally, the resultant aberration expressions are applied to two design examples of fisheye lenses; the calculated aberrations are compared with ray-tracing results from Zemax software to validate the expressions.

  16. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating a hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least a factor of eight. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method with several examples, showing that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  17. Numerical investigation of flow structure and pressure pulsation in the Francis-99 turbine during startup

    NASA Astrophysics Data System (ADS)

    Minakov, A.; Sentyabov, A.; Platonov, D.

    2017-01-01

    We performed numerical simulation of the flow in a laboratory model of a Francis hydroturbine during startup regimes. The numerical technique for calculating low-frequency pressure pulsations in the water turbine is based on the DES (k-ω Shear Stress Transport) turbulence model and the "frozen rotor" approach. The flow structure behind the turbine runner was analysed, showing the effect of the flow structure on the frequency and intensity of non-stationary processes in the flow path. Two versions of the inlet boundary conditions were considered. The first corresponded to the measured time dependence of the discharge; comparison of the calculation results with the experimental data shows a considerable delay of the discharge in this calculation. The second corresponded to a linear approximation of the time dependence of the discharge; this calculation shows good agreement with the experimental results.

  18. First-principles investigations on structural, elastic, electronic properties and Debye temperature of orthorhombic Ni3Ta under pressure

    NASA Astrophysics Data System (ADS)

    Li, Pan; Zhang, Jianxin; Ma, Shiyu; Jin, Huixin; Zhang, Youjian; Zhang, Wenyang

    2018-06-01

    The structural, elastic, electronic properties and Debye temperature of Ni3Ta under different pressures are investigated using a first-principles method based on density functional theory. Our calculated equilibrium lattice parameters at 0 GPa agree well with experimental and previous theoretical results. The calculated negative formation enthalpies and the elastic constants both indicate that Ni3Ta is stable under the pressures considered. The bulk modulus B, shear modulus G, Young's modulus E and Poisson's ratio ν are calculated by the Voigt-Reuss-Hill method. The relatively large B/G ratio indicates that Ni3Ta is ductile, and pressure further improves its ductility. In addition, the density of states and the charge density difference show that the stability of Ni3Ta is enhanced with increasing pressure. The Debye temperature ΘD calculated from the elastic moduli increases with pressure.
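    The post-processing chain from Hill-averaged moduli to the Debye temperature follows standard textbook formulas: E and ν from B and G, sound velocities from the moduli and density, and ΘD from the mean sound velocity. The numerical inputs below are illustrative assumptions of the right order of magnitude for Ni3Ta, not the paper's values.

```python
import math

h, kB, NA = 6.62607015e-34, 1.380649e-23, 6.02214076e23  # SI constants

def derived_properties(B, G, rho, M, n_atoms):
    """B, G in GPa; rho in kg/m^3; M = molar mass of the formula unit in
    kg/mol; n_atoms = atoms per formula unit. Returns (E, nu, Theta_D)."""
    E = 9 * B * G / (3 * B + G)                   # Young's modulus (GPa)
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))      # Poisson's ratio
    Bp, Gp = B * 1e9, G * 1e9                     # convert to Pa
    vl = math.sqrt((Bp + 4 * Gp / 3) / rho)       # longitudinal velocity (m/s)
    vt = math.sqrt(Gp / rho)                      # transverse velocity (m/s)
    vm = (3 / (2 / vt ** 3 + 1 / vl ** 3)) ** (1 / 3)   # mean velocity
    theta_D = (h / kB) * (3 * n_atoms * NA * rho
                          / (4 * math.pi * M)) ** (1 / 3) * vm
    return E, nu, theta_D

# hypothetical inputs: B, G, density assumed; M from atomic masses of Ni3Ta
E, nu, theta = derived_properties(B=220.0, G=80.0, rho=11500.0,
                                  M=0.357, n_atoms=4)
print(round(E, 1), round(nu, 3), round(theta, 1))
```

    The B/G ductility criterion mentioned in the abstract (Pugh's ratio, with B/G > 1.75 taken as ductile) is read directly off the same two Hill-averaged moduli.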

  19. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this...

    Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling

  20. Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin

    1993-01-01

    The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for projectiles. The FDNS code is used to compute such a flow and the results are compared to benchmark-quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack α = 0. Grid-independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.

  1. On the radiated EMI current extraction of dc transmission line based on corona current statistical measurements

    NASA Astrophysics Data System (ADS)

    Yi, Yong; Chen, Zhengying; Wang, Liming

    2018-05-01

    Corona discharge on DC transmission lines is the main source of the radiated electromagnetic interference (EMI) field in the vicinity of the lines. A joint time-frequency analysis technique is proposed to extract the radiated EMI current (excitation current) of DC corona from corona current statistical measurements. A reduced-scale experimental platform was set up to measure the statistical distributions of the current waveform parameters of an aluminum conductor, steel-reinforced (ACSR) line. Based on the measured results, the peak, root-mean-square and average values of the 0.5 MHz radiated EMI current, with 9 kHz and 200 Hz bandwidths, were calculated by the proposed technique and validated against the conventional excitation function method. Radio interference (RI) was calculated from the radiated EMI current, and a wire-to-plate platform was built to validate the RI computation results. The causes of the deviation between the computations and the measurements are analyzed in detail.

  2. MCNP-based computational model for the Leksell gamma knife.

    PubMed

    Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav

    2007-01-01

    We have focused on the use of the MCNP code for calculating Gamma Knife radiation field parameters with a homogeneous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on the EGS4 and PENELOPE codes as well as with the Leksell Gamma Knife treatment planning system, Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources at the same time. Within each beam, it considers the technical construction of the source, the source holder, the collimator system, the spherical phantom, and the surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along the basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' work except for the 4 mm collimator, where averaging over the scoring volume and statistical uncertainties strongly influence the calculated results. In general, all the results depend on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken with the fluctuations within the plateau, which can influence the normalization, and with the accuracy of determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement, as are the integral doses calculated in small calculation matrix volumes.
However, deviations in integral doses up to 50% can be observed for large volumes such as for the total skull volume. The differences observed in treatment of scattered radiation between the MC method and the LGP may be important in this case. We have also studied the influence of differential direction sampling of primary photons and have found that, due to the anisotropic sampling, doses around the isocenter deviate from each other by up to 6%. With caution about the details of the calculation settings, it is possible to employ the MCNP Monte Carlo code for independent verification of the Leksell Gamma Knife radiation field properties.

  3. New evaluation of thermal neutron scattering libraries for light and heavy water

    NASA Astrophysics Data System (ADS)

    Marquez Damian, Jose Ignacio; Granada, Jose Rolando; Cantargi, Florencia; Roubtsov, Danila

    2017-09-01

    In order to improve the design and safety of thermal nuclear reactors, and to verify criticality safety conditions in systems with significant amounts of fissile materials and water, it is necessary to perform high-precision neutron transport calculations and estimate the uncertainties of the results. These calculations are based on neutron interaction data distributed in evaluated nuclear data libraries. To improve the evaluations of the thermal scattering sub-libraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for hydrogen bound in light water, and for deuterium and oxygen bound in heavy water, in the ENDF-6 format from room temperature up to the critical temperatures of the molecular liquids. The new evaluations were generated with, and are processable by, NJOY99, NJOY-2012 with minor modifications (updates), and the new version NJOY-2016. The new TSL libraries are based on molecular dynamics simulations with GROMACS and on recent experimental data, and they improve the calculation of single neutron scattering quantities. In this work, we discuss the importance of taking self-diffusion in liquids into account to accurately describe neutron scattering at low neutron energies (the quasi-elastic peak problem). To improve the modeling of heavy water, it is important to take into account temperature-dependent static structure factors and to apply the Sköld approximation to the coherent inelastic components of the scattering matrix. The new set of scattering matrices and cross sections improves the calculation of thermal critical systems moderated and/or reflected by light/heavy water from the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook.
For example, the use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the cross sections for deuterium, results in an improvement of the C/E ratio in 48 out of 65 international benchmark cases calculated with the Monte Carlo code MCNP5, in comparison with the existing library based on the ENDF/B-VII.0 evaluation.

  4. MO-AB-BRA-03: Calorimetry-Based Absorbed Dose to Water Measurements Using Interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flores-Martinez, E; Malin, M; DeWerd, L

    2015-06-15

    Purpose: Interferometry-based calorimetry is a novel technique to measure radiation-induced temperature changes, allowing the measurement of absorbed dose to water (ADW) with no mechanical components placed in the radiation field. The technique also offers the possibility of obtaining 2D dose distributions. The goal of this investigation is to calorimetrically measure doses between 2.5 and 5 Gy over a single projection in a photon beam using interferometry and to compare the results with doses calculated using the TG-51 linac calibration. Methods: ADW was determined by measuring radiation-induced phase shifts (PSs) of light passing through water irradiated with a 6 MV photon beam. A 9×9×9 cm³ glass phantom filled with water and placed in an arm of a Michelson interferometer was irradiated with 300, 400, 500, and 600 monitor units. The whole system was thermally insulated to achieve sufficient passive temperature control. The depth of measurement was 4.5 cm with a field size of 7×7 cm². The intensity of the fringe pattern was monitored with a photodiode and used to calculate the time-dependent PS curve. Data were acquired 60 s before and after the irradiation. The radiation-induced PS was calculated by taking the difference of the pre- and post-irradiation drifts extrapolated to the midpoint of the irradiation. Results were compared to computed doses. Results: On average, calculated ADW values agreed with interferometry-measured values to within 9.5%. k=1 uncertainties were 4.3% for calculations and 14.7% for measurements. The dominant source of uncertainty for the measurements was a temperature drift of about 30 µK/s caused by heat conduction from the interferometer’s surroundings. Conclusion: This work presented the first absolute ADW measurements using interferometry in the dose range of linac-based radiotherapy. Future work to improve measurement reproducibility includes the implementation of active thermal control techniques.
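The drift-extrapolation step described above is simple enough to sketch: fit a linear drift to the pre- and post-irradiation phase data, then take the difference of the two fits at the irradiation midpoint. The following Python sketch uses synthetic numbers (the drift rate and step size are invented, not the paper's data).

```python
import numpy as np

def phase_shift_from_drifts(t_pre, ps_pre, t_post, ps_post, t_mid):
    """Fit linear drifts to the pre- and post-irradiation phase data and
    return their difference extrapolated to the irradiation midpoint t_mid."""
    pre = np.polyfit(t_pre, ps_pre, 1)     # (slope, intercept)
    post = np.polyfit(t_post, ps_post, 1)
    return np.polyval(post, t_mid) - np.polyval(pre, t_mid)

# Synthetic signal: a steady 0.002 rad/s thermal drift plus a 1.5 rad
# radiation-induced step (numbers invented for illustration)
t_pre = np.linspace(0.0, 60.0, 61)          # 60 s before irradiation
t_post = np.linspace(90.0, 150.0, 61)       # 60 s after irradiation
ps_pre = 0.002 * t_pre
ps_post = 0.002 * t_post + 1.5
ps = phase_shift_from_drifts(t_pre, ps_pre, t_post, ps_post, t_mid=75.0)
print(ps)  # recovers the 1.5 rad step
```

Extrapolating both fits to the same midpoint cancels any drift component that is common to the pre- and post-irradiation intervals.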

  5. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.

    PubMed

    Desai, Trunil S; Srivastava, Shireesh

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates, which can be used in strain analysis and design. Processing and analysis of labeling data for the calculation of fluxes and associated statistics is an essential part of MFA. However, various software packages currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit (EMU) framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software package that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
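The Monte-Carlo uncertainty estimation mentioned above can be illustrated with a toy model: fit fluxes to measurements by least squares, then refit many noise-perturbed copies of the measurements and take the standard deviation of the refitted fluxes. The linear map below is a stand-in for the EMU balance equations, not FluxPyt's actual API; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the EMU model equations: a linear map A from two free
# fluxes to four measurable labeling quantities (all values invented)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
v_true = np.array([1.2, 0.8])
sigma = 0.02                                  # assumed measurement noise
y_meas = A @ v_true + rng.normal(0.0, sigma, size=4)

def fit_fluxes(y):
    """Least-squares flux estimate for measurement vector y."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

v_hat = fit_fluxes(y_meas)

# Monte-Carlo statistics: refit after perturbing the measurements with
# their assumed noise level, then take the spread of the refitted fluxes
samples = np.array([fit_fluxes(y_meas + rng.normal(0.0, sigma, size=4))
                    for _ in range(2000)])
v_std = samples.std(axis=0)
print(v_hat, v_std)
```

The same resampling idea applies unchanged when the fit is a nonlinear EMU model solved iteratively; only `fit_fluxes` changes.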

  6. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses

    PubMed Central

    Desai, Trunil S.

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates, which can be used in strain analysis and design. Processing and analysis of labeling data for the calculation of fluxes and associated statistics is an essential part of MFA. However, various software packages currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit (EMU) framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software package that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347

  7. Electrostatic frequency maps for amide-I mode of β-peptide: Comparison of molecular mechanics force field and DFT calculations

    NASA Astrophysics Data System (ADS)

    Cai, Kaicong; Zheng, Xuan; Du, Fenfen

    2017-08-01

    The spectroscopy of amide-I vibrations has been widely utilized for understanding the dynamical structure of polypeptides. For the modeling of amide-I spectra, two frequency maps were built for a β-peptide analogue (N-ethylpropionamide, NEPA) in a number of solvents, within two different schemes: one based on a molecular mechanics force field (GM map) and one based on DFT calculations (GD map). During map parameterization, the electrostatic potentials on the amide unit originating from the solvent and the peptide backbone were correlated to the amide-I frequency shift from the gas phase to the solution phase. The GM map is easier to construct, with negligible computational cost, since the frequency calculations for the samples are purely force-field based, while the GD map employs sophisticated DFT calculations on representative solute-solvent clusters and brings insight into the electronic structures of solvated NEPA and its chemical environments. The results show that the amide-I frequencies predicted by the maps are sensitive to the solvation environment and exhibit characters specific to the map protocols, and the obtained vibrational parameters are in satisfactory agreement with experimental amide-I spectra of NEPA in the solution phase. Although maps based on different theoretical schemes have their respective advantages and disadvantages, the present maps show their potential for interpreting the amide-I spectra of β-peptides.
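A minimal sketch of how such an electrostatic frequency map is parameterized, assuming a linear relation between site potentials and the frequency shift (a common map form, though the paper may use a variant); all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: electrostatic potentials at 4 amide sites for
# 200 sampled solute-solvent configurations, plus reference frequency shifts
phi = rng.normal(0.0, 0.05, size=(200, 4))             # potentials (a.u.)
l_true = np.array([-2100.0, 1500.0, -800.0, 400.0])    # "true" map (cm^-1/a.u.)
dw_ref = phi @ l_true + rng.normal(0.0, 2.0, size=200) # reference shifts

# Map parameterization: least-squares fit of the coefficients l_i in
# dw = sum_i l_i * phi_i
l_fit, *_ = np.linalg.lstsq(phi, dw_ref, rcond=None)

def amide_i_frequency(phi_sites, w_gas=1717.0):
    """Map prediction: assumed gas-phase frequency plus shift (cm^-1)."""
    return w_gas + phi_sites @ l_fit

print(l_fit)
```

In the GM scheme the reference shifts come cheaply from force-field frequency calculations; in the GD scheme they come from DFT on solute-solvent clusters, but the fitting step is the same.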

  8. First principle study of structural, elastic and electronic properties of APt3 (A=Mg, Sc, Y and Zr)

    NASA Astrophysics Data System (ADS)

    Benamer, A.; Roumili, A.; Medkour, Y.; Charifi, Z.

    2018-02-01

    We report results obtained from first-principles calculations on APt3 compounds with A = Mg, Sc, Y, and Zr. Our results for the lattice parameter a are in good agreement with experimental data, with deviations of less than 0.8%. Single-crystal elastic constants are calculated, then polycrystalline elastic moduli (bulk, shear, and Young moduli, Poisson ratio, anisotropy factor) are presented. Based on the Debye model, the Debye temperature ϴD is calculated from the sound velocities Vl, Vt, and Vm. Band structure results show that the studied compounds are electrical conductors, with conduction dominated by Pt-d electrons. Different hybridisation states are observed between Pt-d and A-d orbitals. The study of the charge density distribution and the population analysis show the coexistence of ionic, covalent, and metallic bonds.
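The Debye-temperature step can be written out directly. A common form (assumed here; the paper may use an equivalent variant) combines Vl and Vt into a mean velocity Vm and applies ϴD = (ħ/kB) Vm (6π²n)^(1/3), with n the atomic number density. The input numbers below are roughly aluminum-like placeholders, not the APt3 data.

```python
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def mean_sound_velocity(vl, vt):
    """Mean velocity v_m from 3/v_m^3 = 2/v_t^3 + 1/v_l^3."""
    return (3.0 / (2.0 / vt**3 + 1.0 / vl**3)) ** (1.0 / 3.0)

def debye_temperature(vl, vt, n):
    """Theta_D = (hbar/kB) * v_m * (6*pi^2*n)^(1/3), n in atoms/m^3."""
    vm = mean_sound_velocity(vl, vt)
    return HBAR / KB * vm * (6.0 * math.pi**2 * n) ** (1.0 / 3.0)

# Roughly aluminum-like numbers, for illustration only (not the APt3 data):
vm = mean_sound_velocity(6420.0, 3040.0)     # m/s
theta = debye_temperature(6420.0, 3040.0, n=6.02e28)
print(vm, theta)  # a few km/s, a few hundred kelvin
```

Note how Vm is dominated by the transverse velocity, since it enters with twice the weight and is the smaller of the two.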

  9. Predicting performance of polymer-bonded Terfenol-D composites under different magnetic fields

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2009-09-01

    Considering the demagnetization effect, a model to calculate the magnetostriction of a single particle under an applied field is first created. Based on the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach to calculate the average magnetostriction of the composites under an arbitrary applied field, as well as at saturation, is developed by treating the particle magnetostriction as an eigenstrain. The results calculated by this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with an increase of particle aspect ratio and particle volume fraction, and with a decrease of the Young's modulus of the matrix. The influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fraction or particle aspect ratio. Experiments were done to verify the model; their results indicate that the model can provide only approximate results.

  10. Ionic and electronic transport properties in dense plasmas by orbital-free density functional theory

    DOE PAGES

    Sjostrom, Travis; Daligault, Jérôme

    2015-12-09

    We validate the application of our recent orbital-free density functional theory (DFT) approach, [Phys. Rev. Lett. 113, 155006 (2014)], for the calculation of ionic and electronic transport properties of dense plasmas. To this end, we calculate the self-diffusion coefficient, the viscosity coefficient, the electrical and thermal conductivities, and the reflectivity coefficient of hydrogen and aluminum plasmas. Very good agreement is found with orbital-based Kohn-Sham DFT calculations at lower temperatures. Because the computational costs of the method do not increase with temperature, we can produce results at much higher temperatures than are accessible by the Kohn-Sham method. Our results for warm dense aluminum at solid density are inconsistent with the recent experimental results reported by Sperling et al. [Phys. Rev. Lett. 115, 115001 (2015)].

  11. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the crop. The height of the plant, the number of stems, and the color of the leaves are well-known parameters that indicate rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis was proposed, based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, when the vegetation cover rate calculation depends on automatic binarization alone, the computed vegetation cover rate can decrease even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on automatic binarization processing and also refers to growth-hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the vegetation cover rates of both methods were compared with reference values obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.
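The baseline automatic-binarization step (without the growth-hysteresis correction the paper proposes) can be sketched as follows, using an Otsu threshold on an excess-green index; the index choice and the synthetic image are assumptions made for illustration, not the authors' exact pipeline.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Automatic binarization threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 weight per threshold
    w1 = 1.0 - w0
    csum = np.cumsum(p * centers)
    mu0 = csum / np.where(w0 > 0, w0, 1)
    mu1 = (csum[-1] - csum) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def cover_rate(rgb):
    """Fraction of pixels classified as plant by thresholding excess green."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b                   # excess-green index (assumed)
    t = otsu_threshold(exg.ravel())
    return float((exg > t).mean())

# Synthetic nadir image: ~30% "plant" pixels (green) on "soil" background
rng = np.random.default_rng(2)
img = np.full((100, 100, 3), 0.35) + rng.normal(0, 0.02, (100, 100, 3))
mask = rng.random((100, 100)) < 0.3
img[mask] = [0.2, 0.7, 0.2] + rng.normal(0, 0.02, (mask.sum(), 3))
cr = cover_rate(img)
print(cr)  # close to 0.3
```

The failure mode the paper addresses appears when the two classes are not well separated late in the season; a hysteresis rule that forbids the cover rate from dropping against the growth trend is one way to stabilize the threshold.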

  12. Applications of potential theory computations to transonic aeroelasticity

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1986-01-01

    Unsteady aerodynamic and aeroelastic stability calculations based upon transonic small disturbance (TSD) potential theory are presented. Results from the two-dimensional XTRAN2L code and the three-dimensional XTRAN3S code are compared with experiment to demonstrate the ability of TSD codes to treat transonic effects. The necessity of nonisentropic corrections to transonic potential theory is demonstrated. Dynamic computational effects resulting from the choice of grid and boundary conditions are illustrated. Unsteady airloads for a number of parameter variations including airfoil shape and thickness, Mach number, frequency, and amplitude are given. Finally, samples of transonic aeroelastic calculations are given. A key observation is the extent to which unsteady transonic airloads calculated by inviscid potential theory may be treated in a locally linear manner.

  13. Assessment of formulas for calculating critical concentration by the agar diffusion method.

    PubMed Central

    Drugeon, H B; Juvin, M E; Caillon, J; Courtieu, A L

    1987-01-01

    The critical concentration of antibiotic was calculated by using the agar diffusion method with disks containing different charges of antibiotic. It is currently possible to use different calculation formulas (based on Fick's law) devised by Cooper and Woodman (the best known) and by Vesterdal. The results obtained with the formulas were compared with the MIC results (obtained by the agar dilution method). A total of 91 strains and two cephalosporins (cefotaxime and ceftriaxone) were studied. The formula of Cooper and Woodman led to critical concentrations that were higher than the MIC, but concentrations obtained with the Vesterdal formula were closer to the MIC. The critical concentration was independent of method parameters (dilution, for example). PMID:3619419

  14. Validation of total skin electron irradiation (TSEI) technique dosimetry data by Monte Carlo simulation

    PubMed Central

    Borzov, Egor; Daniel, Shahar; Bar‐Deroma, Raquel

    2016-01-01

    Total skin electron irradiation (TSEI) is a complex technique which requires many nonstandard measurements and dosimetric procedures. The purpose of this work was to validate measured dosimetry data by Monte Carlo (MC) simulations using EGSnrc‐based codes (BEAMnrc and DOSXYZnrc). Our MC simulations consisted of two major steps. In the first step, the incident electron beam parameters (energy spectrum, FWHM, mean angular spread) were adjusted to match the measured data (PDD and profile) at SSD=100 cm for an open field. In the second step, these parameters were used to calculate dose distributions at the treatment distance of 400 cm. MC simulations of dose distributions from single and dual fields at the treatment distance were performed in a water phantom. Dose distribution from the full treatment with six dual fields was simulated in a CT‐based anthropomorphic phantom. MC calculations were compared to the available set of measurements used in clinical practice. For one direct field, MC calculated PDDs agreed within 3%/1 mm with the measurements, and lateral profiles agreed within 3% with the measured data. For the output factor (OF), the measured and calculated results agreed within 2%. The optimal angle of 17° was confirmed for the dual field setup. The MC‐calculated multiplication factor (B12‐factor), which relates the skin dose for the whole treatment to the dose from one calibration field, for setups with and without degrader was 2.9 and 2.8, respectively. The measured B12‐factor was 2.8 for both setups. The difference between calculated and measured values was within 3.5%. It was found that a degrader provides a more homogeneous dose distribution. The measured X‐ray contamination for the full treatment was 0.4%; this is compared to the 0.5% X‐ray contamination obtained with the MC calculation. 
    The feasibility of MC simulation in an anthropomorphic phantom for a full TSEI treatment was demonstrated and is reported for the first time in the literature. The results of our MC calculations were found to be in general agreement with the measurements, providing a promising tool for further studies of dose distribution calculations in TSEI. PACS number(s): 87.10.Rt, 87.55.K, 87.55.ne PMID:27455502

  15. Thermally activated switching at long time scales in exchange-coupled magnetic grains

    NASA Astrophysics Data System (ADS)

    Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.; Fal, T. J.

    2015-10-01

    Rate coefficients of the Arrhenius-Néel form are calculated for thermally activated magnetic moment reversal in dual-layer exchange-coupled composite (ECC) media based on the Langer formalism and are applied to study the sweep-rate dependence of M-H hysteresis loops as a function of the exchange coupling I between the layers. The individual grains are modeled as two exchange-coupled Stoner-Wohlfarth particles, from which the minimum energy paths connecting the minimum energy states are calculated using a variant of the string method, and the energy barriers and attempt frequencies are calculated as a function of the applied field. The resultant rate equations describing the evolution of an ensemble of noninteracting ECC grains are then integrated numerically in an applied field with constant sweep rate R = -dH/dt, and the magnetization is calculated as a function of the applied field H. M-H hysteresis loops are presented for a range of values of I for sweep rates 10⁵ Oe/s ≤ R ≤ 10¹⁰ Oe/s, and a figure of merit that quantifies the advantages of ECC media is proposed. M-H hysteresis loops are also calculated based on the stochastic Landau-Lifshitz-Gilbert equations for 10⁸ Oe/s ≤ R ≤ 10¹⁰ Oe/s and are shown to be in good agreement with those obtained from the direct integration of the rate equations. The results are also used to examine the accuracy of certain approximate models that reduce the complexity associated with the Langer-based formalism and provide some useful insight into the reversal process and its dependence on the coupling strength and sweep rate. Of particular interest is the clustering of minimum energy states that are separated by relatively low-energy barriers into "metastates." It is shown that while approximating the reversal process in terms of "metastates" results in little loss of accuracy, it can reduce the run time of a kinetic Monte Carlo (KMC) simulation of the magnetic decay of an ensemble of dual-layer ECC media by 2-3 orders of magnitude. 
The essentially exact results presented in this work for two coupled grains are analogous to the Stoner-Wohlfarth model of a single grain and serve as an important precursor to KMC-based simulation studies on systems of interacting dual layer ECC media.
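The direct integration of rate equations under a constant sweep rate can be illustrated with a single two-state grain with Arrhenius-Néel rates and Stoner-Wohlfarth-like barriers. This is a drastic simplification of the dual-layer ECC model above, and all parameters are in reduced units and invented; it only shows the mechanism by which slower sweeps yield smaller coercivity.

```python
import math

def hysteresis_branch(R, f0=1e11, beta=60.0, h_max=1.5, steps=20000):
    """Sweep the reduced field h from +h_max to -h_max at rate R (1/s) and
    integrate the two-state master equation with Arrhenius-Neel rates and
    Stoner-Wohlfarth-like barriers E = beta*(1 -+ h)^2, beta = K*V/(kB*T).
    Each step relaxes exactly toward the instantaneous equilibrium."""
    n_up = 1.0                                 # start saturated along +h
    dt = 2.0 * h_max / (R * steps)             # time per field step
    branch = []
    for i in range(steps):
        h = h_max - (i + 0.5) * 2.0 * h_max / steps
        k_out_up = f0 * math.exp(-beta * max(0.0, 1.0 + h) ** 2)
        k_out_dn = f0 * math.exp(-beta * max(0.0, 1.0 - h) ** 2)
        ktot = k_out_up + k_out_dn
        n_eq = k_out_dn / ktot                 # equilibrium "up" population
        n_up = n_eq + (n_up - n_eq) * math.exp(-ktot * dt)
        branch.append((h, 2.0 * n_up - 1.0))
    return branch

def coercivity(branch):
    """Reduced field magnitude where the magnetization first crosses zero."""
    for h, m in branch:
        if m <= 0.0:
            return -h
    return float("nan")

# Faster sweeps leave less time for thermal reversal -> larger coercivity
hc_fast = coercivity(hysteresis_branch(1e10))
hc_slow = coercivity(hysteresis_branch(1e6))
print(hc_fast, hc_slow)
```

The exact per-step relaxation (rather than explicit Euler) keeps the integration stable even when the rates greatly exceed the field-step frequency.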

  16. Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force

    NASA Astrophysics Data System (ADS)

    Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.

    2016-01-01

    The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.

  17. Virial Coefficients for the Liquid Argon

    NASA Astrophysics Data System (ADS)

    Korth, Micheal; Kim, Saesun

    2014-03-01

    We begin with a geometric model of hard colliding spheres and calculate probability densities in an iterative sequence of calculations that leads to the pair correlation function. The model is based on a kinetic theory approach developed by Shinomoto, to which we added an interatomic potential for argon based on the model of Aziz. From values of the pair correlation function at various densities, we were able to find virial coefficients of liquid argon. The low-order coefficients are in good agreement with theoretical hard-sphere coefficients, but appropriate data for argon to which these results might be compared are difficult to find.
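For reference, the hard-sphere comparison mentioned above can be sketched by evaluating the standard second-virial integral B2 = -2π ∫ (e^(-βu(r)) - 1) r² dr numerically; a Lennard-Jones potential stands in here for the Aziz argon potential, and all units are reduced.

```python
import math

def b2_from_potential(u, beta, r_max=30.0, n=20000):
    """B2(T) = -2*pi * integral of (exp(-beta*u(r)) - 1) * r^2 dr."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        f = math.exp(-beta * u(r)) - 1.0       # Mayer f-function
        w = 0.5 if i == n else 1.0             # trapezoidal end weight
        total += w * f * r * r
    return -2.0 * math.pi * total * dr

def u_hard_sphere(r, sigma=1.0):
    return float("inf") if r < sigma else 0.0

def u_lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones stand-in for the Aziz argon potential."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

# Hard spheres: B2 = 2*pi*sigma^3/3, independent of temperature
b2_hs = b2_from_potential(u_hard_sphere, beta=1.0)
# LJ at high reduced temperature (T* = 5, above the Boyle point): B2 > 0
b2_lj = b2_from_potential(u_lj, beta=0.2)
print(b2_hs, b2_lj)
```

The hard-sphere value gives a clean analytic check on the quadrature before the same machinery is applied to a realistic potential.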

  18. Solvent dependent triphenylamine based D-(pi-A)n type dye molecules and optical properties.

    PubMed

    Li, Xiaochuan; Son, Young-A; Kim, Young-Sung; Kim, Sung-Hoon; Kun, Jun; Shin, Jong-Il

    2012-02-01

    D-(pi-A)n type dyes of triphenylamine derivatives were synthesized, and their absorption and luminescence in different solvents were examined to investigate the solvent-dependent properties observed for their emissions in solvents with different dielectric constants. The emission wavelengths showed a dramatic blue shift with increasing solvent polarity. The results of molecular orbital calculations by computer simulation, based on the Materials Studio suite of programs, were found to reasonably account for the spectral properties. Relative levels of the HOMO and LUMO were measured and calculated, and all derivatives exhibited strong solid-state fluorescence with distinctively different FWHMs.

  19. Numerical modelling of a fibre reflection filter based on a metal–dielectric diffraction structure with an increased optical damage threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terentyev, V S; Simonov, V A

    2016-02-28

    Numerical modelling demonstrates the possibility of fabricating an all-fibre multibeam two-mirror reflection interferometer based on a metal–dielectric diffraction structure in its front mirror. The calculations were performed using eigenmodes of a double-clad single-mode fibre. The calculation results indicate that, by using a metallic layer in the structure of the front mirror of such an interferometer together with a diffraction effect, one can reduce the Ohmic loss by a factor of several tens in comparison with a continuous thin metallic film. (laser crystals and Bragg gratings)

  20. Linear Discriminant Analysis for the in Silico Discovery of Mechanism-Based Reversible Covalent Inhibitors of a Serine Protease: Application of Hydration Thermodynamics Analysis and Semi-empirical Molecular Orbital Calculation.

    PubMed

    Masuda, Yosuke; Yoshida, Tomoki; Yamaotsu, Noriyuki; Hirono, Shuichi

    2018-01-01

    We recently reported that the Gibbs free energy of hydrolytic water molecules (ΔG_wat) in acyl-trypsin intermediates calculated by hydration thermodynamics analysis could be a useful metric for estimating the catalytic rate constants (k_cat) of mechanism-based reversible covalent inhibitors. For thorough evaluation, the proposed method was tested with an increased number of covalent ligands that have no corresponding crystal structures. After modeling acyl-trypsin intermediate structures using flexible molecular superposition, ΔG_wat values were calculated according to the proposed method. The orbital energies of the antibonding π* molecular orbitals (MOs) of the carbonyl C=O in the covalently modified catalytic serine (E_orb) were also calculated by semi-empirical MO calculations. Then, linear discriminant analysis (LDA) was performed to build a model that can discriminate covalent inhibitor candidates from substrate-like ligands using ΔG_wat and E_orb. The model was built using a training set (10 compounds) and then validated with a test set (4 compounds). As a result, the training set and test set ligands were perfectly discriminated by the model. Hydrolysis was slower when (1) the hydrolytic water molecule had a lower ΔG_wat, and (2) the covalent ligand presented a higher E_orb (higher reaction barrier). Results also showed that the entropic term of the hydrolytic water molecule (-TΔS_wat) could be used for estimating k_cat and for covalent inhibitor optimization; when the rotational freedom of the hydrolytic water molecule is limited, the chance for favorable interaction with the electrophilic acyl group would also be limited. The method proposed in this study would be useful for screening and optimizing mechanism-based reversible covalent inhibitors.
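The two-descriptor LDA step can be sketched with a plain Fisher discriminant on synthetic (ΔG_wat, E_orb) points; the class means and spreads below are invented for illustration, not the paper's 10 training and 4 test compounds.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher linear discriminant: w = Sw^-1 (mu1 - mu0), with the decision
    threshold at the projected midpoint of the two class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    c = 0.5 * (mu0 + mu1) @ w
    return w, c

def classify(X, w, c):
    return (X @ w > c).astype(int)   # 1 = inhibitor-like, 0 = substrate-like

# Hypothetical points in (dG_wat, E_orb) space: inhibitor-like ligands have
# lower dG_wat (less reactive water) and higher E_orb (higher barrier)
rng = np.random.default_rng(3)
substrates = rng.normal([-2.0, 1.0], 0.4, size=(20, 2))
inhibitors = rng.normal([-5.0, 3.0], 0.4, size=(20, 2))
w, c = fisher_lda(substrates, inhibitors)
pred = classify(np.vstack([substrates, inhibitors]), w, c)
print(pred)
```

With only two descriptors, the learned direction w is easy to inspect: its components show how strongly each descriptor drives the substrate/inhibitor separation.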

  1. Theoretical rate constants of super-exchange hole transfer and thermally induced hopping in DNA.

    PubMed

    Shimazaki, Tomomi; Asai, Yoshihiro; Yamashita, Koichi

    2005-01-27

    Recently, the electronic properties of DNA have been extensively studied, because its conductivity is important not only to the study of fundamental biological problems, but also to the development of molecular-sized electronics and biosensors. We have studied theoretically the reorganization energies, the activation energies, the electronic coupling matrix elements, and the rate constants of hole transfer in B-form double-helix DNA in water. To accommodate the effects of DNA nuclear motions, a subset of reaction coordinates for hole transfer was extracted from classical molecular dynamics (MD) trajectories of DNA in water and then used for ab initio quantum chemical calculations of electronic coupling constants based on the generalized Mulliken-Hush model. A molecular mechanics (MM) method was used to determine the nuclear Franck-Condon factor. The rate constants for two types of hole-transfer mechanisms, thermally induced hopping (TIH) and super-exchange, were determined based on Marcus theory. We found that the calculated matrix elements are strongly dependent on the conformations of the nucleobase pairs of hole-transferable DNA and extend over a wide range of values for the "rise" base-step parameter but cluster around a particular value for the "twist" parameter. The calculated activation energies are in good agreement with experimental results. Whereas the rate constant for the TIH mechanism does not depend on the number of A-T nucleobase pairs that act as a bridge, the rate constant for the super-exchange process rapidly decreases as the length of the bridge increases. These characteristic trends in the calculated rate constants effectively reproduce those in the experimental data of Giese et al. [Nature 2001, 412, 318]. The calculated rate constants were also compared with the experimental results of Lewis et al. [Nature 2000, 406, 51].
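The Marcus-theory step can be written out explicitly. The sketch below implements the standard nonadiabatic Marcus expression; the coupling, reorganization energy, and driving force are illustrative values, not the DNA parameters computed in the paper.

```python
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K
EV = 1.602176634e-19     # J per eV

def marcus_rate(h_da_ev, lam_ev, dg_ev, temp=298.0):
    """Nonadiabatic Marcus rate constant (1/s):
    k = (2*pi/hbar) * |H_DA|^2 * (4*pi*lambda*kB*T)^(-1/2)
        * exp(-(dG + lambda)^2 / (4*lambda*kB*T))."""
    h2 = (h_da_ev * EV) ** 2
    lam, dg, kbt = lam_ev * EV, dg_ev * EV, KB * temp
    return (2.0 * math.pi / HBAR * h2
            / math.sqrt(4.0 * math.pi * lam * kbt)
            * math.exp(-((dg + lam) ** 2) / (4.0 * lam * kbt)))

# Illustrative values only: 0.01 eV coupling, 1 eV reorganization energy;
# dG = -lambda is the activationless (fastest) case
k_opt = marcus_rate(0.01, 1.0, -1.0)
k_act = marcus_rate(0.01, 1.0, 0.0)   # same coupling, nonzero barrier
print(k_opt, k_act)
```

The exponential factor carries the activation-energy dependence discussed in the abstract, while |H_DA|² carries the bridge-length decay of the super-exchange channel.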

  2. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10⁵ K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.
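The Saha-model branch of such a calculation is compact enough to sketch: for single ionization, solve x²/(1-x) = S(T)/n for the ionization fraction x. Using only aluminum's first ionization potential and the particular density below are simplifying assumptions for illustration; the paper's model handles multiple ionization stages.

```python
import math

ME = 9.1093837015e-31    # electron mass, kg
KB = 1.380649e-23        # J/K
H = 6.62607015e-34       # Planck constant, J*s
EV = 1.602176634e-19     # J per eV

def saha_ionization_fraction(temp, n, chi_ev, g_ratio=1.0):
    """Single-ionization fraction x from the Saha equation,
    x^2/(1-x) = S/n with
    S = 2*g_ratio*(2*pi*me*kB*T/h^2)^(3/2) * exp(-chi/(kB*T))."""
    s = (2.0 * g_ratio * (2.0 * math.pi * ME * KB * temp / H**2) ** 1.5
         * math.exp(-chi_ev * EV / (KB * temp)))
    a = s / n
    return 0.5 * (math.sqrt(a * a + 4.0 * a) - a)   # positive root of x^2+ax-a

# Aluminum's first ionization potential (5.99 eV); density is an assumption
n = 1e26   # heavy-particle number density, m^-3
x_cold = saha_ionization_fraction(5.0e3, n, 5.99)
x_hot = saha_ionization_fraction(2.0e4, n, 5.99)
print(x_cold, x_hot)   # ionization rises steeply with temperature
```

At still higher temperatures the Saha picture breaks down and a statistical model such as Thomas-Fermi takes over, which is exactly the hand-off described in the abstract.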

  3. Viscous flow calculations for the AGARD standard configuration airfoils with experimental comparisons

    NASA Technical Reports Server (NTRS)

    Howlett, James T.

    1989-01-01

    Recent experience in calculating unsteady transonic flow by means of viscous-inviscid interactions with the XTRAN2L computer code is examined. The boundary layer method for attached flows is based upon the work of Rizzetta. The nonisentropic corrections of Fuglsang and Williams are also incorporated along with the viscous interaction for some cases and initial results are presented. For unsteady flows, the inverse boundary layer equations developed by Vatsa and Carter are used in a quasi-steady manner and preliminary results are presented.

  4. D0 Silicon Upgrade: Lower Cleanroom Roof Quick Load Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucinski, Russ; /Fermilab

    1995-11-17

    This engineering note documents calculations done to determine the margin of safety for the lower clean room roof. The analysis was done to give a feeling for what the loads, stresses, and capacity of the roof are prior to the installation work to be done for the helium refrigerator upgrade. The result of this quick look showed that the calculated loads produce stress values and loads at about half the allowables. Based on this result, I do not think that special precautions beyond personal judgement are required for the installation work.

  5. Settling of Inclusions in Holding Furnaces: Modeling and Experimental Results

    NASA Astrophysics Data System (ADS)

    Sztur, C.; Balestreri, F.; Meyer, JL.; Hannart, B.

    Description of settling phenomena usually refers to particles falling in a liquid, following Stokes' law. But thermal convection always takes place in holding furnaces due to temperature heterogeneity, and the behaviour of the inclusions can be dramatically influenced by the motion of the liquid metal. A numerical model based on turbulent fluid flow calculations in a holding furnace and on trajectory calculations for a family of inclusions has been developed. Results are compared with experiments on a laboratory-scale and on an industrial-scale furnace. An analysis of the governing parameters will be presented.
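The Stokes'-law baseline that the convection model is compared against is a one-liner; the inclusion and melt properties below are typical literature-style values chosen for illustration, not the paper's data.

```python
def stokes_settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Stokes' law terminal velocity (m/s) for a small sphere of diameter d:
    v = g * d^2 * (rho_p - rho_f) / (18 * mu); positive means sinking."""
    return g * d * d * (rho_p - rho_f) / (18.0 * mu)

# Illustrative values for an alumina inclusion in liquid aluminum
v = stokes_settling_velocity(
    d=50e-6,          # inclusion diameter, m
    rho_p=3950.0,     # alumina density, kg/m^3
    rho_f=2375.0,     # liquid-aluminum density, kg/m^3
    mu=1.3e-3,        # melt viscosity, Pa*s
)
print(v)   # on the order of millimeters per second
```

Millimeter-per-second settling is easily overwhelmed by convective velocities in a holding furnace, which is why the trajectory model in the abstract couples the particles to the turbulent flow field.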

  6. An Upwind Multigrid Algorithm for Calculating Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.

    1993-01-01

    An algorithm is described that calculates inviscid, laminar, and turbulent flows on triangular meshes with an upwind discretization. A brief description of the base solver and the multigrid implementation is given, followed by results that consist mainly of convergence rates for inviscid and viscous flows over a NACA four-digit airfoil section. The results show that multigrid does accelerate convergence when the same relaxation parameters that yield good single-grid performance are used; however, larger gains in performance can be realized by doing less work in the relaxation scheme.

  7. Development of FullWave : Hot Plasma RF Simulation Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei

    2017-10-01

    A full-wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasma is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures, and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full-wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of the hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.

  8. Structural response to discrete and continuous gusts of an airplane having wing bending flexibility and a correlation of calculated and flight results

    NASA Technical Reports Server (NTRS)

    Houbolt, John C; Kordes, Eldon E

    1954-01-01

    An analysis is made of the structural response to gusts of an airplane having the degrees of freedom of vertical motion and wing bending flexibility and basic parameters are established. A convenient and accurate numerical solution of the response equations is developed for the case of discrete-gust encounter, an exact solution is made for the simpler case of continuous-sinusoidal-gust encounter, and the procedure is outlined for treating the more realistic condition of continuous random atmospheric turbulence, based on the methods of generalized harmonic analysis. Correlation studies between flight and calculated results are then given to evaluate the influence of wing bending flexibility on the structural response to gusts of two twin-engine transports and one four-engine bomber. It is shown that calculated results obtained by means of a discrete-gust approach reveal the general nature of the flexibility effects and lead to qualitative correlation with flight results. In contrast, calculations by means of the continuous-turbulence approach show good quantitative correlation with flight results and indicate a much greater degree of resolution of the flexibility effects.

  9. 6Li in a three-body model with realistic Forces: Separable versus nonseparable approach

    NASA Astrophysics Data System (ADS)

    Hlophe, L.; Lei, Jin; Elster, Ch.; Nogga, A.; Nunes, F. M.

    2017-12-01

    Background: Deuteron induced reactions are widely used to probe nuclear structure and astrophysical information. Those (d,p) reactions may be viewed as three-body reactions and described with Faddeev techniques. Purpose: Faddeev equations in momentum space have a long tradition of utilizing separable interactions in order to arrive at sets of coupled integral equations in one variable. However, it needs to be demonstrated that their solution based on separable interactions agrees exactly with solutions based on nonseparable forces. Methods: Momentum space Faddeev equations are solved with nonseparable and separable forces as coupled integral equations. Results: The ground state of 6Li is calculated via momentum space Faddeev equations using the CD-Bonn neutron-proton force and a Woods-Saxon type neutron(proton)-4He force. For the latter the Pauli-forbidden S-wave bound state is projected out. This result is compared to a calculation in which the interactions in the two-body subsystems are represented by separable interactions derived in the Ernst-Shakin-Thaler (EST) framework. Conclusions: We find that calculations based on the separable representation of the interactions and the original interactions give results that agree to four significant figures for the binding energy, provided that energy and momentum support points of the EST expansion are chosen independently. The momentum distributions computed in both approaches also fully agree with each other.

  10. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
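    The abstract does not give the normalization formula used; a standard way to refer burn rates measured at slightly different chamber pressures to a common reference pressure is Saint-Robert's (Vieille's) law, r = a·Pⁿ. The sketch below uses a purely illustrative pressure exponent, not a value from the study:

    ```python
    def normalize_burn_rate(r_meas, p_meas, p_ref, n=0.35):
        """Scale a measured burn rate to a common reference pressure via
        Saint-Robert's law r = a * P**n, so that motors fired at slightly
        different chamber pressures can be compared.  The exponent n is
        propellant-specific; 0.35 here is purely illustrative."""
        return r_meas * (p_ref / p_meas) ** n

    # Motor fired at 1050 psi with a 0.368 in/s rate, referred to 1000 psi.
    r_ref = normalize_burn_rate(0.368, 1050.0, 1000.0)
    print(f"{r_ref:.4f} in/s")  # → 0.3618 in/s
    ```

    With all rates referred to one pressure, within-mix dispersion can be attributed to the mix process variables rather than to pressure differences between motors.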

  11. Jobs and Renewable Energy Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterzinger, George

    2006-12-19

    Early in 2002, REPP developed the Jobs Calculator, a tool that calculates the number of direct jobs resulting from renewable energy development under RPS (Renewable Portfolio Standard) legislation or other programs to accelerate renewable energy development. The calculator is based on a survey of current industry practices to assess the number and type of jobs that will result from the enactment of an RPS. This project built upon and significantly enhanced the initial Jobs Calculator model by (1) expanding the survey to include other renewable technologies (the original model was limited to wind, solar PV and biomass co-firing technologies); (2) more precisely calculating the economic development benefits related to renewable energy development; (3) completing and regularly updating the survey of the commercially active renewable energy firms to determine the kinds and number of jobs directly created; and (4) developing and implementing a technology to locate where the economic activity related to each type of renewable technology is likely to occur. REPP worked directly with groups in the State of Nevada to interpret the results and develop policies to capture as much of the economic benefits as possible for the state through technology selection, training program options, and outreach to manufacturing groups.
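    The core of such a calculator is multiplying installed capacity by per-MW employment factors from the industry survey. The sketch below shows that structure only; the factor values are placeholders, not REPP's survey results:

    ```python
    # Illustrative job-impact factors (jobs per MW installed).  The real
    # REPP calculator derives these from industry surveys; the numbers
    # here are placeholders, not the survey results.
    JOB_FACTORS = {
        "wind":     {"manufacturing": 2.5, "installation": 1.2, "om": 0.4},
        "solar_pv": {"manufacturing": 5.0, "installation": 3.0, "om": 0.3},
    }

    def direct_jobs(technology: str, capacity_mw: float) -> float:
        """Direct jobs from building `capacity_mw` of `technology` under
        an RPS-style program: sum the per-MW factors, scale by capacity."""
        factors = JOB_FACTORS[technology]
        return capacity_mw * sum(factors.values())

    print(round(direct_jobs("wind", 100.0), 1))  # 100 MW * 4.1 jobs/MW
    ```

    Item (4) in the project, locating where the activity occurs, would further split each factor geographically (e.g. manufacturing jobs at component plants, installation and O&M jobs at the project site).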

  12. How to calculate H3 better.

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik

    2009-11-14

    Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H(3) where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm(-1)) obtained in the calculation with 1000 Gaussians are the most accurate results to date.

  13. Pressure algorithm for elliptic flow calculations with the PDF method

    NASA Technical Reports Server (NTRS)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  14. Static and dynamic structural-sensitivity derivative calculations in the finite-element-based Engineering Analysis Language (EAL) system

    NASA Technical Reports Server (NTRS)

    Camarda, C. J.; Adelman, H. M.

    1984-01-01

    The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
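    For a single axial bar, two of the three sensitivity methods the paper compares, analytical and finite-difference, can be illustrated in a few lines. The bar model and all numerical values below are illustrative, not the EAL examples from the paper:

    ```python
    def displacement(area, E=70e9, L=1.0, f=1000.0):
        """Tip displacement of an axial bar under end load: u = f*L/(E*A)."""
        return f * L / (E * area)

    def sensitivity_analytic(area, E=70e9, L=1.0, f=1000.0):
        """Exact derivative of displacement w.r.t. cross-sectional area:
        du/dA = -f*L / (E*A**2)."""
        return -f * L / (E * area ** 2)

    def sensitivity_fd(area, h=1e-8):
        """Forward finite-difference estimate of du/dA, the third method
        (after analytical and semianalytical) compared in the paper."""
        return (displacement(area + h) - displacement(area)) / h

    A = 1e-4  # cross-sectional area, m^2
    exact = sensitivity_analytic(A)
    approx = sensitivity_fd(A)
    print(abs(approx - exact) / abs(exact) < 1e-3)  # → True
    ```

    The trade-off reported in such comparisons is typical: finite differences need no extra code per response quantity but cost one reanalysis per design variable and are step-size sensitive, while analytical derivatives are cheaper and exact but require differentiating the element matrices.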

  15. Communication: Rotational excitation of HCl by H: Rigid rotor vs. reactive approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lique, François, E-mail: francois.lique@univ-lehavre.fr

    2015-06-28

    We report fully quantum time-independent calculations of cross sections for the collisional excitation of HCl by H, an astrophysically relevant process. Our calculations are based on the Bian-Werner ClH2 potential energy surface and include the possibility of HCl destruction through reactive collisions. The strongest collision-induced rotational HCl transitions are those with Δj = 1, and the magnitude of the HCl-H inelastic cross sections is of the same order of magnitude as the HCl-H2 ones. Results of exact calculations, i.e., including the reactive channels, are compared to pure inelastic calculations based on the rigid-rotor approximation. A very good agreement is found between the two approaches over the whole energy range 10–3000 cm−1. At the highest collisional energies, where the reaction takes place, the rigid-rotor approach slightly overestimates the cross sections, as expected. Hence, the rigid-rotor approach is found to be reliable at interstellar temperatures.

  16. The application of tailor-made force fields and molecular dynamics for NMR crystallography: a case study of free base cocaine

    PubMed Central

    Neumann, Marcus A.

    2017-01-01

    Motional averaging has been proven to be significant in predicting the chemical shifts in ab initio solid-state NMR calculations, and the applicability of motional averaging with molecular dynamics has been shown to depend on the accuracy of the molecular mechanical force field. The performance of a fully automatically generated tailor-made force field (TMFF) for the dynamic aspects of NMR crystallography is evaluated and compared with existing benchmarks, including static dispersion-corrected density functional theory calculations and the COMPASS force field. The crystal structure of free base cocaine is used as an example. The results reveal that, even though the TMFF outperforms the COMPASS force field for representing the energies and conformations of predicted structures, it does not give significant improvement in the accuracy of NMR calculations. Further studies should direct more attention to anisotropic chemical shifts and development of the method of solid-state NMR calculations. PMID:28250956

  17. Thermodynamic Properties and Transport Coefficients of Nitrogen, Hydrogen and Helium Plasma Mixed with Silver Vapor

    NASA Astrophysics Data System (ADS)

    Zhou, Xue; Cui, Xinglei; Chen, Mo; Zhai, Guofu

    2016-05-01

    The species compositions of Ag-N2, Ag-H2 and Ag-He plasmas in the temperature range of 3,000-20,000 K at a pressure of 1 atm were calculated by minimization of the Gibbs free energy. Thermodynamic properties and transport coefficients of nitrogen, hydrogen and helium plasmas mixed with varying amounts of silver vapor were then calculated based on the equilibrium compositions and collision integral data. The calculation procedure was verified by comparing the results obtained in this paper with published transport coefficients for pure nitrogen plasma. The influences of the silver vapor concentration on the compositions, thermodynamic properties and transport coefficients were finally analyzed and summarized for all three types of plasmas. These physical properties are important for theoretical study and numerical calculation of arc plasmas generated by silver-based electrodes in these gases in sealed electromagnetic relays and contacts. Supported by the National Natural Science Foundation of China (Nos. 51277038 and 51307030)

  18. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an inevitable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular, for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
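    The combination step of such a hybrid scheme can be sketched as a fast kernel convolution plus a precomputed Monte Carlo correction on the same grid. This is a toy 1-D sketch; the grid, kernel shape, and function names are illustrative assumptions, not the published algorithm:

    ```python
    import numpy as np

    def kernel_dose(fluence, kernel):
        """Kernel-based dose component: convolve the primary fluence with
        a dose-deposition kernel ('same' mode keeps the original grid)."""
        return np.convolve(fluence, kernel, mode="same")

    def hybrid_dose(fluence, kernel, mc_correction):
        """Hybrid-scheme sketch: cheap kernel convolution everywhere, plus
        a Monte-Carlo-derived correction where heterogeneities matter.
        `mc_correction` is assumed precomputed on the same grid."""
        return kernel_dose(fluence, kernel) + mc_correction

    # A single narrow microbeam (3-point-wide fluence peak) and a
    # normalized Gaussian scatter kernel; a zero MC correction stands in
    # for the homogeneous-water case.
    x = np.arange(-50, 51)
    fluence = np.where(np.abs(x) <= 1, 1.0, 0.0)
    kernel = np.exp(-x ** 2 / 20.0)
    kernel /= kernel.sum()
    dose = hybrid_dose(fluence, kernel, np.zeros_like(x, dtype=float))
    print(int(np.argmax(dose)) == 50, dose.max() < 1.0)  # peak on axis, smeared
    ```

    In an MRT setting the appeal of this split is that the expensive Monte Carlo part only needs to resolve the regions where the kernel model fails, while the convolution handles the fine peak-and-valley structure cheaply.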

  19. Lithium target performance evaluation for low-energy accelerator-based in vivo measurements using gamma spectroscopy.

    PubMed

    Aslam; Prestwich, W V; McNeill, F E

    2003-03-01

    The operating conditions at the McMaster KN Van de Graaff accelerator have been optimized to produce neutrons via the (7)Li(p, n)(7)Be reaction for in vivo neutron activation analysis. In a number of earlier studies (development of an accelerator based system for in vivo neutron activation analysis measurements of manganese in humans, Ph.D. Thesis, McMaster University, Hamilton, ON, Canada; Appl. Radiat. Isot. 53 (2000) 657; in vivo measurement of some trace elements in human bone, Ph.D. Thesis, McMaster University, Hamilton, ON, Canada), a significant discrepancy between the experimental and the calculated neutron doses was pointed out. The hypotheses formulated in the above references to explain the deviation of the experimental results from analytical calculations have been tested experimentally. The performance of the lithium target for neutron production has been evaluated by measuring the (7)Be activity produced as a result of the (p, n) interaction with (7)Li. In contradiction to the formulated hypotheses, lithium target performance was found to be mainly affected by inefficient target cooling and the presence of an oxide layer on the target surface. An appropriate choice of these parameters resulted in neutron yields the same as predicted by analytical calculations.

  20. Modified Laser Flash Method for Thermal Properties Measurements and the Influence of Heat Convection

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalia N.; Su, Ching-Hua; Lehoczky, Sandor L.

    2003-01-01

    The study examined the effect of natural convection when applying the modified laser flash method to measure thermal properties of semiconductor melts. The common laser flash method uses a laser pulse to heat one side of a thin circular sample and measures the temperature response of the other side. Thermal diffusivity can be calculated based on a heat conduction analysis. For a semiconductor melt, the sample is contained in a specially designed quartz cell with optical windows on both sides. When the laser heats the vertical melt surface, the resulting natural convection can introduce errors into a calculation based on the heat conduction model alone. The effect of natural convection was studied by CFD simulations, with experimental verification by temperature measurement. The CFD results indicated that natural convection decreases the time needed for the rear side to reach its peak temperature, and also slightly decreases the peak temperature in our experimental configuration. Using the experimental data, the calculation using only the heat conduction model resulted in a thermal diffusivity value about 7.7% lower than that from the model with natural convection. Specific heat capacity was about the same, with a difference within 1.6%, regardless of the heat transfer model.
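    For reference, the conduction-only analysis underlying the standard laser-flash method is Parker's relation, which converts the rear-face half-rise time into a diffusivity. The sample values below are illustrative, not data from the study:

    ```python
    def thermal_diffusivity(thickness_m, t_half_s):
        """Parker's laser-flash relation for an adiabatic slab with pure
        conduction: alpha = 0.1388 * L**2 / t_half, where t_half is the
        time for the rear face to reach half its peak temperature rise.
        Natural convection in a melt alters t_half, which biases a
        conduction-only estimate -- the error the study quantifies."""
        return 0.1388 * thickness_m ** 2 / t_half_s

    # 2 mm thick sample whose rear face reaches half its peak rise in 0.2 s
    alpha = thermal_diffusivity(2e-3, 0.2)
    print(f"{alpha:.3e} m^2/s")  # → 2.776e-06 m^2/s
    ```

    Correcting for convection amounts to replacing this closed-form relation with a fit of the measured rear-face history against a CFD model that includes the buoyancy-driven flow.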
