Sample records for comparative calculation method

  1. The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Cui, S. T.; Cummings, P. T.; Cochran, H. D.

This short commentary presents the results of long molecular dynamics simulations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the simulation time required in viscosity calculations. The computational time required for viscosity calculations of these systems by the Green-Kubo method is also compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero-strain-rate viscosity.
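The Green-Kubo estimator integrates the stress autocorrelation function over time. A minimal sketch of that estimator in reduced units (illustrative variable names, not the authors' code) might look like:

```python
import numpy as np

def green_kubo_viscosity(pxy, dt, volume, temperature, kb=1.0):
    """Shear viscosity via eta = V/(kB*T) * integral of <Pxy(0) Pxy(t)> dt.

    pxy: time series of one off-diagonal pressure-tensor component,
    sampled every dt (reduced units; kb defaults to 1).
    """
    n = len(pxy)
    # stress autocorrelation function, averaged over all time origins
    acf = np.correlate(pxy, pxy, mode="full")[n - 1:] / np.arange(n, 0, -1)
    # trapezoidal Green-Kubo integral; the converged value is the viscosity
    integral = float(np.sum(0.5 * (acf[1:] + acf[:-1])) * dt)
    return volume / (kb * temperature) * integral
```

In practice the integral must be truncated once the correlation function has decayed, which is exactly why the rotational and diffusional relaxation times discussed above are useful guides.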

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, L.; Zaslawsky, M.

Two approaches for calculating soil-structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained with the finite element method. They also suggest that closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. The total lack of suitable guidelines for implementing the lumped mass method of calculating SSI leads to the conclusion that the finite element method is generally superior for these calculations.

  3. Calculation and research of electrical characteristics of induction crucible furnaces with unmagnetized conductive crucible

    NASA Astrophysics Data System (ADS)

    Fedin, M. A.; Kuvaldin, A. B.; Kuleshov, A. O.; Zhmurko, I. Y.; Akhmetyanov, S. V.

    2018-01-01

Calculation methods for induction crucible furnaces with a conductive crucible have been reviewed and compared. A method for calculating the electrical and energy characteristics of furnaces with a conductive crucible has been developed, and an example calculation is presented. The calculation results are compared with experimental data. Dependences of the electrical and power characteristics of the furnace on frequency, inductor current, geometric dimensions and temperature have been obtained.

  4. Comparison of Dorris-Gray and Schultz methods for the calculation of surface dispersive free energy by inverse gas chromatography.

    PubMed

    Shi, Baoli; Wang, Yue; Jia, Lina

    2011-02-11

Inverse gas chromatography (IGC) is an important technique for the characterization of the surface properties of solid materials. A standard approach to surface characterization is first to determine the surface dispersive free energy of the solid stationary phase using a series of linear alkane liquids as molecular probes, and then to calculate the acid-base parameters from the dispersive parameters. For the calculation of the surface dispersive free energy, however, two different methods are generally used: the Dorris-Gray method and the Schultz method. In this paper, the results of the Dorris-Gray and Schultz methods are compared by calculating their ratio from the basic equations and parameters of each. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and that the ratio grows as the measurement temperature increases. Comparison with the parameters in solvent handbooks suggests that the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method are not sufficiently accurate, which is supported by a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.
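As a hedged sketch of the Dorris-Gray side of the comparison: the method takes the free energy of adsorption per CH2 group from the slope of the n-alkane retention series and squares it against a reference CH2 surface. The constants below (CH2 cross-sectional area, polyethylene surface energy) are common literature assumptions, not values from this paper:

```python
import math

R = 8.314        # gas constant, J/(mol K)
N_A = 6.022e23   # Avogadro's number, 1/mol
A_CH2 = 6.0e-20  # cross-sectional area of a CH2 group, m^2 (common assumption)

def dorris_gray_dispersive_energy(retention_ratio, temperature_k, gamma_ch2):
    """Surface dispersive free energy (J/m^2) by the Dorris-Gray method.

    retention_ratio: ratio of net retention volumes of two consecutive n-alkanes.
    gamma_ch2: surface free energy of a pure CH2 surface (polyethylene), J/m^2.
    """
    # free energy of adsorption per CH2 group from the alkane series
    dg_ch2 = R * temperature_k * math.log(retention_ratio)
    return dg_ch2 ** 2 / (4.0 * N_A ** 2 * A_CH2 ** 2 * gamma_ch2)
```

Because the result scales with the square of R*T*ln(ratio), the temperature dependence of the assumed constants drives the Dorris-Gray/Schultz ratio the abstract describes.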

  5. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and to test whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method computes the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with 1D and 3D tissue density corrections using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, by on average -5 ± 4.4 (SD) for MB and -4.7 ± 5 (SD) for ETAR.
Post-hoc Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
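The test sequence this abstract describes (normality and variance checks, then non-parametric paired comparisons and rank correlations) maps directly onto SciPy. The sketch below runs the same sequence on synthetic dose data shaped like the study's description; the numbers are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical monitor-unit doses for 62 fields under three algorithms
pbc = rng.normal(100.0, 10.0, 62)        # reference, no density correction
mb = pbc - rng.normal(5.0, 4.4, 62)      # 1D density correction (MB)
etar = pbc - rng.normal(4.7, 5.0, 62)    # 3D density correction (ETAR)

# normality (Shapiro-Wilk) and homogeneity of variance (Levene)
w_stat, normal_p = stats.shapiro(pbc)
lev_stat, levene_p = stats.levene(pbc, mb, etar)

# non-parametric tests: overall effect, then paired post-hoc comparison
fr_stat, friedman_p = stats.friedmanchisquare(pbc, mb, etar)
wil_stat, wilcoxon_p = stats.wilcoxon(pbc, mb)

# rank correlations between the doses from two methods
rho, rho_p = stats.spearmanr(pbc, mb)
tau, tau_p = stats.kendalltau(pbc, mb)
```

Friedman's test is the non-parametric analogue of a repeated-measures ANOVA across the three paired dose sets; the Wilcoxon signed-rank test then localizes which pairs differ.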

  6. Comparison of quantitatively analyzed dynamic area-detector CT using various mathematic methods with FDG PET/CT in management of solitary pulmonary nodules.

    PubMed

    Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Fujisawa, Yasuko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro

    2013-06-01

    The objective of our study was to prospectively compare the capability of dynamic area-detector CT analyzed with different mathematic methods and PET/CT in the management of pulmonary nodules. Fifty-two consecutive patients with 96 pulmonary nodules underwent dynamic area-detector CT, PET/CT, and microbacterial or pathologic examinations. All nodules were classified into the following groups: malignant nodules (n = 57), benign nodules with low biologic activity (n = 15), and benign nodules with high biologic activity (n = 24). On dynamic area-detector CT, the total, pulmonary arterial, and systemic arterial perfusions were calculated using the dual-input maximum slope method; perfusion was calculated using the single-input maximum slope method; and extraction fraction and blood volume (BV) were calculated using the Patlak plot method. All indexes were statistically compared among the three nodule groups. Then, receiver operating characteristic analyses were used to compare the diagnostic capabilities of the maximum standardized uptake value (SUVmax) and each perfusion parameter having a significant difference between malignant and benign nodules. Finally, the diagnostic performances of the indexes were compared by means of the McNemar test. No adverse effects were observed in this study. All indexes except extraction fraction and BV, both of which were calculated using the Patlak plot method, showed significant differences among the three groups (p < 0.05). Areas under the curve of total perfusion calculated using the dual-input method, pulmonary arterial perfusion calculated using the dual-input method, and perfusion calculated using the single-input method were significantly larger than that of SUVmax (p < 0.05). 
The accuracy of total perfusion (83.3%) was significantly greater than the accuracy of the other indexes: pulmonary arterial perfusion (72.9%, p < 0.05), systemic arterial perfusion calculated using the dual-input method (69.8%, p < 0.05), perfusion (66.7%, p < 0.05), and SUVmax (60.4%, p < 0.05). Dynamic area-detector CT analyzed using the dual-input maximum slope method has better potential for the diagnosis of pulmonary nodules than dynamic area-detector CT analyzed using other methods and than PET/CT.

  7. Neutron skyshine calculations with the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-10-01

    Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.

  8. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  9. Use of continuous and grab sample data for calculating total maximum daily load (TMDL) in agricultural watersheds.

    PubMed

    Gulati, Shelly; Stubblefield, Ashley A; Hanlon, Jeremy S; Spier, Chelsea L; Stringfellow, William T

    2014-03-01

    Measuring the discharge of diffuse pollution from agricultural watersheds presents unique challenges. Flows in agricultural watersheds, particularly in Mediterranean climates, can be predominately irrigation runoff and exhibit large diurnal fluctuation in both volume and concentration. Flow and pollutant concentrations in these smaller watersheds dominated by human activity do not conform to a normal distribution and it is not clear if parametric methods are appropriate or accurate for load calculations. The objective of this study was to compare the accuracy of five load estimation methods to calculate pollutant loads from agricultural watersheds. Calculation of loads using results from discrete (grab) samples was compared with the true-load computed using in situ continuous monitoring measurements. A new method is introduced that uses a non-parametric measure of central tendency (the median) to calculate loads (median-load). The median-load method was compared to more commonly used parametric estimation methods which rely on using the mean as a measure of central tendency (mean-load and daily-load), a method that utilizes the total flow volume (volume-load), and a method that uses measure of flow at the time of sampling (instantaneous-load). Using measurements from ten watersheds in the San Joaquin Valley of California, the average percent error compared to the true-load for total dissolved solids (TDS) was 7.3% for the median-load, 6.9% for the mean-load, 6.9% for the volume-load, 16.9% for the instantaneous-load, and 18.7% for the daily-load methods of calculation. The results of this study show that parametric methods are surprisingly accurate, even for data that have starkly non-normal distributions and are highly skewed. Copyright © 2013 Elsevier Ltd. All rights reserved.
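The competing estimators in this study reduce to different summaries of concentration and flow. A minimal sketch of three of them, under the simplifying assumption of evenly spaced continuous data (function names are illustrative, not from the paper):

```python
import numpy as np

def true_load(conc, flow, dt_hours):
    # reference load from continuous in situ data: sum of C(t)*Q(t)*dt
    return float(np.sum(conc * flow) * dt_hours)

def mean_load(grab_conc, grab_flow, total_hours):
    # parametric: mean concentration x mean flow x monitoring duration
    return float(np.mean(grab_conc) * np.mean(grab_flow) * total_hours)

def median_load(grab_conc, grab_flow, total_hours):
    # non-parametric: median as the measure of central tendency
    return float(np.median(grab_conc) * np.median(grab_flow) * total_hours)
```

On steady flows all three agree; the study's comparison is about how far each drifts from the true load when irrigation runoff makes concentration and flow diurnal and skewed.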

  10. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems, and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.

  11. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    PubMed Central

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  12. A semi-empirical method for calculating the pitching moment of bodies of revolution at low Mach numbers

    NASA Technical Reports Server (NTRS)

    Hopkins, Edward J

    1951-01-01

A semiempirical method for calculating the aerodynamic pitching moments of bodies of revolution, in which potential theory is arbitrarily combined with an approximate viscous theory, is presented. The method can also be used to calculate the lift and drag forces. The calculated and experimental force and moment characteristics of 15 bodies of revolution are compared.

  13. Multilevel fast multipole method based on a potential formulation for 3D electromagnetic scattering problems.

    PubMed

    Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome

    2013-06-01

    A combination of the multilevel fast multipole method (MLFMM) and boundary element method (BEM) can solve large scale photonics problems of arbitrary geometry. Here, MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie's theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.

  14. The comparison of fossil carbon fraction and greenhouse gas emissions through an analysis of exhaust gases from urban solid waste incineration facilities.

    PubMed

    Kim, Seungjin; Kang, Seongmin; Lee, Jeongwoo; Lee, Seehyung; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

In this study, to clarify the accurate calculation of greenhouse gas emissions from urban solid waste incineration facilities, which are major waste incineration facilities, and the problems likely to arise in doing so, emissions were calculated using three types of calculation method. For the comparison of calculation methods, the waste characteristics ratio, the dry substance content by waste characteristics, the carbon content of the dry substance, and the (12)C content were analyzed; in particular, the CO2 concentration in incineration gases and the (12)C content were analyzed together. Three calculation methods were constructed from the assay values, and each was used to calculate and compare the emissions of the urban solid waste incineration facilities. With Calculation Method A, which used the default values presented in the IPCC guidelines, greenhouse gas emissions were calculated for urban solid waste incineration facilities A and B at 244.43 ton CO2/day and 322.09 ton CO2/day, respectively. These values differed considerably from those of Calculation Methods B and C, which used the assay values of this study; this is attributed to the IPCC default value, a world average, failing to reflect the characteristics of the individual urban solid waste incineration facilities. Calculation Method B gave 163.31 ton CO2/day and 230.34 ton CO2/day, respectively, for facilities A and B, and Calculation Method C gave 151.79 ton CO2/day and 218.99 ton CO2/day, respectively. This study compares greenhouse gas emissions calculated using the (12)C content default value provided by the IPCC (Intergovernmental Panel on Climate Change) with emissions calculated using the (12)C content and waste assay values that reflect the characteristics of the target urban solid waste incineration facilities.
Also, the concentration and (12)C content were calculated by directly collecting incineration gases of the target urban solid waste incineration facilities, and greenhouse gas emissions of the target urban solid waste incineration facilities through this survey were compared with greenhouse gas emissions, which used the previously calculated assay value of solid waste.
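The IPCC-style calculation underlying all three methods chains waste throughput through dry-matter, carbon and fossil-carbon fractions, then converts carbon to CO2 by the 44/12 mass ratio. A hedged sketch of that chain (inputs illustrative, not the facilities' assay values):

```python
def co2_emissions_tpd(waste_tpd, dry_matter_frac, carbon_frac, fossil_frac,
                      oxidation_factor=1.0):
    """Daily fossil CO2 (ton CO2/day) from incinerated waste, following the
    shape of the IPCC 2006 waste-incineration equation:
    SW x dm x CF x FCF x OF x 44/12."""
    return (waste_tpd * dry_matter_frac * carbon_frac * fossil_frac
            * oxidation_factor * 44.0 / 12.0)
```

Swapping the IPCC default fractions for facility-specific assay values changes only the inputs to this product, which is why the three methods in the study diverge as much as the fractions do.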

  15. Comparison of carbon and biomass estimation methods for European forests

    NASA Astrophysics Data System (ADS)

    Neumann, Mathias; Mues, Volker; Harkonen, Sanna; Mura, Matteo; Bouriaud, Olivier; Lang, Mait; Achten, Wouter; Thivolle-Cazat, Alain; Bronisz, Karol; Merganicova, Katarina; Decuyper, Mathieu; Alberdi, Iciar; Astrup, Rasmus; Schadauer, Klemens; Hasenauer, Hubert

    2015-04-01

National and international reporting systems as well as research, enterprises and political stakeholders require information on the carbon stocks of forests. Terrestrial assessment systems such as forest inventory data, in combination with carbon calculation methods, are often used for this purpose. To assess the effect of the calculation method used, a comparative analysis was done using the carbon calculation methods from 13 European countries and the research plots from ICP Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests). These methods are applied to five European tree species (Fagus sylvatica L., Quercus robur L., Betula pendula Roth, Picea abies (L.) Karst. and Pinus sylvestris L.) using a standardized theoretical tree dataset to avoid biases due to data collection and sample design. The carbon calculation methods use allometric biomass and volume functions, carbon and biomass expansion factors, or a combination thereof. The results of the analysis show a high variation in the results for total tree carbon as well as for carbon in the single tree compartments. The same pattern is found when comparing the respective volume estimates. This is consistent for all five tree species, and the variation remains when the results are grouped according to the European forest regions. Possible explanations are differences in the sample material used for the biomass models, the model variables, or differences in the definition of tree compartments. The analysed carbon calculation methods have a strong effect on the results, both for single trees and for forest stands. To avoid biased and misleading conclusions, the calculation method must therefore be chosen carefully, accompanied by quality checks, and given particular consideration in comparative studies.
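The expansion-factor variant of these methods chains stem volume through wood density, a biomass expansion factor and a carbon fraction. A minimal sketch with illustrative parameter values (real factors are species- and country-specific, which is exactly the variation the study measures):

```python
def tree_carbon_kg(stem_volume_m3, wood_density_kg_m3, bef, carbon_fraction=0.5):
    """Tree carbon (kg) from stem volume:
    volume x basic wood density (stem volume -> stem biomass)
           x biomass expansion factor (stem -> whole tree)
           x carbon fraction (biomass -> carbon, commonly ~0.47-0.5)."""
    return stem_volume_m3 * wood_density_kg_m3 * bef * carbon_fraction
```

Because the result is a plain product, a 10% disagreement in any one factor propagates directly into a 10% disagreement in estimated carbon, which helps explain the high between-country variation reported above.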

  16. Comparison of Methodologies Using Estimated or Measured Values of Total Corneal Astigmatism for Toric Intraocular Lens Power Calculation.

    PubMed

    Ferreira, Tiago B; Ribeiro, Paulo; Ribeiro, Filomena J; O'Neill, João G

    2017-12-01

    To compare the prediction error in the calculation of toric intraocular lenses (IOLs) associated with methods that estimate the power of the posterior corneal surface (ie, Barrett toric calculator and Abulafia-Koch formula) with that of methods that consider real measures obtained using Scheimpflug imaging: a software that uses vectorial calculation (Panacea toric calculator: http://www.panaceaiolandtoriccalculator.com) and a ray tracing software (PhacoOptics, Aarhus Nord, Denmark). In 107 eyes of 107 patients undergoing cataract surgery with toric IOL implantation (Acrysof IQ Toric; Alcon Laboratories, Inc., Fort Worth, TX), predicted residual astigmatism by each calculation method was compared with manifest refractive astigmatism. Prediction error in residual astigmatism was calculated using vector analysis. All calculation methods resulted in overcorrection of with-the-rule astigmatism and undercorrection of against-the-rule astigmatism. Both estimation methods resulted in lower mean and centroid astigmatic prediction errors, and a larger number of eyes within 0.50 diopters (D) of absolute prediction error than methods considering real measures (P < .001). Centroid prediction error (CPE) was 0.07 D at 172° for the Barrett toric calculator and 0.13 D at 174° for the Abulafia-Koch formula (combined with Holladay calculator). For methods using real posterior corneal surface measurements, CPE was 0.25 D at 173° for the Panacea calculator and 0.29 D at 171° for the ray tracing software. The Barrett toric calculator and Abulafia-Koch formula yielded the lowest astigmatic prediction errors. Directly evaluating total corneal power for toric IOL calculation was not superior to estimating it. [J Refract Surg. 2017;33(12):794-800.]. Copyright 2017, SLACK Incorporated.

  17. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations for individual bands, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. Ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the choice of method. A simpler method is developed to reduce the computation time and the computer memory needed for storing the coefficients of the equations. The new method reduces the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet its results agree within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.
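All of the methods compared here approximate the same quantity, the photodissociation coefficient J = ∫ σ(λ) φ(λ) F(λ) dλ over the band region. A direct numerical sketch of that integral (synthetic spectra, not the paper's band model):

```python
import numpy as np

def photodissociation_rate(wavelength, cross_section, quantum_yield, actinic_flux):
    """J (1/s) as the wavelength integral of
    absorption cross section x quantum yield x attenuated actinic flux."""
    integrand = cross_section * quantum_yield * actinic_flux
    # trapezoidal rule over a possibly non-uniform wavelength grid
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])
                        * np.diff(wavelength)))
```

The simplified methods in the abstract trade this fine-grained integration for precomputed band-averaged coefficients, which is where the per-band discrepancies but good band-total agreement come from.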

  18. Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant

    ERIC Educational Resources Information Center

    Beach, Darrell H.

    1969-01-01

    Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC)
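The free-energy route the entry describes rests on the relation ΔG° = -RT ln K; a one-line sketch of the conversion:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_constant(delta_g_joules_per_mol, temperature=298.15):
    """K from the standard free energy change, via delta G = -R T ln K."""
    return math.exp(-delta_g_joules_per_mol / (R * temperature))
```

A negative ΔG° gives K > 1 (products favored); ΔG° = 0 gives exactly K = 1.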

  19. A comparison study of size-specific dose estimate calculation methods.

    PubMed

    Parikh, Roshni A; Wien, Michael A; Novak, Ronald D; Jordan, David W; Klahr, Paul; Soriano, Stephanie; Ciancibello, Leslie; Berlin, Sheila C

    2018-01-01

The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. The aim of this study was to compare the accuracy of thickness vs. weight measurement of body size for the calculation of SSDE in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE.
Manual methods based on patient thickness provide acceptable dose estimates for pediatric patients <30 cm in body width. Body weight provides a quick and practical method to identify conversion factors that can be used to estimate SSDE with reasonable accuracy in pediatric patients with body width ≥20 cm.
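As a hedged sketch of the common pipeline behind methods A, C and D: an effective diameter is formed from the patient's AP and lateral widths, and SSDE scales CTDIvol by a size-dependent conversion factor. The exponential fit below uses coefficients commonly quoted from AAPM Report 204 for the 32-cm body phantom; verify them against the report before any clinical use:

```python
import math

def effective_diameter_cm(ap_cm, lat_cm):
    # geometric mean of anteroposterior and lateral patient widths
    return math.sqrt(ap_cm * lat_cm)

def ssde_mgy(ctdi_vol_mgy, diameter_cm, a=3.704369, b=0.03671937):
    # SSDE = f(size) x CTDIvol, with f from the exponential fit
    # tabulated for the 32-cm phantom: f = a * exp(-b * d)
    return ctdi_vol_mgy * a * math.exp(-b * diameter_cm)
```

The conversion factor exceeds 1 for small patients and falls below 1 for large ones, which is why SSDE and CTDIvol diverge most strongly in pediatric cohorts like this one.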

  20. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find that calculated U values lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U, and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reaction energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  1. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    PubMed

    Olsen, Thomas

    2007-02-01

This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method, and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in an average error in PCI predictions of 0.46 D, which was significantly higher than the error with the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation of ACD prediction algorithms.

  2. Environment-based pin-power reconstruction method for homogeneous core calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-07-01

Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)

  3. Calculation of transonic flows using an extended integral equation method

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1976-01-01

    An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.

  4. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement hollow and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified method were conducted in accordance with the currently valid norm PN-EN ISO 13370:2008 (Thermal performance of buildings. Heat transfer via the ground. Calculation methods). Comparative estimates of transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  5. Calculation method for laser radar cross sections of rotationally symmetric targets.

    PubMed

    Cao, Yunhua; Du, Yongzhi; Bai, Lu; Wu, Zhensen; Li, Haiying; Li, Yanhui

    2017-07-01

    The laser radar cross section (LRCS) is a key parameter in the study of target scattering characteristics. In this paper, a practical method for calculating the LRCSs of rotationally symmetric targets is presented. Monostatic LRCSs for four kinds of rotationally symmetric targets (cone, rotating ellipsoid, super ellipsoid, and blunt cone) are calculated, and the results verify the feasibility of the method. Comparison with results from the triangular-patch method confirms the correctness of the proposed method and highlights several of its advantages: for instance, it does not require geometric modeling or patch discretization. The method uses a generatrix model and a double integral, and its calculation is concise and accurate. This work provides a theoretical basis for the rapid calculation of the LRCS of common basic targets.

  6. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-11-01

    Motivated by the recently proposed parallel orbital-updating approach in real space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.

  7. A method of solid-solid phase equilibrium calculation by molecular dynamics

    NASA Astrophysics Data System (ADS)

    Karavaev, A. V.; Dremov, V. V.

    2016-12-01

    A method for evaluating solid-solid phase equilibrium curves in molecular dynamics simulation for a given model of interatomic interaction is proposed. The method allows calculation of the entropies of crystal phases and provides an accuracy comparable with that of the thermodynamic integration method of Frenkel and Ladd, while being much simpler to implement and less computationally intensive. The accuracy of the proposed method was demonstrated in MD calculations of entropies for an EAM potential for iron and a MEAM potential for beryllium. The bcc-hcp equilibrium curves for iron calculated for the EAM potential by the thermodynamic integration method and by the proposed one agree quite well.

  8. A Method for Calculating Transient Surface Temperatures and Surface Heating Rates for High-Speed Aircraft

    NASA Technical Reports Server (NTRS)

    Quinn, Robert D.; Gong, Leslie

    2000-01-01

    This report describes a method that can calculate transient aerodynamic heating and transient surface temperatures at supersonic and hypersonic speeds. This method can rapidly calculate temperature and heating rate time-histories for complete flight trajectories. Semi-empirical theories are used to calculate laminar and turbulent heat transfer coefficients and a procedure for estimating boundary-layer transition is included. Results from this method are compared with flight data from the X-15 research vehicle, YF-12 airplane, and the Space Shuttle Orbiter. These comparisons show that the calculated values are in good agreement with the measured flight data.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; University of Missouri, Columbia, MO; Chen, H

    Purpose: Local noise power spectrum (NPS) properties are significantly affected by calculation variables and by CT acquisition and reconstruction parameters, but a thorough analysis of these effects has been absent. In this study, we performed a complete analysis of the effects of calculation and imaging parameters on the NPS. Methods: The uniformity module of a Catphan phantom was scanned with a Philips Brilliance 64-slice CT simulator using various scanning protocols. Images were reconstructed using both FBP and iDose4 reconstruction algorithms. From these images, local NPS were calculated for regions of interest (ROI) of varying locations and sizes, using four image background removal methods. Additionally, using a predetermined ground truth, NPS calculation accuracy for various calculation parameters was compared for computer-simulated ROIs. A complete analysis of the effects of calculation, acquisition, and reconstruction parameters on the NPS was conducted. Results: The local NPS varied with ROI size and image background removal method, particularly at low spatial frequencies. The image subtraction method was the most accurate according to the computer simulation study, and was also the most effective at removing low-frequency background components in the acquired data. However, first-order polynomial fitting using residual sum of squares and principal component analysis provided comparable accuracy under certain situations. Similar general trends were observed when comparing the NPS for FBP to that of iDose4 while varying other calculation and scanning parameters. However, while iDose4 reduces the noise magnitude compared to FBP, this reduction is spatial-frequency dependent, further affecting NPS variations at low spatial frequencies. Conclusion: The local NPS varies significantly depending on calculation parameters, image acquisition parameters, and reconstruction techniques. Appropriate local NPS calculation should be performed to capture spatial variations of noise; the calculation methodology should be selected with consideration of image reconstruction effects and the desired purpose of CT simulation for radiotherapy tasks.
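The quantity being varied throughout this study, the local NPS of a uniform ROI, can be sketched as follows. The mean-subtraction background removal and the normalization convention here are illustrative assumptions, not the study's exact pipeline:

```python
import numpy as np

def local_nps(roi, pixel_spacing=1.0):
    """2-D noise power spectrum of a uniform ROI. The ROI mean is
    removed as a crude background estimate before the DFT; a real
    analysis would subtract a paired image (image subtraction) or a
    fitted polynomial/PCA background, as compared in the study."""
    roi = np.asarray(roi, dtype=float)
    ny, nx = roi.shape
    detrended = roi - roi.mean()
    dft = np.fft.fftshift(np.fft.fft2(detrended))
    # Common normalization: NPS = |DFT|^2 * (dx * dy) / (Nx * Ny)
    return (np.abs(dft) ** 2) * (pixel_spacing ** 2) / (nx * ny)
```

By Parseval's theorem, the NPS integrated over frequency recovers the ROI noise variance, a useful sanity check on any chosen normalization.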

  10. Evaluation of three aging techniques and back-calculated growth for introduced Blue Catfish from Lake Oconee, Georgia

    USGS Publications Warehouse

    Homer, Michael D.; Peterson, James T.; Jennings, Cecil A.

    2015-01-01

    Back-calculation of length-at-age from otoliths and spines is a common technique employed in fisheries biology, but few studies have compared the precision of data collected with this method for catfish populations. We compared the precision of back-calculated lengths-at-age for an introduced Ictalurus furcatus (Blue Catfish) population among 3 commonly used cross-sectioning techniques. We used gillnets to collect Blue Catfish (n = 153) from Lake Oconee, GA. We estimated ages from a basal recess, articulating process, and otolith cross-section from each fish. We employed the Fraser-Lee method to back-calculate length-at-age for each fish, and compared the precision of back-calculated lengths among techniques using hierarchical linear models. Precision in age assignments was highest for otoliths (83.5%) and lowest for basal recesses (71.4%). Back-calculated lengths were variable among fish ages 1–3 for the techniques compared; otoliths and basal recesses yielded variable lengths at age 8. We concluded that otoliths and articulating processes are adequate for age estimation of Blue Catfish.
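The Fraser-Lee back-calculation used above has a simple closed form, L_i = c + (L_c - c) * (S_i / S_c). A sketch, with hypothetical variable names and units:

```python
def fraser_lee(length_capture, radius_capture, radius_at_age, intercept=0.0):
    """Fraser-Lee back-calculation of length at age i:
    L_i = c + (L_c - c) * (S_i / S_c), where L_c is body length at
    capture, S_c the total structure (otolith/spine) radius, S_i the
    radius at annulus i, and c the body-structure regression
    intercept. Units and example values are hypothetical."""
    return intercept + (length_capture - intercept) * (radius_at_age / radius_capture)
```

For example, a 400 mm fish with a 2.0 mm structure radius, a 1.0 mm radius at the first annulus, and an intercept of 40 mm back-calculates to 220 mm at age 1.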

  11. The effect of different calculation methods of flywheel parameters on the Wingate Anaerobic Test.

    PubMed

    Coleman, S G; Hale, T

    1998-08-01

    Researchers compared different methods of calculating the kinetic parameters of friction-braked cycle ergometers, and the subsequent effects on calculating power outputs in the Wingate Anaerobic Test (WAnT). Three methods of determining flywheel moment of inertia and frictional torque were investigated, requiring "run-down" tests and segmental geometry. The parameters were used to calculate corrected power outputs from 10 males in a 30-s WAnT against a load related to body mass (0.075 kg·kg⁻¹). Wingate indices of maximum (5 s) power, work, and fatigue index were also compared. Significant differences were found between uncorrected and corrected power outputs and between correction methods (p < .05). The same finding was evident for all Wingate indices (p < .05). The results suggest that the WAnT must be corrected to give true power outputs and that choosing an appropriate correction calculation is important. Determining flywheel moment of inertia and frictional torque using unloaded run-down tests is recommended.
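The correction at issue adds the power stored in (or released from) the flywheel, and frictional losses, to the braking power. A hedged sketch of that idea; the paper's exact correction terms and parameter values are not reproduced here:

```python
def corrected_power(load_force, rim_velocity, inertia, omega, alpha,
                    friction_torque=0.0):
    """Sketch of a corrected instantaneous power for a friction-braked
    ergometer: uncorrected braking power (load force x flywheel rim
    velocity) plus the flywheel inertial term I * alpha * omega and a
    frictional term T_f * omega. All inputs in SI units (N, m/s,
    kg*m^2, rad/s, rad/s^2, N*m); values below are hypothetical."""
    return load_force * rim_velocity + inertia * alpha * omega \
        + friction_torque * omega

# Hypothetical instant: 44.1 N load, 10 m/s rim speed, accelerating flywheel
p = corrected_power(44.1, 10.0, inertia=0.5, omega=20.0, alpha=2.0)
```

During acceleration the inertial term is positive (the uncorrected figure understates true power), which is why uncorrected 5-s peak power differs significantly from the corrected value.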

  12. A new approach for the calculation of response spectral density of a linear stationary random multidegree of freedom system

    NASA Astrophysics Data System (ADS)

    Sharan, A. M.; Sankar, S.; Sankar, T. S.

    1982-08-01

    A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
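For the single-degree-of-freedom special case, the response spectral density follows directly from the frequency response function, S_x(w) = |H(w)|^2 * S_F(w); the paper's auxiliary-variable formulation targets the multi-degree-of-freedom generalization. A scalar sketch for orientation (symbols m, c, k are the usual mass, damping, stiffness; not the paper's notation):

```python
import numpy as np

def response_psd(omega, s_force, m, c, k):
    """Displacement response PSD of a linear SDOF system under
    stationary random forcing with force PSD s_force:
    S_x(w) = |H(w)|^2 * S_F(w), with H(w) = 1 / (k - m w^2 + i c w)."""
    h = 1.0 / (k - m * omega ** 2 + 1j * c * omega)
    return np.abs(h) ** 2 * s_force
```

At zero frequency this reduces to the static result S_x = S_F / k^2, a quick consistency check.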

  13. The Calculation of Potential Energy Curves of Diatomic Molecules: The RKR Method.

    ERIC Educational Resources Information Center

    Castano, F.; And Others

    1983-01-01

    The RKR method for determining accurate potential energy curves is described. Advantages of using the method (compared to the Morse procedure) and a TRS-80 computer program which calculates the classical turning points by an RKR method are also described. The computer program is available from the author upon request. (Author/JN)

  14. Semiempirical and DFT Investigations of the Dissociation of Alkyl Halides

    ERIC Educational Resources Information Center

    Waas, Jack R.

    2006-01-01

    Enthalpy changes corresponding to the gas phase heats of dissociation of 12 organic halides were calculated using two semiempirical methods, the Hartree-Fock method, and two DFT methods. These calculated values were compared to experimental values where possible. All five methods agreed generally with the expected empirically known trends in the…

  15. The experimental and calculated characteristics of 22 tapered wings

    NASA Technical Reports Server (NTRS)

    Anderson, Raymond F

    1938-01-01

    The experimental and calculated aerodynamic characteristics of 22 tapered wings are compared, using tests made in the variable-density wind tunnel. The wings had aspect ratios from 6 to 12 and taper ratios from 1.6:1 to 5:1. The compared characteristics are the pitching moment, the aerodynamic-center position, the lift-curve slope, the maximum lift coefficient, and the curves of drag. The method of obtaining the calculated values is based on the use of wing theory and experimentally determined airfoil section data. In general, the experimental and calculated characteristics are in sufficiently good agreement that the method may be applied to many problems of airplane design.

  16. The mutual inductance calculation between circular and quadrilateral coils at arbitrary attitudes using a rotation matrix for airborne transient electromagnetic systems

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Wang, Hongyuan; Lin, Jun; Guan, Shanshan; Feng, Xue; Li, Suyi

    2014-12-01

    Performance testing and calibration of airborne transient electromagnetic (ATEM) systems are conducted to obtain the electromagnetic response of ground loops. It is necessary to accurately calculate the mutual inductance between transmitting coils, receiving coils and ground loops to compute the electromagnetic responses. Therefore, based on Neumann's formula and the measured attitudes of the coils, this study deduces the formula for the mutual inductance calculation between circular and quadrilateral coils, circular and circular coils, and quadrilateral and quadrilateral coils using a rotation matrix, and then proposes a method to calculate the mutual inductance between two coils at arbitrary attitudes (roll, pitch, and yaw). Using coil attitude simulated data of an ATEM system, we calculate the mutual inductance of transmitting coils and ground loops at different attitudes, analyze the impact of coil attitudes on mutual inductance, and compare the computational accuracy and speed of the proposed method with those of other methods using the same data. The results show that the relative error of the calculation is smaller and that the speed-up is significant compared to other methods. Moreover, the proposed method is also applicable to the mutual inductance calculation of polygonal and circular coils at arbitrary attitudes and is highly expandable.
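Neumann's formula, which the derivation starts from, discretizes naturally into a double sum over loop segments, and attitude (roll, pitch, yaw) enters simply by rotating the vertex coordinates beforehand. A numerical sketch of that idea, not the authors' closed-form expressions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def mutual_inductance(loop1, loop2):
    """Mutual inductance of two closed polyline loops via a discrete
    Neumann double sum: M = mu0/(4 pi) * sum_ij (dl_i . dl_j) / r_ij.
    Each loop is an (N, 3) array of vertices; arbitrary attitudes are
    handled by rotating the vertex arrays before calling this."""
    def segments(loop):
        nxt = np.roll(loop, -1, axis=0)
        return 0.5 * (loop + nxt), nxt - loop  # midpoints, dl vectors
    m1, d1 = segments(np.asarray(loop1, float))
    m2, d2 = segments(np.asarray(loop2, float))
    r = np.linalg.norm(m1[:, None, :] - m2[None, :, :], axis=-1)
    return MU0 / (4 * np.pi) * np.sum((d1 @ d2.T) / r)

def circle(radius, n=200, z=0.0):
    """Circular coil of given radius in a plane z = const."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([radius * np.cos(t),
                            radius * np.sin(t),
                            np.full(n, z)])
```

For two small coaxial circles far apart, the sum can be checked against the magnetic-dipole approximation M = mu0 * pi * a^2 * b^2 / (2 d^3); polygonal (e.g. quadrilateral) loops are just shorter vertex arrays.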

  17. Calculating osmotic pressure of glucose solutions according to ASOG model and measuring it with air humidity osmometry.

    PubMed

    Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong

    2016-09-01

    The osmotic pressure of glucose solution at a wide concentration range was calculated using ASOG model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with the well-established freezing point osmometry and ASOG model calculations at low concentrations and with only ASOG model calculations at high concentrations where no standard experimental method could serve as a reference for comparison. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations at a wide concentration range, while at low concentrations freezing point osmometry measurements provide better comparability with ASOG model calculations.

  18. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for the method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results quantitatively indicate the error correction capability of the TIF for different spatial-frequency errors under given polishing conditions. Comparative analysis shows that the optimized method is simpler in form and achieves the same accuracy as the previous method in less computation time.

  19. Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.

    ERIC Educational Resources Information Center

    Jackson, David P.

    1996-01-01

    Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…

  20. Ray-tracing in three dimensions for radiation-dose calculations. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, D.R.

    1986-05-27

    This thesis addresses several methods of calculating the radiation-dose distribution for use by technicians or clinicians in radiation-therapy treatment planning. It specifically covers the calculation of the effective pathlength of the radiation beam for use in beam models representing the dose distribution. A two-dimensional method by Bentley and Milan is compared to the method of strip trees developed by Duda and Hart, and a three-dimensional algorithm is then built to perform the calculations in three dimensions. The use of prisms conforms easily to the obtained CT scans and provides a means of doing only two-dimensional ray-tracing while performing three-dimensional dose calculations. This method is already being applied and used in actual calculations.

  1. Calculation of the Coulomb Fission Cross Sections for Pb-Pb and Bi-Pb Interactions at 158 A GeV

    NASA Technical Reports Server (NTRS)

    Poyser, William J.; Ahern, Sean C.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

    The Weizsacker-Williams (WW) method of virtual quanta is used to make approximate cross section calculations for peripheral relativistic heavy-ion collisions. We calculated the Coulomb fission cross sections for projectile ions of Pb-208 and Bi-209 with energies of 158 A GeV interacting with a Pb-208 target. We also calculated the electromagnetic absorption cross section for a Pb-208 ion interacting as described. For comparison we use both the full WW method and a standard approximate WW method. The approximate WW method results in larger cross sections compared to the more accurate full WW method.

  2. Methods for the Calculation of Settling Tanks for Batch Experiments; METODOS DE CALCULO DE ESPESADORES POR ENSAYOS DISCONTINUOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasos, P.; Perea, C.P.; Jodra, L.G.

    1957-01-01

    In order to calculate settling tanks, tests on batch sedimentation were made, and from the data obtained the dimensions of the settling tank were found. The mechanism of sedimentation is first briefly described, and then the factors involved in the calculation of the dimensions and the sedimentation velocity are discussed. The Coe and Clevenger method and the Kynch method were investigated experimentally and compared. The application of the calculations is illustrated. It is shown that the two methods gave markedly different results. (J.S.R.)

  3. Resonances for Symmetric Two-Barrier Potentials

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2011-01-01

    We describe a method for the accurate calculation of bound-state and resonance energies for one-dimensional potentials. We calculate the shape resonances for symmetric two-barrier potentials and compare them with those coming from the Siegert approximation, the complex scaling method and the box-stabilization method. A comparison of the…

  4. Frequency-domain multiscale quantum mechanics/electromagnetics simulation method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Lingyi; Yin, Zhenyu; Yam, ChiYung, E-mail: yamcy@yangtze.hku.hk, E-mail: ghc@everest.hku.hk

    A frequency-domain quantum mechanics and electromagnetics (QM/EM) method is developed. Compared with the time-domain QM/EM method [Meng et al., J. Chem. Theory Comput. 8, 1190–1199 (2012)], the newly developed frequency-domain QM/EM method can effectively capture the dynamic properties of electronic devices over a broader range of operating frequencies. The system is divided into QM and EM regions and solved in a self-consistent manner via updating the boundary conditions at the QM/EM interface. The calculated potential distributions and current densities at the interface are taken as the boundary conditions for the QM and EM calculations, respectively, which facilitates the information exchange between the QM and EM calculations and ensures that the potential, charge, and current distributions are continuous across the QM/EM interface. Via Fourier transformation, the dynamic admittance calculated from the time-domain and frequency-domain QM/EM methods is compared for a carbon-nanotube-based molecular device.

  5. A revised method for calculation of life expectancy tables from individual death records which provides increased accuracy at advanced ages.

    PubMed

    Mathisen, R W; Mazess, R B

    1981-02-01

    The authors present a revised method for calculating life expectancy tables for populations where individual ages at death are known or can be estimated. The conventional and revised methods are compared using data for U.S. and Hungarian males in an attempt to determine the accuracy of each method in calculating life expectancy at advanced ages. Means of correcting errors caused by age rounding, age exaggeration, and infant mortality are presented.
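For a complete (extinct) cohort of individual death records, life expectancy at age x is simply the mean remaining lifetime of those who reached x. A simplified sketch that omits the paper's corrections for age rounding, age exaggeration, and infant mortality:

```python
def life_expectancy_at(age_x, ages_at_death):
    """Life expectancy e_x from a list of individual ages at death
    (complete, extinct cohort): mean remaining years among those who
    survived to exact age x. The revised method in the paper adds
    corrections for data defects that this sketch ignores."""
    survivors = [a for a in ages_at_death if a >= age_x]
    if not survivors:
        raise ValueError("no individual survived to age %s" % age_x)
    return sum(a - age_x for a in survivors) / len(survivors)
```

Note how errors concentrated at advanced ages (e.g. age exaggeration) bias e_x most where the surviving sample is smallest, which is the regime the revised method targets.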

  6. An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines

    NASA Technical Reports Server (NTRS)

    Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng

    2014-01-01

    We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the values of the half-width for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model, which takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.

  7. Critical Analysis of Existing Recyclability Assessment Methods for New Products in Order to Define a Reference Method

    NASA Astrophysics Data System (ADS)

    Maris, E.; Froelich, D.

    The designers of products subject to the European regulations on waste have an obligation to improve the recyclability of their products from the very first design stages. The statutory texts refer to ISO standard 22628, which proposes a method to calculate vehicle recyclability. There are several scientific studies that propose other calculation methods as well. Yet the feedback from the CREER club, a group of manufacturers and suppliers expert in ecodesign and recycling, is that the product recyclability calculation method proposed in this standard is not satisfactory, since only a mass indicator is used, the calculation scope is not clearly defined, and common data on the recycling industry do not exist to allow comparable calculations to be made for different products. For these reasons, it is difficult for manufacturers to have access to a method and common data for calculation purposes.

  8. Applications of a General Finite-Difference Method for Calculating Bending Deformations of Solid Plates

    NASA Technical Reports Server (NTRS)

    Walton, William C., Jr.

    1960-01-01

    This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires the use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved in the course of the applications and which facilitate use of the method in conjunction with high-speed computing equipment.

  9. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of Rational Function Model (RFM) localization makes use of a large number of check points: the mean square error is computed by comparing calculated coordinates with known coordinates. This method comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate approaches the true value when the sample is large enough. This paper instead approaches the problem from the standpoint of survey adjustment, taking the law of propagation of error as the theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and of the method described in this paper are compared. The comparison confirms that the traditional method is feasible, and answers the question of its theoretical precision from the standpoint of survey adjustment.

  10. Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.

    PubMed

    Gremba, Allison; Weinberg, Seth M

    2018-05-09

    We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) for capturing digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and the technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods), suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
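The two agreement statistics used here have standard forms: TEM = sqrt(sum(d^2) / 2n) for paired trials, and an ICC from ANOVA mean squares. A sketch; the abstract does not specify which ICC model was used, so the one-way single-measure ICC(1,1) below is an assumption:

```python
import numpy as np

def tem(trial1, trial2):
    """Technical error of measurement for paired repeated measurements:
    sqrt(sum(d^2) / (2 n)), d being the paired differences."""
    d = np.asarray(trial1, float) - np.asarray(trial2, float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

def icc_1_1(ratings):
    """One-way random, single-measure ICC(1,1) for an (n subjects x
    k trials) array, from the usual ANOVA mean squares."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    msb = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between-subject
    msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))  # within
    return float((msb - msw) / (msb + (k - 1) * msw))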

  11. Heats of Segregation of BCC Binaries from ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2004-01-01

    We compare dilute-limit heats of segregation for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent LMTO-based parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation, while the ab initio calculations are performed without relaxation. Results are discussed within the context of a segregation model driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  12. A simple method for the fast calculation of charge redistribution of solutes in an implicit solvent model

    NASA Astrophysics Data System (ADS)

    Dias, L. G.; Shimizu, K.; Farah, J. P. S.; Chaimovich, H.

    2002-09-01

    We propose and demonstrate the usefulness of a method, defined as generalized Born electronegativity equalization method (GBEEM) to estimate solvent-induced charge redistribution. The charges obtained by GBEEM, in a representative series of small organic molecules, were compared to PM3-CM1 charges in vacuum and in water. Linear regressions with appropriate correlation coefficients and standard deviations between GBEEM and PM3-CM1 methods were obtained ( R=0.94,SD=0.15, Ftest=234, N=32, in vacuum; R=0.94,SD=0.16, Ftest=218, N=29, in water). In order to test the GBEEM response when intermolecular interactions are involved we calculated a water dimer in dielectric water using both GBEEM and PM3-CM1 and the results were similar. Hence, the method developed here is comparable to established calculation methods.

  13. The structure, vibrational spectra and nonlinear optical properties of the L-lysine × tartaric acid complex—Theoretical studies

    NASA Astrophysics Data System (ADS)

    Drozd, M.; Marchewka, M. K.

    2006-05-01

The room-temperature X-ray studies of the L-lysine × tartaric acid complex are not unambiguous: disorder of three carbon atoms in the L-lysine molecule is observed. A theoretical geometry study performed with DFT methods resolves most of the doubts connected with the crystallographic measurements. The theoretical vibrational frequencies and potential energy distribution (PED) of L-lysine × tartaric acid were calculated by the B3LYP method. The calculated frequencies were compared with experimentally measured IR spectra, and the complete assignment of the bands has been made on the basis of the calculated PED. Restricted Hartree-Fock (RHF) methods were used to calculate the hyperpolarizability of the investigated compound. The theoretical results are compared with the experimental value of β.

  14. Comparative evaluation of different methods for calculation of cerebral blood flow (CBF) in nonanesthetized rabbits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angelini, G.; Lanza, E.; Rozza Dionigi, A.

    1983-05-01

The measurement of cerebral blood flow (CBF) by extracranial detection of the radioactivity of ¹³³Xe injected into an internal carotid artery has proved to be of considerable value for the investigation of cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the ¹³³Xe clearance curves, including exponential analysis (two-component model), the initial slope method, and the stochastic method. The different methods of curve analysis were compared in order to evaluate their fit to the theoretical model. The initial slope and stochastic methods, compared with the biexponential model, underestimate the CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing these CBF values with those obtained from the whole curve; CBF values calculated with the shortened procedure are overestimated by 17%. A correlation exists between the "10 min" CBF values and the CBF calculated from the whole curve; in spite of that, the values are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the relative compartment, which suggests that these two parameters correspond to different biological entities.

  15. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. The two methods were additionally compared and their advantages and disadvantages discussed. One difference between the methods is the calculation of the variation components: the AIAG method calculates the variation components based on standard deviation (so the sum of the variation components does not total 100 %), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods in the professional community, future use, and acceptance by the manufacturing industry are also discussed. Nowadays, the AIAG method is the leading method in the industry.
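The difference between the two decompositions can be shown numerically. A sketch with invented variance components, illustrating that variance-based percentages total 100 % while standard-deviation-based percentages do not:

```python
import math

# Hypothetical variance components from a GRR study (variance units)
var_repeatability   = 4.0   # EV^2 (equipment variation)
var_reproducibility = 1.0   # AV^2 (appraiser variation)
var_part            = 20.0  # part-to-part variation

total_var = var_repeatability + var_reproducibility + var_part
components = [("EV", var_repeatability), ("AV", var_reproducibility),
              ("PV", var_part)]

# Honest GRR (EMP) style: percentages of total *variance* sum to 100 %
pct_var = {name: 100 * v / total_var for name, v in components}
print(round(sum(pct_var.values()), 1))  # 100.0

# AIAG style: percentages of total *standard deviation* exceed 100 %
total_sd = math.sqrt(total_var)
pct_sd = {name: 100 * math.sqrt(v) / total_sd for name, v in components}
print(round(sum(pct_sd.values()), 1))  # 149.4
```

The second sum exceeds 100 % because standard deviations, unlike variances, are not additive.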

  16. Program VSAERO theory document: A computer program for calculating nonlinear aerodynamic characteristics of arbitrary configurations

    NASA Technical Reports Server (NTRS)

    Maskew, Brian

    1987-01-01

    The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.

  17. Calculating Time-Integral Quantities in Depletion Calculations

    DOE PAGES

    Isotalo, Aarno

    2016-06-02

A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation, without a need for a reference solution.
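The tally-nuclide idea, augmenting the depletion system with extra variables whose end-of-step values are the time integrals of interest, can be sketched for the trivial case of a single decaying nuclide, where the augmented system is analytic. This is a toy illustration of the concept, not the paper's implementation:

```python
import math

# Single nuclide decaying at constant rate lam: dN/dt = -lam * N.
# Augment the system with a "tally" variable T with dT/dt = N(t);
# its end-of-step value is the time integral of N over the step.
lam, n0, dt = 0.3, 1000.0, 2.0

# Analytic solution of the augmented system at the end of the step
n_end = n0 * math.exp(-lam * dt)
tally = n0 * (1.0 - math.exp(-lam * dt)) / lam  # integral of N dt
step_average = tally / dt

# Cross-check the tally against fine-grained midpoint quadrature
steps = 100_000
quad = sum(n0 * math.exp(-lam * (i + 0.5) * dt / steps) * dt / steps
           for i in range(steps))
print(abs(tally - quad) < 1e-3)  # True
```

In a real depletion solver the same trick extends the burnup matrix with tally rows, so the chosen matrix-exponential algorithm (e.g. CRAM) produces the integrals alongside the end-of-step densities.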

  18. Using the Reliability Theory for Assessing the Decision Confidence Probability for Comparative Life Cycle Assessments.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2016-03-01

Comparative decision making is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process: it calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability, comparing the traditional Monte Carlo method with the first-order reliability method (FORM). The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors, which correspond to a sensitivity analysis in relation to the probability; the importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while reducing the computational time.
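The decision confidence probability itself is straightforward to estimate by brute-force Monte Carlo. A sketch with hypothetical impact distributions (the FORM approximation of the response surface is the paper's contribution and is not reproduced here):

```python
import random

random.seed(0)

def decision_confidence_mc(n_samples=100_000):
    """Monte Carlo estimate of P(impact_A < impact_B) for two options
    whose life-cycle impacts are uncertain (hypothetical normal
    distributions, invented for illustration)."""
    wins = 0
    for _ in range(n_samples):
        impact_a = random.gauss(10.0, 2.0)  # option A: mean 10, sd 2
        impact_b = random.gauss(12.0, 3.0)  # option B: mean 12, sd 3
        if impact_a < impact_b:
            wins += 1
    return wins / n_samples

p = decision_confidence_mc()
print(p)  # close to the analytic value Phi(2/sqrt(13)) ~ 0.71
```

The cost of this estimator is one full LCA evaluation per sample, which is exactly the expense FORM avoids by working on an approximated response surface.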

  19. Clothing Protection from Ultraviolet Radiation: A New Method for Assessment.

    PubMed

    Gage, Ryan; Leung, William; Stanley, James; Reeder, Anthony; Barr, Michelle; Chambers, Tim; Smith, Moira; Signal, Louise

    2017-11-01

    Clothing modifies ultraviolet radiation (UVR) exposure from the sun and has an impact on skin cancer risk and the endogenous synthesis of vitamin D. There is no standardized method available for assessing body surface area (BSA) covered by clothing, which limits generalizability between study findings. We calculated the body cover provided by 38 clothing items using diagrams of BSA, adjusting the values to account for differences in BSA by age. Diagrams displaying each clothing item were developed and incorporated into a coverage assessment procedure (CAP). Five assessors used the CAP and Lund & Browder chart, an existing method for estimating BSA, to calculate the clothing coverage of an image sample of 100 schoolchildren. Values of clothing coverage, inter-rater reliability and assessment time were compared between CAP and Lund & Browder methods. Both methods had excellent inter-rater reliability (>0.90) and returned comparable results, although the CAP method was significantly faster in determining a person's clothing coverage. On balance, the CAP method appears to be a feasible method for calculating clothing coverage. Its use could improve comparability between sun-safety studies and aid in quantifying the health effects of UVR exposure. © 2017 The American Society of Photobiology.

  20. A Generalized Weizsacker-Williams Method Applied to Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Ahern, Sean C.; Poyser, William J.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

A new "Generalized" Weizsacker-Williams method (GWWM) is used to calculate approximate cross sections for relativistic peripheral proton-proton collisions. Instead of a massless photon mediator, the method allows the mediator to have mass for short-range interactions. This method generalizes the Weizsacker-Williams method (WWM) from Coulomb interactions to GWWM for strong interactions. An elastic proton-proton cross section is calculated using GWWM with experimental data for the elastic p+p interaction, where the massive p+ is now the mediator. The resulting calculated cross section is compared to existing data for the elastic proton-proton interaction. A good approximate fit is found between the data and the calculation.

  1. Automated Transition State Theory Calculations for High-Throughput Kinetics.

    PubMed

    Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H

    2017-09-21

A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
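The canonical transition state theory step corresponds to the standard Eyring expression, k = (kB·T/h)·exp(-ΔG‡/RT). A minimal sketch with a hypothetical barrier height (this is the textbook formula, not AutoTST itself, which also handles tunneling corrections, symmetry, and partition functions):

```python
import math

# Physical constants (SI)
KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def tst_rate(delta_g_kjmol, temperature):
    """Canonical transition state theory (Eyring) rate constant in 1/s
    for a unimolecular reaction with free energy of activation
    delta_g_kjmol (kJ/mol) at the given temperature (K)."""
    prefactor = KB * temperature / H
    return prefactor * math.exp(-delta_g_kjmol * 1000.0 / (R * temperature))

# Hypothetical barrier of 80 kJ/mol at 298.15 K
k = tst_rate(80.0, 298.15)
print(f"{k:.2e} 1/s")  # on the order of 6e-2
```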

  2. Calculating length of gestation from the Society for Assisted Reproductive Technology Clinic Outcome Reporting System (SART CORS) database versus vital records may alter reported rates of prematurity.

    PubMed

    Stern, Judy E; Kotelchuck, Milton; Luke, Barbara; Declercq, Eugene; Cabral, Howard; Diop, Hafsatou

    2014-05-01

To compare length of gestation after assisted reproductive technology (ART) as calculated by three methods from the Society for Assisted Reproductive Technology Clinic Outcome Reporting System (SART CORS) and from vital records (birth and fetal death) in the Massachusetts Pregnancy to Early Life Longitudinal Data System (PELL). Historical cohort study. Database linkage analysis. Live or stillborn deliveries. None. ART deliveries were linked to live birth or fetal death certificates. Length of gestation in 7,171 deliveries from fresh autologous ART cycles (2004-2008) was calculated from SART CORS using three methods: M1 = outcome date - cycle start date; M2 = outcome date - transfer date + 17 days; and M3 = outcome date - transfer date + 14 days + day of transfer. Generalized estimating equation models were used to compare methods. Singleton and multiple deliveries were included. Overall prematurity (delivery <37 weeks) varied by method of calculation: M1, 29.1%; M2, 25.6%; M3, 25.2%; and PELL, 27.2%. The SART methods M1-M3 varied from PELL by ≥3 days in >45% of deliveries and by more than 1 week in >22% of deliveries, and the three methods also differed from one another. Estimates of preterm birth in ART vary depending on the source of data and the method of calculation, and some estimates may overestimate preterm birth rates for ART conceptions. Copyright © 2014 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
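The three SART CORS-based formulas are simple date arithmetic. A sketch using an invented delivery record (all dates hypothetical):

```python
from datetime import date

# Hypothetical ART cycle for one delivery
cycle_start     = date(2006, 1, 2)
transfer        = date(2006, 1, 20)
day_of_transfer = 3                  # e.g. a day-3 embryo transfer
outcome         = date(2006, 9, 28)

# The three SART CORS-based estimates described in the abstract
m1 = (outcome - cycle_start).days
m2 = (outcome - transfer).days + 17
m3 = (outcome - transfer).days + 14 + day_of_transfer

print(m1, m2, m3)   # 269 268 268 (gestation length in days)
print(m1 < 37 * 7)  # False: this delivery is not preterm under M1
```

Note that M2 and M3 agree whenever the transfer happens on day 3, since 14 + 3 = 17; the methods diverge for transfers on other days, which is one source of the differing prematurity rates.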

  3. Heats of Segregation of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  4. Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).

    PubMed

    Bag, Arijit; Ghorai, Pradip Kr

    2016-05-01

To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on various Quantitative Structure Activity Relationship (QSAR) models, which are empirical methods. The Cheng-Prusoff equation makes it possible in principle to compute IC50, but this is computationally very expensive, as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. Using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ), and reactivity descriptor (ω) of an inhibitor. We apply this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare the results with experimental values and with other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to those calculated using other methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
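For reference, the Cheng-Prusoff relation mentioned above is trivial to evaluate once the binding constant and assay conditions are known; the expense lies in computing Ki, not in the formula. A sketch of the competitive-inhibition form with invented numbers:

```python
def ki_from_ic50(ic50, substrate_conc, km):
    """Cheng-Prusoff relation for a competitive inhibitor:
    Ki = IC50 / (1 + [S]/Km)."""
    return ic50 / (1.0 + substrate_conc / km)

def ic50_from_ki(ki, substrate_conc, km):
    """Inverse relation: the IC50 implied by a binding constant Ki
    under the given assay conditions."""
    return ki * (1.0 + substrate_conc / km)

# Hypothetical assay: IC50 = 50 nM measured at [S] = 100 uM, Km = 25 uM
ki = ki_from_ic50(50.0, 100.0, 25.0)
print(ki)  # 10.0 (nM)
```

The round trip `ic50_from_ki(ki, 100.0, 25.0)` recovers the original 50 nM, which is the direction one would use if Ki came from a computed binding free energy.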

  5. Correlation and agreement between eplet mismatches calculated using serological, low-intermediate and high resolution molecular human leukocyte antigen typing methods.

    PubMed

    Fidler, Samantha; D'Orsogna, Lloyd; Irish, Ashley B; Lewis, Joshua R; Wong, Germaine; Lim, Wai H

    2018-03-02

Structural human leukocyte antigen (HLA) matching at the eplet level can be identified by HLAMatchmaker, which requires the entry of four-digit alleles. The aim of this study was to evaluate the agreement between eplet mismatches calculated by serological and two-digit typing methods compared to high-resolution four-digit typing. In a cohort of 264 donor/recipient pairs, measurement error was assessed using intra-class correlation to confirm the absolute agreement between the number of eplet mismatches at class I (HLA-A, -B, -C) and class II loci (HLA-DQ and -DR) calculated using serological or two-digit molecular typing compared to four-digit molecular typing. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches between the HLA typing methods was also determined. Intra-class correlation coefficients between serological and four-digit molecular typing were 0.969 (95% confidence interval [95% CI] 0.960-0.975) and 0.926 (95% CI 0.899-0.944) at class I and II, respectively, and 0.995 (95% CI 0.994-0.996) and 0.993 (95% CI 0.991-0.995), respectively, between two-digit and four-digit molecular typing. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches at class I and II loci was 4% and 16% for serological versus four-digit molecular typing, and 0% and 2% for two-digit versus four-digit molecular typing, respectively. In this small, predominantly Caucasian population there is, compared with serology, a high level of agreement in the number of eplet mismatches calculated using two-digit compared to four-digit molecular HLA typing, suggesting that two-digit typing may be sufficient for determining eplet mismatch load in kidney transplantation.

  6. Optimizing electrostatic field calculations with the Adaptive Poisson-Boltzmann Solver to predict electric fields at protein-protein interfaces II: explicit near-probe and hydrogen-bonding water molecules.

    PubMed

    Ritchie, Andrew W; Webb, Lauren J

    2014-07-17

We have examined the effects of including explicit, near-probe solvent molecules in a continuum electrostatics strategy using the linear Poisson-Boltzmann equation with the Adaptive Poisson-Boltzmann Solver (APBS) to calculate electric fields at the midpoint of a nitrile bond, both at the surface of a monomeric protein and when docked at a protein-protein interface. Results were compared to experimental vibrational absorption energy measurements of the nitrile oscillator. We examined three methods for selecting explicit water molecules: (1) all water molecules within 5 Å of the nitrile nitrogen; (2) the water molecule closest to the nitrile nitrogen; and (3) any single water molecule hydrogen-bonding to the nitrile. The correlation of absolute field strengths with experimental absorption energies was calculated, and it was observed that method 1 was an improvement only for the monomer calculations, while methods 2 and 3 were not significantly different from the purely implicit solvent calculations for all protein systems examined. Upon taking the difference in calculated electrostatic fields and comparing to the difference in absorption frequencies, we typically observed an increase in experimental correlation for all methods, with method 1 showing the largest gain, likely due to the improved absolute monomer correlations using that method. These results suggest that, unlike with quantum mechanical methods, when calculating absolute fields using entirely classical models, implicit solvent is typically sufficient, and additional work to identify hydrogen-bonding or nearest waters does not significantly impact the results. Although we observed that a sphere of solvent near the field of interest improved results for relative field calculations, it should not be considered a panacea for all situations.

  7. Adaptive Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Fasnacht, Marc

We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the method. The results demonstrate a high degree of consistency.

  8. A Numerical Method for Calculating the Wave Drag of a Configuration from the Second Derivative of the Area Distribution of a Series of Equivalent Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.

    1959-01-01

A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method are evaluated relative to another numerical method, which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers are comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method requires only up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution, and less time for a larger number of bodies.

  9. Robust sleep quality quantification method for a personal handheld device.

    PubMed

    Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol

    2014-06-01

The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize nonstationary noise. Sleep or wake status was decided on each axis, and the totals were summed to calculate sleep efficiency (SE), generally regarded as sleep quality. A sleep experiment with 14 participating subjects was carried out to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis: activity during sleep was recorded not only by the proposed method but also simultaneously by well-known commercial applications, and on different mattresses and locations, to verify reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparison applications by at least 11.41% in average accuracy and at least 6.10% in average bias; the average accuracy and average absolute bias error of the comparison applications were 76.33% and 17.52%, respectively.
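Sleep efficiency in this sense is simply the fraction of epochs scored as sleep out of all epochs in bed. A minimal sketch with invented per-epoch wake/sleep decisions (the paper's actual scoring comes from denoised accelerometer and gyroscope signals):

```python
def sleep_efficiency(epochs):
    """Sleep efficiency in percent, given a list of per-epoch
    decisions: 0 = wake, 1 = sleep (hypothetical scoring)."""
    return 100.0 * sum(epochs) / len(epochs)

# Hypothetical night of 8 epochs, two of them scored as wake
night = [1, 1, 0, 1, 1, 1, 0, 1]
print(sleep_efficiency(night))  # 75.0
```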

  10. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    NASA Astrophysics Data System (ADS)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver; the choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed, and the numerical results are compared with those obtained with the reduced-rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.

  11. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    NASA Astrophysics Data System (ADS)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions remains the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with predictions from the analytical expression. In addition, the proposed fractal dimension method is tested against micro-CT images of three sandstone cores, and the results are compared with fractal dimensions obtained by the box-counting algorithm. The test results also confirm a self-similar fractal range in sandstone when smaller pores are excluded.
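The box-counting algorithm used above for comparison can be sketched in a few lines: count occupied boxes N(s) at several scales s and fit log N(s) against log(1/s). The diagonal-line test set below is a sanity check (expected dimension close to 1), not the paper's micro-CT data:

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a 2-D point set by box
    counting: least-squares slope of log N(s) versus log(1/s)."""
    logs, logn = [], []
    for s in scales:
        # Set of occupied grid boxes at scale s
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logn.append(math.log(len(boxes)))
    n = len(scales)
    mx, my = sum(logs) / n, sum(logn) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
             / sum((a - mx) ** 2 for a in logs))
    return slope

# A straight line segment should give a dimension close to 1
line = [(i / 10000.0, i / 10000.0) for i in range(10000)]
d = box_counting_dimension(line, [0.1, 0.05, 0.02, 0.01])
print(round(d, 2))  # 1.0
```

For a real binary micro-CT slice the `points` would be the coordinates of pore voxels, and the fitted slope is taken only over the scale range where the log-log plot is linear (the self-similar range noted in the abstract).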

  12. Comparison of methods for developing the dynamics of rigid-body systems

    NASA Technical Reports Server (NTRS)

    Ju, M. S.; Mansour, J. M.

    1989-01-01

    Several approaches for developing the equations of motion for a three-degree-of-freedom PUMA robot were compared on the basis of computational efficiency (i.e., the number of additions, subtractions, multiplications, and divisions). Of particular interest was the investigation of the use of computer algebra as a tool for developing the equations of motion. Three approaches were implemented algebraically: Lagrange's method, Kane's method, and Wittenburg's method. Each formulation was developed in absolute and relative coordinates. These six cases were compared to each other and to a recursive numerical formulation. The results showed that all of the formulations implemented algebraically required fewer calculations than the recursive numerical algorithm. The algebraic formulations required fewer calculations in absolute coordinates than in relative coordinates. Each of the algebraic formulations could be simplified, using patterns from Kane's method, to yield the same number of calculations in a given coordinate system.

  13. An automated exploration of the isomerization and dissociation pathways of (E)-1,2-dichloroethene cations and anions

    NASA Astrophysics Data System (ADS)

    Kishimoto, Naoki; Nishi, Yuito

    2017-04-01

    Isomerization and dissociation pathways after the photoionization or electron attachment of (E)-1,2-dichloroethene were calculated with an automated exploration method utilizing a scaled hypersphere search of the anharmonic downward distortion following algorithm at the UB3LYP/6-311G(2d,d,p) level of theory. The potential energies of transition states and dissociation channels were calculated by a composite method ((RO)CBS-QB3) and compared with the breakdown diagrams and electron attachment spectra observed in previous spectroscopic studies. The results of single point calculations with several DFT and post-SCF methods are compared using the root mean square deviations from the (RO)CBS-QB3 energies for six states of anionic dichloroethene.

  14. A Method for Determining the Rate of Heat Transfer from a Wing or Streamline Body

    NASA Technical Reports Server (NTRS)

    Frick, Charles W; Mccullough, George B

    1945-01-01

    A method for calculating the rate of heat transfer from the surface of an airfoil or streamline body is presented. A comparison with the results of an experimental investigation indicates that the accuracy of the method is good. This method may be used to calculate the heat supply necessary for heat de-icing or in ascertaining the heat loss from the fuselage of an aircraft operating at great altitude. To illustrate the method, the total rate of heat transfer from an airfoil is calculated and compared with the experimental results.

  15. Comparison of the various methods for the direct calculation of the transmission functions of the 15-micron CO2 band with experimental data

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Various methods for calculating the transmission functions of the 15 micron CO2 band are described. The results of these methods are compared with laboratory measurements. It is found that program P4 provides the best agreement with experimental results on the average.

  16. Core follow calculation with the nTRACER numerical reactor and verification using power reactor measurement data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Y. S.; Joo, H. G.; Yoon, J. I.

The nTRACER direct whole-core transport code, which employs a planar-MOC-solution-based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments. (authors)

  17. Precipitating Condensation Clouds in Substellar Atmospheres

    NASA Technical Reports Server (NTRS)

    Ackerman, Andrew S.; Marley, Mark S.; Gore, Warren J. (Technical Monitor)

    2000-01-01

    We present a method to calculate vertical profiles of particle size distributions in condensation clouds of giant planets and brown dwarfs. The method assumes a balance between turbulent diffusion and precipitation in horizontally uniform cloud decks. Calculations for the Jovian ammonia cloud are compared with previous methods. An adjustable parameter describing the efficiency of precipitation allows the new model to span the range of predictions from previous models. Calculations for the Jovian ammonia cloud are found to be consistent with observational constraints. Example calculations are provided for water, silicate, and iron clouds on brown dwarfs and on a cool extrasolar giant planet.

  18. Assessment of an Euler-Interacting Boundary Layer Method Using High Reynolds Number Transonic Flight Data

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Maddalon, Dal V.

    1998-01-01

    Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.

  19. Temperature analysis with voltage-current time differential operation of electrochemical sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    A method for temperature analysis of a gas stream. The method includes identifying a temperature parameter of an affected waveform signal. The method also includes calculating a change in the temperature parameter by comparing the affected waveform signal with an original waveform signal. The method also includes generating a value from the calculated change which corresponds to the temperature of the gas stream.

  20. Prediction of Quality Change During Thawing of Frozen Tuna Meat by Numerical Calculation I

    NASA Astrophysics Data System (ADS)

    Murakami, Natsumi; Watanabe, Manabu; Suzuki, Toru

    A numerical calculation method has been developed to determine the optimum thawing method for minimizing the increase of metmyoglobin content (metMb%), an indicator of color change in frozen tuna meat during thawing. The calculation method consists of the following two steps: a) calculation of the temperature history in each part of the frozen tuna meat during thawing by the control volume method, under the assumption of one-dimensional heat transfer; and b) calculation of metMb% based on the combination of the calculated temperature history, the Arrhenius equation, and a first-order reaction equation for the rate of metMb% increase. Thawing experiments measuring the temperature history of frozen tuna meat were carried out under rapid and slow thawing conditions, and the calculated temperature histories and metMb% increases were consistent with the experimental data. The proposed simulation method would be useful for predicting the optimum thawing conditions in terms of metMb%.
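
    A minimal sketch of step b), assuming a first-order rate equation d(metMb%)/dt = k(T)·(100 − metMb%) with an Arrhenius rate constant; the pre-exponential factor and activation energy below are illustrative, not the paper's fitted values.

```python
import math

def metmb_percent(temps_K, dt_s, A, Ea, m0=0.0):
    # d(metMb%)/dt = k(T) * (100 - metMb%), with Arrhenius k(T) = A*exp(-Ea/(R*T)).
    # A (1/s) and Ea (J/mol) are illustrative placeholders.
    R = 8.314
    m = m0
    for T in temps_K:
        k = A * math.exp(-Ea / (R * T))
        m += k * (100.0 - m) * dt_s   # explicit Euler step over the history
    return m

# Rapid thawing spends less time in the warm range where the rate is high
# (one-minute steps; 253 K is frozen storage, 278 K is just above thawing):
rapid = metmb_percent([253.0] * 30 + [278.0] * 10, 60.0, A=1e5, Ea=6.0e4)
slow = metmb_percent([253.0] * 10 + [278.0] * 60, 60.0, A=1e5, Ea=6.0e4)
# slow thawing yields the larger metMb% increase
```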

  1. A comparative experimental and quantum chemical study on monomeric and dimeric structures of 3,5-dibromoanthranilic acid.

    PubMed

    Karabacak, Mehmet; Cinar, Mehmet

    2012-10-01

    This study presents the structural and spectroscopic characterization of 3,5-dibromoanthranilic acid with the help of experimental techniques (FT-IR, FT-Raman, UV, NMR) and quantum chemical calculations. The vibrational spectra of the title compound were recorded in the solid state with FT-IR and FT-Raman in the ranges of 4000-400 and 4000-50 cm(-1), respectively. The vibrational frequencies were also computed using the B3LYP method of DFT with the 6-311++G(d,p) basis set. The fundamental assignments were made on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanical (SQM) method. The (1)H, (13)C and DEPT NMR spectra were recorded in DMSO solution and calculated by the gauge-invariant atomic orbitals (GIAO) method. The UV absorption spectra of the compound were recorded in the range of 200-400 nm in ethanol, water and DMSO solutions. Solvent effects were calculated using time-dependent density functional theory and the CIS method. The ground-state geometry of the compound was predicted by the B3LYP method and compared with the crystallographic structures of similar compounds. All calculations were made for the monomeric and dimeric structures of the compound. Moreover, the molecular electrostatic potential (MEP) and thermodynamic properties were computed. Mulliken atomic charges of the neutral and anionic forms of the molecule were computed and compared with those of anthranilic acid. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. A method for calculating strut and splitter plate noise in exit ducts: Theory and verification

    NASA Technical Reports Server (NTRS)

    Fink, M. R.

    1978-01-01

    Portions of a four-year analytical and experimental investigation relative to noise radiation from engine internal components in turbulent flow are summarized. Spectra measured for such airfoils over a range of chord, thickness ratio, flow velocity, and turbulence level were compared with predictions made by an available rigorous thin-airfoil analytical method. This analysis included the effects of flow compressibility and source noncompactness. Generally good agreement was obtained. This noise calculation method for isolated airfoils in turbulent flow was combined with a method for calculating transmission of sound through a subsonic exit duct and with an empirical far-field directivity shape. These three elements were checked separately and were individually shown to give close agreement with data. This combination provides a method for predicting engine internally generated aft-radiated noise from radial struts and stators, and annular splitter rings. Calculated sound power spectra, directivity, and acoustic pressure spectra were compared with the best available data. These data were for noise caused by a fan exit duct annular splitter ring, larger-chord stator blades, and turbine exit struts.

  3. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    PubMed

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomogram (SPECT) without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). During quantitative estimation using ACZ, the mean CBF values of 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. 
While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation coefficients between the AQCEL and conventional methods were 0.973 and 0.986 for the normal and affected sides at rest, respectively, and 0.977 and 0.984 for the normal and affected sides after ACZ loading, respectively. The quality of images reconstructed using the application software AQCEL was superior to that obtained using the conventional method after ACZ loading, and high quantitative correlations were shown at rest and after ACZ loading. This software can be applied in clinical practice and is a useful tool for improving reproducibility and throughput.

  4. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
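
    For reference, the most common convention for these parameters (one of several compared in the study) computes LOD and LOQ from the blank standard deviation and the calibration slope:

```python
def lod_loq(sd_blank, slope):
    # Common ICH-style convention: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
    # where sigma is the SD of blank responses and S the calibration slope.
    return 3.3 * sd_blank / slope, 10.0 * sd_blank / slope

# Illustrative numbers, not taken from the cited methods:
lod, loq = lod_loq(sd_blank=0.12, slope=2.4)   # -> 0.165 and 0.5
```

    Other conventions estimate sigma from the residuals or the intercept of the calibration curve instead of blank replicates, which is one source of the discrepancies the study reports.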

  5. Fast calculation of the line-spread-function by transversal directions decoupling

    NASA Astrophysics Data System (ADS)

    Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra

    2016-07-01

    We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.

  6. Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating

    NASA Technical Reports Server (NTRS)

    Pickney, John T.; Milliken, Andrew H.

    2010-01-01

    An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method employs reading a database of pre-calculated orbital absorbed heats and adjusting those heats for desired orbit parameters. The approach differs from traditional Monte Carlo methods that are orbit based with a planet centered coordinate system. The database is based on a spacecraft centered coordinated system where the range of all possible sun and planet look angles are evaluated. In an example case 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes while a comparable Monte Carlo evaluation would have taken an estimated 26 hours

  7. A New Approach for the Calculation of Total Corneal Astigmatism Considering the Magnitude and Orientation of Posterior Corneal Astigmatism and Thickness.

    PubMed

    Piñero, David P; Caballero, María T; Nicolás-Albujer, Juan M; de Fez, Dolores; Camps, Vicent J

    2018-06-01

    To evaluate a new method of calculation of total corneal astigmatism based on Gaussian optics and the power design of a spherocylindrical lens (C) in the healthy eye and to compare it with keratometric (K) and power vector (PV) methods. A total of 92 healthy eyes of 92 patients (age, 17-65 years) were enrolled. Corneal astigmatism was calculated in all cases using K, PV, and our new approach C that considers the contribution of corneal thickness. An evaluation of the interchangeability of our new approach with the other 2 methods was performed using Bland-Altman analysis. Statistically significant differences between methods were found in the magnitude of astigmatism (P < 0.001), with the highest values provided by K. These differences in the magnitude of astigmatism were clinically relevant when K and C were compared [limits of agreement (LoA), -0.40 to 0.62 D], but not for the comparison between PV and C (LoA, -0.03 to 0.01 D). Differences in the axis of astigmatism between methods did not reach statistical significance (P = 0.408). However, they were clinically relevant when comparing K and C (LoA, -5.48 to 15.68 degrees) but not for the comparison between PV and C (LoA, -1.68 to 1.42 degrees). The use of our new approach for the calculation of total corneal astigmatism provides astigmatic results comparable to the PV method, which suggests that the effect of pachymetry on total corneal astigmatism is minimal in healthy eyes.
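
    The power vector (PV) method referred to above can be sketched with Thibos-style components: each surface's cylinder maps to (J0, J45), the components are summed, and the net cylinder is recovered. The surface powers below are illustrative, and corneal thickness (the extra ingredient of the C method) is not modeled.

```python
import math

def power_vector(cyl, axis_deg):
    # Thibos-style astigmatic components of a cylinder (diopters, degrees)
    a = math.radians(axis_deg)
    return -(cyl / 2.0) * math.cos(2.0 * a), -(cyl / 2.0) * math.sin(2.0 * a)

def from_components(j0, j45):
    # Recover cylinder magnitude and axis from summed components
    cyl = -2.0 * math.hypot(j0, j45)
    axis = math.degrees(0.5 * math.atan2(j45, j0)) % 180.0
    return cyl, axis

# Illustrative anterior and posterior surface cylinders (not study data):
j0a, j45a = power_vector(-1.00, 30.0)    # anterior: -1.00 D x 30
j0p, j45p = power_vector(-0.25, 120.0)   # posterior: -0.25 D x 120
net_cyl, net_axis = from_components(j0a + j0p, j45a + j45p)
# perpendicular axes partially cancel: net is -0.75 D x 30
```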

  8. Calculating competition in thinned northern hardwoods.

    Treesearch

    Sharon A. Winsauer; James A. Mattson

    1992-01-01

    Describes four methods of calculating competition to individual trees and compares their effectiveness in explaining the 3-year growth response of northern hardwoods after various mechanized thinning practices.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isotalo, Aarno

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.

  10. A new method for calculating ecological flow: Distribution flow method

    NASA Astrophysics Data System (ADS)

    Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei

    2018-04-01

    A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed for studying the ecological flow of rivers based on broadening kernel density estimation. The DFM is applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological methods for calculating ecological flow, flow evaluation methods, and calculated fish ecological flow requirements. Results show that the DFM considers the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distribution during the year. The method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high space-time applicability and application value.
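
    The kernel-density idea underlying the DFM can be sketched with a plain Gaussian KDE over a flow series; the paper's "broadening" estimator and its index are not reproduced here, and the flow values are invented.

```python
import math

def gaussian_kde(flows, h):
    # Plain Gaussian kernel density estimate over a runoff series with
    # bandwidth h; returns a callable density function.
    n = len(flows)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def density(q):
        return c * sum(math.exp(-0.5 * ((q - x) / h) ** 2) for x in flows)
    return density

# Invented monthly mean flows (m^3/s) for illustration:
monthly = [820, 900, 1150, 1300, 1500, 2600, 4200, 3900, 2800, 1700, 1100, 870]
density = gaussian_kde(monthly, h=300.0)
# the density peaks near the frequently occurring low flows, the kind of
# "most probable flow" a distribution-based method exploits
```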

  11. Calculation of the Maxwell stress tensor and the Poisson-Boltzmann force on a solvated molecular surface using hypersingular boundary integrals

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Hou, Tingjun; McCammon, J. Andrew

    2005-08-01

    The electrostatic interaction among molecules solvated in ionic solution is governed by the Poisson-Boltzmann equation (PBE). Here the hypersingular integral technique is used in a boundary element method (BEM) for the three-dimensional (3D) linear PBE to calculate the Maxwell stress tensor on the solvated molecular surface, and then the PB forces and torques can be obtained from the stress tensor. Compared with the variational method (also in a BEM frame) that we proposed recently, this method provides an even more efficient way to calculate the full intermolecular electrostatic interaction force, especially for macromolecular systems. Thus, it may be more suitable for the application of Brownian dynamics methods to study the dynamics of protein/protein docking as well as the assembly of large 3D architectures involving many diffusing subunits. The method has been tested on two simple cases to demonstrate its reliability and efficiency, and also compared with our previous variational method used in BEM.

  12. Surface Segregation Energies of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameterization. Quantum approximate segregation energies are computed with and without atomistic relaxation. The ab initio calculations are performed without relaxation for the most part, but relaxations predicted by the quantum approximate calculations are used in selected cases to compute approximate relaxed ab initio segregation energies. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with other quantum approximate and ab initio theoretical work, and with available experimental results.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok

    A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) from single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and has less noise. The method obtains the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of its interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
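
    The weighting rule described above (each central-difference second derivative weighted by the square of its voltage interval) can be sketched directly:

```python
def mcdm_second_derivative(f, v, intervals):
    # Weighted average of second-order central differences, each taken at a
    # different interval h and weighted by h**2, as the abstract describes.
    num = den = 0.0
    for h in intervals:
        d2 = (f(v + h) - 2.0 * f(v) + f(v - h)) / (h * h)
        num += h * h * d2
        den += h * h
    return num / den

# For a cubic the central difference is exact at every interval:
d2 = mcdm_second_derivative(lambda x: x ** 3, 2.0, [0.1, 0.2, 0.4])
# d2 is 12 (the true second derivative of x**3 at x = 2) up to rounding
```

    Favoring wider intervals by the h² weights is what suppresses the noise: small-interval differences amplify measurement noise as 1/h², while the wide intervals average over it.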

  14. Analysis of a boron-carbide-drum-controlled critical reactor experiment

    NASA Technical Reports Server (NTRS)

    Mayo, W. T.

    1972-01-01

    In order to validate methods and cross sections used in the neutronic design of compact fast-spectrum reactors for generating electric power in space, an analysis of a boron-carbide-drum-controlled critical reactor was made. For this reactor the transport analysis gave generally satisfactory results. The calculated multiplication factor for the most detailed calculation was only 0.7-percent Delta k too high. Calculated reactivity worth of the control drums was $11.61 compared to measurements of $11.58 by the inverse kinetics methods and $11.98 by the inverse counting method. Calculated radial and axial power distributions were in good agreement with experiment.

  15. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    Method calculates admittance from easily obtainable values. New method for calculating acoustic-liner admittance in rectangular duct with grazing flow based on finite-element discretization of acoustic field and reposing of unknown admittance value as linear eigenvalue problem on admittance value. Problem solved by Gaussian elimination. Unlike existing methods, present method extendable to mean flows with two-dimensional boundary layers as well. In presence of shear, results of method compared well with results of Runge-Kutta integration technique.

  16. The Use of a Software-Assisted Method to Estimate Fetal Weight at and Near Term Using Magnetic Resonance Imaging.

    PubMed

    Kadji, Caroline; De Groof, Maxime; Camus, Margaux F; De Angelis, Riccardo; Fellas, Stéphanie; Klass, Magdalena; Cecotti, Vera; Dütemeyer, Vivien; Barakat, Elie; Cannie, Mieke M; Jani, Jacques C

    2017-01-01

    The aim of this study was to apply a semi-automated calculation method of fetal body volume and, thus, of magnetic resonance-estimated fetal weight (MR-EFW) prior to planned delivery and to evaluate whether the technique of measurement could be simplified while remaining accurate. MR-EFW was calculated using a semi-automated method at 38.6 weeks of gestation in 36 patients and compared to the picture archiving and communication system (PACS). Per patient, 8 sequences were acquired with a slice thickness of 4-8 mm and an intersection gap of 0, 4, 8, 12, 16, or 20 mm. The median absolute relative errors for MR-EFW and the time of planimetric measurements were calculated for all 8 sequences and for each method (assisted vs. PACS), and the difference between the methods was calculated. The median delivery weight was 3,280 g. The overall median relative error for all 288 MR-EFW calculations was 2.4% using the semi-automated method and 2.2% for the PACS method. Measurements did not differ between the 8 sequences using the assisted method (p = 0.313) or the PACS (p = 0.118), while the time of planimetric measurement decreased significantly with a larger gap (p < 0.001) and in the assisted method compared to the PACS method (p < 0.01). Our simplified MR-EFW measurement showed a dramatic decrease in time of planimetric measurement without a decrease in the accuracy of weight estimates. © 2017 S. Karger AG, Basel.

  17. Accuracy Test of the OPLS-AA Force Field for Calculating Free Energies of Mixing and Comparison with PAC-MAC

    PubMed Central

    2017-01-01

    We have calculated the excess free energy of mixing of 1053 binary mixtures with the OPLS-AA force field using two different methods: thermodynamic integration (TI) of molecular dynamics simulations and the Pair Configuration to Molecular Activity Coefficient (PAC-MAC) method. PAC-MAC is a force-field-based quasi-chemical method for predicting the miscibility properties of binary mixtures. The TI calculations yield a root mean squared error (RMSE) of 0.132 kBT (0.37 kJ/mol) compared to experimental data. PAC-MAC shows an RMSE of 0.151 kBT with a calculation speed potentially 1.0 × 10(4) times greater than TI. OPLS-AA force field parameters are optimized using PAC-MAC based on vapor-liquid equilibrium data, instead of enthalpies of vaporization or densities. The RMSE of PAC-MAC is reduced to 0.099 kBT by optimizing 50 force field parameters. The resulting OPLS-PM force field has accuracy comparable to that of the OPLS-AA force field in the calculation of mixing free energies using TI. PMID:28418655

  18. Adaptive imaging through far-field turbulence

    NASA Astrophysics Data System (ADS)

    Troxel, Steven E.; Welsh, Byron M.; Roggemann, Michael C.

    1993-11-01

    This paper presents a new method for calculating the field-angle-dependent average OTF of an adaptive optics system and compares this method to calculations based on geometric optics. Geometric optics calculations are shown to be inaccurate due to the diffraction effects created by far-field turbulence and the approximations made in the atmospheric parameters. Our analysis includes diffraction effects and properly accounts for the effect of the atmospheric turbulence scale sizes. We show that for any atmospheric Cn(2) profile, the actual OTF is always better than the OTF calculated using geometric optics. The magnitude of the difference between the calculation methods is shown to depend on the amount of far-field turbulence and on the value of the outer scale dimension.

  19. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total amount of computational complexity severely increases with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that using the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  20. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation, the overlapping method and the probability-based method, were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be implemented with several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it ensures that the process indexes reach the standard within the acceptable probability threshold. In addition, there is no probability mutation at the edge of the design space with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
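
    The probability-based step can be sketched as a Monte Carlo loop that adds simulated experimental error to a process model and counts how often the standard is met; the yield model, noise level, and specification below are all hypothetical.

```python
import random

def prob_of_reaching_standard(predict, x, sd_error, spec_min,
                              n_sim=10000, seed=1):
    # Monte Carlo estimate of the probability that a run at setting x meets
    # the specification, with experimental error drawn as Gaussian noise.
    rng = random.Random(seed)
    hits = sum(predict(x) + rng.gauss(0.0, sd_error) >= spec_min
               for _ in range(n_sim))
    return hits / n_sim

# Hypothetical extraction-yield model and specification (not the paper's):
yield_model = lambda x: 0.6 + 0.3 * x
p = prob_of_reaching_standard(yield_model, 0.8, sd_error=0.05, spec_min=0.8)
# a point belongs to the design space only if p clears the chosen
# probability threshold (e.g. 0.9), so this x would be excluded
```

    Repeating this over a grid of candidate settings (with the chosen step length) and thresholding the probabilities traces out the design space boundary without the sharp edge of the overlapping method.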

  1. Calculating regional tissue volume for hyperthermic isolated limb perfusion: Four methods compared.

    PubMed

    Cecchin, D; Negri, A; Frigo, A C; Bui, F; Zucchetta, P; Bodanza, V; Gregianin, M; Campana, L G; Rossi, C R; Rastrelli, M

    2016-12-01

    Hyperthermic isolated limb perfusion (HILP) can be performed as an alternative to amputation for soft tissue sarcomas and melanomas of the extremities. Melphalan and tumor necrosis factor-alpha are used at a dosage that depends on the volume of the limb. Regional tissue volume is traditionally measured for the purposes of HILP using water displacement volumetry (WDV). Although this technique is considered the gold standard, it is time-consuming and complicated to implement, especially in obese and elderly patients. The aim of the present study was to compare the different methods described in the literature for calculating regional tissue volume in the HILP setting, and to validate an open-source software tool. We reviewed the charts of 22 patients (11 males and 11 females) who had non-disseminated melanoma with in-transit metastases or sarcoma of the lower limb. We calculated the volume of the limb using four different methods: WDV, tape measurements, and segmentation of computed tomography images using the Osirix and Oncentra Masterplan software packages. The overall comparison provided a concordance correlation coefficient (CCC) of 0.92 for the calculations of whole limb volume. In particular, when Osirix was compared with Oncentra (validated for volume measures and used in radiotherapy), the concordance was near-perfect for the calculation of the whole limb volume (CCC = 0.99). With CT-based methods the user can choose a reliable plane for segmentation purposes. CT-based methods also provide the opportunity to separate the whole limb volume into defined tissue volumes (cortical bone, fat and water). Copyright © 2016 Elsevier Ltd. All rights reserved.
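
    Of the four methods, the tape-measurement approach is easy to sketch: successive circumference measurements define stacked truncated cones whose volumes sum to the limb volume. The segment spacing and circumferences below are illustrative.

```python
import math

def limb_volume(circumferences_cm, segment_len_cm):
    # Stack truncated cones between successive circumference measurements:
    # frustum volume = h*(C1^2 + C1*C2 + C2^2)/(12*pi), with radii C/(2*pi).
    return sum(segment_len_cm * (c1 * c1 + c1 * c2 + c2 * c2) / (12.0 * math.pi)
               for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]))

# Sanity check against a uniform cylinder of radius 5 cm and length 20 cm:
circs = [2.0 * math.pi * 5.0] * 3       # three tape readings, 10 cm apart
v = limb_volume(circs, segment_len_cm=10.0)
# v equals pi * 5^2 * 20, the exact cylinder volume
```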

  2. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

    Monte Carlo methods are used to evaluate data-processing methods such as scatter and attenuation compensation in single photon emission CT (SPECT), in treatment planning for radiation therapy, and in many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that of the voxel from which the photon was emitted, the authors calculate the location of the entry point into that voxel, and the path length is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256 × 256 × 256 matrix image, thereby confirming the effectiveness of the algorithm.
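
    For contrast with the proposed shortcut, conventional free-path tracking through a voxel array can be sketched in one dimension by spending a sampled optical depth voxel by voxel; the attenuation values are made up.

```python
import math
import random

def interaction_voxel(mu, dx, rng):
    # Conventional reference scheme: sample an optical depth and consume it
    # voxel by voxel. The proposed method instead skips per-voxel exit-point
    # arithmetic until the medium actually changes along the path.
    tau = -math.log(1.0 - rng.random())   # sampled optical depth
    acc = 0.0
    for i, m in enumerate(mu):
        if acc + m * dx >= tau:
            return i                      # interaction inside voxel i
        acc += m * dx
    return None                           # photon escapes the array

rng = random.Random(7)
# two empty voxels followed by dense material (attenuation in 1/cm, made up):
voxel = interaction_voxel([0.0, 0.0, 50.0, 50.0], dx=1.0, rng=rng)
# the photon always interacts in the first dense voxel (index 2): a sampled
# optical depth cannot exceed 53*ln 2 (about 36.7) with 53-bit random()
```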

  3. Use of a variational moment method in calculating propagation constants for waveguides with an arbitrary index profile.

    PubMed

    Hardy, A; Itzkowitz, M; Griffel, G

    1989-05-15

    A variational moment method is used to calculate propagation constants of 1-D optical waveguides with an arbitrary index profile. The method is applicable to 2-D waveguides as well, and the index profiles need not be symmetric. Examples are given for the lowest-order and the next higher-order modes and are compared with exact numerical solutions.

  4. Comparison of Three Methods of Calculation, Experimental and Monte Carlo Simulation in Investigation of Organ Doses (Thyroid, Sternum, Cervical Vertebra) in Radioiodine Therapy

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Ayat, Saba

    2012-01-01

    Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues, hence dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study is to measure the absorbed doses of important organs by Monte Carlo N Particle (MCNP) simulation and to compare the results of different methods of dosimetry by performing paired t-tests. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated by using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared using SPSS software. The absorbed dose obtained by Monte Carlo simulations for 100, 150, and 175 mCi administered 131I was found to be 388.0, 427.9, and 444.8 cGy for the thyroid, 208.7, 230.1, and 239.3 cGy for the sternum, and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The results of the paired t-tests were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods of Monte Carlo simulation, MIRD calculation, and direct experimental dosimetry using TLD. PMID:23717806
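    The comparison step, a paired t statistic over per-organ dose estimates from two methods, can be sketched as follows. The TLD values below are hypothetical; only the MCNP figures echo the abstract.

```python
import math

def paired_t(a, b):
    """Paired t statistic for two matched series of measurements."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

tld  = [388.5, 427.0, 445.2, 209.0, 230.8, 239.0]   # hypothetical TLD doses (cGy)
mcnp = [388.0, 427.9, 444.8, 208.7, 230.1, 239.3]   # MCNP doses from the abstract
print(round(paired_t(tld, mcnp), 3))
```

    The statistic would then be converted to a p-value against the t distribution with n − 1 degrees of freedom, as SPSS does internally.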

  5. Calculation of short-wave signal amplitude on the basis of the waveguide approach and the method of characteristics

    NASA Astrophysics Data System (ADS)

    Mikhailov, S. Ia.; Tumatov, K. I.

    The paper compares the results obtained using two methods to calculate the amplitude of a short-wave signal field incident on or reflected from a perfectly conducting earth. A technique is presented for calculating the geometric characteristics of the field based on the waveguide approach. It is shown that applying an extended system of characteristic equations to calculate the field amplitude is inadmissible in models which include discontinuities in the second derivatives of the permittivity, unless a suitable treatment of the discontinuity points is applied.

  6. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

    Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, including providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physical coupling computations where the effect of the acoustic field should be taken into account.

  7. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. Therefore, we propose a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations for the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
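    The coupling idea, using a Kalman filter's prediction as the initial value for the nonlinear solve at each timestep, can be illustrated on a toy scalar problem. The equation x**3 = t stands in for the coupled cardiovascular system; the filter design and all tuning values here are assumptions.

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration; returns (solution, iteration count)."""
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x -= f(x) / df(x)
        n += 1
    return x, n

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # we observe the solved value
Q = np.eye(2) * 1e-8                     # assumed process noise
R = np.array([[1e-10]])                  # observation noise ~ solver tolerance

state, P = np.array([1.0, 0.0]), np.eye(2)
ts = np.arange(1.0, 2.0, 0.05)
sols, iters = [], []
for t in ts:
    state, P = F @ state, F @ P @ F.T + Q          # predict next solution
    x, n = newton(lambda x: x**3 - t, lambda x: 3 * x**2, state[0])
    sols.append(x)
    iters.append(n)
    y = np.array([x]) - H @ state                  # update with converged value
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    state = state + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
print(max(iters))
```

    The filter's prediction is closer to the new root than simply reusing the previous solution, which is the mechanism behind the iteration savings reported in the abstract.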

  8. SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Yang, K

    2016-06-15

    Purpose: To shield computed tomography (CT) exam rooms more quickly and accurately than is possible with manual calculations, using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor-provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions, and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
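    The isodose-curve fit described above, kerma samples along a radial scan fit to K(r) = ar^b, reduces to least squares in log-log space; a sketch with synthetic data:

```python
import numpy as np

def fit_power_law(r, k):
    """Return (a, b) such that k ≈ a * r**b, via log-log least squares."""
    slope, intercept = np.polyfit(np.log(r), np.log(k), 1)
    return np.exp(intercept), slope

r = np.linspace(0.5, 5.0, 40)      # distance from the scanner (m), synthetic
k = 12.0 * r ** -2.1               # synthetic near-inverse-square kerma data
a, b = fit_power_law(r, k)
print(round(a, 3), round(b, 3))
```

    An exponent b near −2 recovers the familiar inverse-square behavior; vendor isodose curves deviate from it because the patient acts as an extended scattering body.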

  9. SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Wu, D

    Purpose: To shield computed tomography (CT) exam rooms more quickly and accurately than is possible with manual calculations, using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor-provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions, and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.

  10. Dynamic Stark broadening as the Dicke narrowing effect

    NASA Astrophysics Data System (ADS)

    Calisti, A.; Mossé, C.; Ferri, S.; Talin, B.; Rosmej, F.; Bureyeva, L. A.; Lisitsa, V. S.

    2010-01-01

    A very fast method to account for charged-particle dynamics effects in calculations of spectral line shapes emitted by plasmas is presented. This method is based on a formulation of the frequency fluctuation model (FFM), which provides an expression for the dynamic line shape as a functional of the static distribution of frequencies. Thus, the main numerical work rests on the calculation of the quasistatic Stark profile. This method for taking into account ion dynamics allows a very fast and accurate calculation of Stark broadening of atomic hydrogen high-n series emission lines. It is not limited to hydrogen spectra. Results for helium-β and Lyman-α lines emitted by argon under microballoon implosion experiment conditions are also presented and compared with experimental data and simulation results. The present approach reduces the computer time by more than 2 orders of magnitude compared with the original FFM while improving the calculation precision, and it opens broad possibilities for its application in spectral line-shape codes.

  11. Calculation of power spectrums from digital time series with missing data points

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.

    1980-01-01

    Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
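    The lagged-product idea can be sketched as follows: the autocorrelation is estimated from whatever sample pairs exist at each lag, then Fourier transformed to obtain the spectrum. The gap handling below is a simplification of the paper's average-sampling-interval scheme, and the signal is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k0 = 256, 20
x = np.cos(2 * np.pi * k0 * np.arange(N) / N)
present = rng.random(N) > 0.2            # ~20% of samples are missing

def acf_with_gaps(x, present, max_lag):
    """Mean lagged products using only the pairs where both samples exist."""
    r = np.zeros(max_lag)
    for k in range(max_lag):
        both = present[:N - k] & present[k:]
        r[k] = np.mean(x[:N - k][both] * x[k:][both])
    return r

r = acf_with_gaps(x, present, N // 2)    # only half the maximum lags needed
spec = np.abs(np.fft.rfft(r, n=N))       # Blackman-Tukey style spectrum
peak = int(np.argmax(spec[1:])) + 1      # skip the DC bin
print(peak)
```

    As the abstract notes, computing only a fraction of the maximum lags suffices to recover the dominant periodicity.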

  12. Correlation and agreement between eplet mismatches calculated using serological, low-intermediate and high resolution molecular human leukocyte antigen typing methods

    PubMed Central

    Fidler, Samantha; D’Orsogna, Lloyd; Irish, Ashley B.; Lewis, Joshua R.; Wong, Germaine; Lim, Wai H.

    2018-01-01

    Structural human leukocyte antigen (HLA) matching at the eplet level can be identified by HLAMatchmaker, which requires the entry of four-digit alleles. The aim of this study was to evaluate the agreement between eplet mismatches calculated by serological and two-digit typing methods compared to high-resolution four-digit typing. In a cohort of 264 donor/recipient pairs, the evaluation of measurement error was assessed using intra-class correlation to confirm the absolute agreement between the number of eplet mismatches at class I (HLA-A, -B, -C) and class II loci (HLA-DQ and -DR) calculated using serological or two-digit molecular typing compared to four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches between the HLA typing methods was also determined. Intra-class correlation coefficients between serological and four-digit molecular typing methods were 0.969 (95% confidence interval [95% CI] 0.960–0.975) and 0.926 (95% CI 0.899–0.944) for class I and class II loci, respectively, and 0.995 (95% CI 0.994–0.996) and 0.993 (95% CI 0.991–0.995), respectively, between two-digit and four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches at class I and II loci was 4% and 16% for serological versus four-digit molecular typing methods, and 0% and 2% for two-digit versus four-digit molecular typing methods, respectively. In this small, predominantly Caucasian population, compared with serology, there is a high level of agreement in the number of eplet mismatches calculated using two-digit compared to four-digit molecular HLA typing methods, suggesting that two-digit typing may be sufficient for determining eplet mismatch load in kidney transplantation. PMID:29568344

  13. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services

    PubMed Central

    Rajabi, A; Dabiri, A

    2012-01-01

    Background: Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high level of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
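    The allocation steps described in Methods can be reduced to a toy sketch: administrative-center costs are spread over operating centers by a cost driver, and a unit cost price follows from each center's service volume. All figures and center names below are hypothetical.

```python
# Hypothetical yearly figures for a two-center hospital department.
admin_costs = {"management": 120_000.0, "maintenance": 60_000.0}
driver_share = {"radiology": 0.4, "laboratory": 0.6}    # e.g. floor-area share
direct_costs = {"radiology": 300_000.0, "laboratory": 200_000.0}
services = {"radiology": 10_000, "laboratory": 40_000}  # services per year

# Step 3 of the abstract: allocate administrative costs by the cost driver,
# then derive a unit cost price per service.
total_admin = sum(admin_costs.values())
unit_cost = {}
for centre, share in driver_share.items():
    full_cost = direct_costs[centre] + total_admin * share
    unit_cost[centre] = full_cost / services[centre]
print(unit_cost)
```

    A fixed tariff ignores this traceable resource usage, which is the gap between the two methods that the study quantifies.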

  14. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate neutron flux distributions over the in-reactor pressure vessel equipment are required, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but it has difficulties with fine geometrical modelling and demands huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to accumulate a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using the flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.

  15. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
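    The surface-fit idea, replacing double interpolation of tabulated equation-of-state values with evaluation of a least-squares polynomial surface, can be sketched as below. A plain Vandermonde least-squares fit stands in for the paper's orthogonal-polynomial fit, and the tabulated surface is synthetic.

```python
import numpy as np

# Synthetic "tabulated" surface on a (u, v) grid, e.g. pressure vs. two
# state variables.
u, v = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
p = 2.0 + 3.0 * u + 0.5 * v + 1.5 * u * v

# Design matrix for a bilinear surface a + b*u + c*v + d*u*v.
A = np.column_stack([np.ones(u.size), u.ravel(), v.ravel(), (u * v).ravel()])
coef, *_ = np.linalg.lstsq(A, p.ravel(), rcond=None)

def surface(uq, vq):
    """Evaluate the fitted surface: a single dot product replaces a
    time-consuming double interpolation."""
    return coef @ [1.0, uq, vq, uq * vq]

print(np.round(coef, 6))
```

    Orthogonal polynomial bases, as used in the paper, make the normal equations better conditioned than this raw monomial basis when the fit order grows.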

  16. Experimental verification of a CT-based Monte Carlo dose-calculation method in heterogeneous phantoms.

    PubMed

    Wang, L; Lovelock, M; Chui, C S

    1999-12-01

    To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneity range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.

  17. Hydration Free Energy from Orthogonal Space Random Walk and Polarizable Force Field.

    PubMed

    Abella, Jayvee R; Cheng, Sara Y; Wang, Qiantao; Yang, Wei; Ren, Pengyu

    2014-07-08

    The orthogonal space random walk (OSRW) method has shown enhanced sampling efficiency in free energy calculations from previous studies. In this study, the implementation of OSRW in accordance with the polarizable AMOEBA force field in TINKER molecular modeling software package is discussed and subsequently applied to the hydration free energy calculation of 20 small organic molecules, among which 15 are positively charged and five are neutral. The calculated hydration free energies of these molecules are compared with the results obtained from the Bennett acceptance ratio method using the same force field, and overall an excellent agreement is obtained. The convergence and the efficiency of the OSRW are also discussed and compared with BAR. Combining enhanced sampling techniques such as OSRW with polarizable force fields is very promising for achieving both accuracy and efficiency in general free energy calculations.

  18. On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method

    NASA Astrophysics Data System (ADS)

    Schmitz, F.; Fleck, B.

    1993-11-01

    The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere by the ETBFCT-version of the Flux Corrected Transport (FCT) method by Boris and Book is discussed. The results are compared with results obtained by a characteristic method with shock fitting. We show that the use of the internal energy density as a dependent variable instead of the total energy density can give very inaccurate results. Consequent discretization rules for the gravitational source terms are derived. The improvement of the results by an additional iteration step is discussed. It appears that the FCT method is an excellent method for the accurate calculation of shock waves in an atmosphere.

  19. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because the finite range of protons depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties which could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and the range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and the x, y, and z directions were created and the corresponding dose distributions were calculated using this approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the result of the worst-case scenario method. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the results using the worst-case scenario. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness level of MFO IMPT plans for H&N patients. The extensive-sampling approach using the fast approximate method could be used to evaluate the effects of different factors on the robustness level of IMPT plans in the future.
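    The evaluation step can be sketched as follows: D95 (the dose covering 95% of the CTV volume) is the 5th percentile of the voxel-dose array, and each randomly perturbed scenario is checked against the worst-case value. The dose arrays below are synthetic stand-ins, not plan data.

```python
import numpy as np

def d95(dose):
    """Dose received by at least 95% of the volume = 5th percentile."""
    return np.percentile(dose, 5)

rng = np.random.default_rng(1)
nominal = rng.normal(60.0, 1.0, 5000)        # synthetic CTV voxel doses (Gy)
worst_case = d95(nominal - 1.5)              # crude stand-in for a WCS shift

# Randomly perturbed scenarios, each compared against the worst case.
perturbed = [nominal + rng.normal(0.0, 0.5, nominal.size) for _ in range(100)]
frac_above = np.mean([d95(d) >= worst_case for d in perturbed])
print(frac_above)
```

    A fraction near 1.0 mirrors the abstract's finding that nearly all random perturbations stay inside the worst-case bound.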

  20. Low cost estimation of the contribution of post-CCSD excitations to the total atomization energy using density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Sánchez, H. R.; Pis Diez, R.

    2016-04-01

    Based on the Aλ diagnostic for multireference effects recently proposed [U.R. Fogueri, S. Kozuch, A. Karton, J.M. Martin, Theor. Chem. Acc. 132 (2013) 1], a simple method for improving total atomization energies and reaction energies calculated at the CCSD level of theory is proposed. The method requires a CCSD calculation and two additional density functional theory calculations for the molecule. Two sets containing 139 and 51 molecules are used as training and validation sets, respectively, for total atomization energies. An appreciable decrease in the mean absolute error from 7-10 kcal mol-1 for CCSD to about 2 kcal mol-1 for the present method is observed. The present method provides atomization energies and reaction energies that compare favorably with relatively recent scaled CCSD methods.

  1. Scattering of targets over layered half space using a semi-analytic method in conjunction with FDTD algorithm.

    PubMed

    Cao, Le; Wei, Bing

    2014-08-25

    A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (radar cross section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, the methods available for extrapolation have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of the scattering and radiation of targets over a layered half space.

  2. Calculating the nutrient composition of recipes with computers.

    PubMed

    Powers, P M; Hoover, L W

    1989-02-01

    The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
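    Two of the compared schemes can be sketched in a few lines; here the retention-factor method applies per-nutrient retention factors to raw-ingredient values. The factors and nutrient figures are hypothetical, not from the study.

```python
# Hypothetical raw-ingredient nutrient totals for one recipe.
raw = {"thiamin_mg": 1.2, "iron_mg": 3.0, "water_g": 180.0}

def retention_factor_method(raw, retention):
    """Apply per-nutrient retention factors; nutrients without a factor
    (e.g. minerals assumed stable) are carried over unchanged."""
    return {n: raw[n] * retention.get(n, 1.0) for n in raw}

# Assumed losses on roasting: heat-labile thiamin and evaporated water.
retention = {"thiamin_mg": 0.7, "water_g": 0.65}
cooked = retention_factor_method(raw, retention)
print(cooked)
```

    The yield-factor method instead scales a cooked-basis nutrient profile by a single recipe yield factor, which is why the two schemes agree only when the factors are mutually consistent.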

  3. Optimized Vertex Method and Hybrid Reliability

    NASA Technical Reports Server (NTRS)

    Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.

    2002-01-01

    A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
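    The vertex method that the OVM optimizes can be sketched as exhaustive evaluation at interval endpoints (valid for responses monotonic in each input); the two-variable response below is a hypothetical stand-in for the strain-energy release rate.

```python
from itertools import product

def vertex_method(f, intervals):
    """Bracket the response of f over interval-valued inputs by evaluating
    every vertex (combination of interval endpoints) of an alpha-cut."""
    values = [f(*v) for v in product(*intervals)]
    return min(values), max(values)

# Hypothetical monotonic response of two uncertain inputs at one alpha-cut.
g = lambda load, length: load ** 2 * length
lo, hi = vertex_method(g, [(2.0, 3.0), (0.5, 1.0)])
print(lo, hi)
```

    The cost of this plain scheme is 2**n evaluations for n fuzzy inputs per alpha-cut, which is exactly the growth the OVM is designed to avoid.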

  4. Real-Time Stability Margin Measurements for X-38 Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Stachowiak, Susan J.

    2005-01-01

    A method has been developed for real-time stability margin measurement calculations. The method relies on a tailored forced excitation targeted at a specific frequency range. Computation of the frequency response is matched to the specific frequencies contained in the excitation. A recursive Fourier transformation is used to make the method compatible with real-time calculation. The method was incorporated into the X-38 nonlinear simulation and applied to an X-38 robustness test. X-38 stability margins were calculated for different variations in aerodynamic and mass properties over the vehicle flight trajectory. The new method showed results comparable to more traditional stability analysis techniques, while providing more complete coverage and increased efficiency.
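    The recursive Fourier transformation the method relies on can be sketched as a sliding single-bin DFT, where each new sample updates the bin in O(1); the window length, bin index, and test signal below are arbitrary choices, not X-38 parameters.

```python
import math
from cmath import exp, pi
from collections import deque

class SlidingDFT:
    """Recursive single-bin DFT: O(1) update per sample for bin k of an
    n-sample sliding window, X_n = w * (X_{n-1} + x_n - x_{n-N})."""
    def __init__(self, n, k):
        self.w = exp(2j * pi * k / n)             # twiddle factor for bin k
        self.window = deque([0.0] * n, maxlen=n)
        self.X = 0.0 + 0.0j

    def push(self, sample):
        oldest = self.window[0]                   # sample leaving the window
        self.window.append(sample)
        self.X = (self.X + sample - oldest) * self.w
        return self.X

sd = SlidingDFT(32, 5)
for m in range(96):                               # three windows of an on-bin cosine
    X = sd.push(math.cos(2 * math.pi * 5 * m / 32))
print(abs(X))
```

    Matching the analysis bin to the excitation frequency, as the abstract describes, makes this single complex multiply-add per sample sufficient for a real-time frequency-response estimate.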

  5. Simplified method for calculating shear deflections of beams.

    Treesearch

    I. Orosz

    1970-01-01

    When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...

  6. Accurate calculation of the geometric measure of entanglement for multipartite quantum states

    NASA Astrophysics Data System (ADS)

    Teng, Peiyuan

    2017-07-01

    This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
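    For the simplest (bipartite) case that the tensor method generalizes, the best product-state overlap is the largest Schmidt coefficient, obtainable by SVD; genuinely multipartite states require the rank-1 tensor decomposition discussed in the article. A minimal sketch:

```python
import numpy as np

def gme_bipartite(psi, dims):
    """Geometric measure of entanglement of a bipartite pure state:
    1 - (largest Schmidt/singular value)**2 of the coefficient matrix."""
    m = np.asarray(psi).reshape(dims)
    s = np.linalg.svd(m, compute_uv=False)
    return 1.0 - s[0] ** 2

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |00> + |11>
print(gme_bipartite(bell, (2, 2)))
```

    For three or more parties the closest product state is no longer given by an SVD, which is where alternating least-squares rank-1 tensor approximation, as benchmarked in the article, takes over.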

  7. Implementation of density functional theory method on object-oriented programming (C++) to calculate energy band structure using the projector augmented wave (PAW)

    NASA Astrophysics Data System (ADS)

    Alfianto, E.; Rusydi, F.; Aisyah, N. D.; Fadilla, R. N.; Dipojono, H. K.; Martoprawiro, M. A.

    2017-05-01

    This study implemented the DFT method in the C++ programming language following object-oriented programming rules (expressive software). The use of expressive software yields a simple program structure that closely mirrors the mathematical formulation, which should make it easier for the scientific community to develop the software further. We validate our software by calculating the energy band structure of silica, carbon, and germanium with the FCC structure using the projector augmented wave (PAW) method, and then compare the results with those of Quantum Espresso. This study shows that the accuracy of the software is 85% compared to Quantum Espresso.

  8. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with the values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.

  9. A comparison of the finite difference and finite element methods for heat transfer calculations

    NASA Technical Reports Server (NTRS)

    Emery, A. F.; Mortazavi, H. R.

    1982-01-01

    The finite difference method and finite element method for heat transfer calculations are compared by describing their bases and their application to some common heat transfer problems. In general it is noted that neither method is clearly superior, and in many instances, the choice is quite arbitrary and depends more upon the codes available and upon the personal preference of the analyst than upon any well defined advantages of one method. Classes of problems for which one method or the other is better suited are defined.
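    A minimal finite-difference sketch in the spirit of the comparison: steady 1-D conduction with a uniform heat source, k T'' + q = 0 with T(0)=T(L)=0. The exact solution is quadratic, which the second-order central difference reproduces to round-off, a handy verification case (all numbers illustrative):

```python
import numpy as np

n, L, k, q = 21, 1.0, 1.0, 100.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

A = np.zeros((n, n))
b = np.full(n, -q / k * h**2)                # interior RHS of the FD stencil
A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 0.0   # Dirichlet ends
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

T = np.linalg.solve(A, b)
T_exact = q / (2.0 * k) * x * (L - x)        # quadratic exact solution
print(np.max(np.abs(T - T_exact)))           # near machine precision
```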

  10. Nuclear radiation environment analysis for thermoelectric outer planet spacecraft

    NASA Technical Reports Server (NTRS)

    Davis, H. S.; Koprowski, E. F.

    1972-01-01

    Neutron and gamma ray transport calculations were performed using Monte Carlo methods and a three-dimensional geometric model of the spacecraft. The results are compared with similar calculations performed for an earlier design.

  11. GeneCount: genome-wide calculation of absolute tumor DNA copy numbers from array comparative genomic hybridization data

    PubMed Central

    Lyng, Heidi; Lando, Malin; Brøvig, Runar S; Svendsrud, Debbie H; Johansen, Morten; Galteland, Eivind; Brustugun, Odd T; Meza-Zepeda, Leonardo A; Myklebost, Ola; Kristensen, Gunnar B; Hovig, Eivind; Stokke, Trond

    2008-01-01

    Absolute tumor DNA copy numbers can currently be achieved only on a single gene basis by using fluorescence in situ hybridization (FISH). We present GeneCount, a method for genome-wide calculation of absolute copy numbers from clinical array comparative genomic hybridization data. The tumor cell fraction is reliably estimated in the model. Data consistent with FISH results are achieved. We demonstrate significant improvements over existing methods for exploring gene dosages and intratumor copy number heterogeneity in cancers. PMID:18500990

  12. Single-scatter vector-wave scattering from surfaces with infinite slopes using the Kirchhoff approximation.

    PubMed

    Bruce, Neil C

    2008-08-01

    This paper presents a new formulation of the 3D Kirchhoff approximation that allows calculation of the scattering of vector waves from 2D rough surfaces containing structures with infinite slopes. This type of surface has applications, for example, in remote sensing and in testing or imaging of printed circuits. Some preliminary calculations for rectangular-shaped grooves in a plane are presented for the 2D surface method and are compared with the equivalent 1D surface calculations for the Kirchhoff and integral equation methods. Good agreement is found between the methods.

  13. The scalar and electromagnetic form factors of the nucleon in dispersively improved Chiral EFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon, Jose Manuel

    We present a method for calculating the nucleon form factors of G-parity-even operators. This method combines chiral effective field theory (χEFT) and dispersion theory. Through unitarity we factorize the imaginary part of the form factors into a perturbative part, calculable with χEFT, and a non-perturbative part, obtained through other methods. We consider the scalar and electromagnetic (EM) form factors of the nucleon. The results show an important improvement compared to standard chiral calculations, and can be used in analysis of the low-energy properties of the nucleon.

  14. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion simulation has been widely applied in groundwater modeling. Compared with traditional forward modeling, inversion modeling leaves more room for study. Zonation and cell-by-cell inversion are the conventional methods; the pilot-point method lies between them. Traditional inverse modeling often uses software that divides the model into several zones, so that only a few parameters need to be inverted; however, the resulting distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion would, in theory, recover the most realistic parameter distribution, but it demands great computational effort and a large amount of survey data for geostatistical simulation of the area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation, and property values are assigned to model cells by kriging, so that the heterogeneity of the parameters within geological units is preserved. It reduces the geostatistical data requirements of the simulation area and bridges the gap between the two methods above. Pilot points can save calculation time and improve the goodness of fit, and they also reduce the numerical instability caused by large numbers of parameters, among other advantages. In this paper, we applied pilot points to a field in which the structural heterogeneity of the formation and the hydraulic parameters were unknown. We compared the inversion results of the zonation and pilot-point methods and, through comparative analysis, explored the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6.
    Kriging is defined to obtain the values of the field functions over the model domain on the basis of their values at measurement and pilot-point locations (hydraulic conductivity); we then assign pilot points to the interpolated field, which has been divided into four zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity. Third, after the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot-point method are more realistic: the parameters fit better, and the numerical simulation is more stable (stable residual distribution). Compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, which guarantees the relative independence and authenticity of the parameter estimates. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
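    The kriging step that spreads pilot-point values onto model cells can be sketched with ordinary kriging under a Gaussian covariance model (all coordinates, values, and variogram parameters below are hypothetical; the paper itself performs this step inside Groundwater Vistas/PEST):

```python
import numpy as np

def ordinary_kriging(pts, vals, targets, a=50.0, sill=1.0):
    """Estimate values at `targets` from pilot points `pts` / `vals`."""
    def cov(d):                              # Gaussian covariance model
        return sill * np.exp(-(d / a) ** 2)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))              # augmented OK system
    K[:n, :n] = cov(d); K[n, n] = 0.0        # unbiasedness constraint row/col
    out = np.empty(len(targets))
    for j, t in enumerate(targets):
        rhs = np.ones(n + 1)
        rhs[:n] = cov(np.linalg.norm(pts - t, axis=1))
        lam = np.linalg.solve(K, rhs)[:n]    # kriging weights
        out[j] = lam @ vals
    return out

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [60.0, 70.0]])
logK = np.array([-4.0, -3.2, -5.1, -3.8])    # hypothetical pilot-point log-K
grid = np.array([[50.0, 50.0], [10.0, 10.0]])
print(ordinary_kriging(pts, logK, grid))
```

    A useful sanity check of the implementation is that kriging interpolates exactly at the pilot points themselves.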

  15. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    PubMed

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. 
On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.

  16. UK audit of glomerular filtration rate measurement from plasma sampling in 2013.

    PubMed

    Murray, Anthony W; Lawson, Richard S; Cade, Sarah C; Hall, David O; Kenny, Bob; O'Shaughnessy, Emma; Taylor, Jon; Towey, David; White, Duncan; Carson, Kathryn

    2014-11-01

    An audit was carried out of UK glomerular filtration rate (GFR) calculation, and the results were compared with an identical 2001 audit. Participants used their routine method to calculate GFR for 20 data sets (four plasma samples) in millilitres per minute, and also the GFR normalized for body surface area. Some unsound data sets were included to analyse the applied quality control (QC) methods. Variability between centres was assessed for each data set, compared with the national median and with a reference value calculated using the method recommended in the British Nuclear Medicine Society guidelines. The influence of the number of samples on variability was studied, and supplementary data were requested on workload and methodology. The 59 returns showed widespread standardization. The applied early exponential clearance correction was the main contributor to the observed variability. These corrections were applied by 97% of centres (50% in 2001), with 80% using the recommended averaged Brochner-Mortensen correction. Approximately 75% applied the recommended Haycock body surface area formula for adults (78% for children). The effect of the number of samples used was not significant. There was wide variability in the applied QC techniques, especially regarding the use of the volume of distribution. The widespread adoption of the guidelines has harmonized national GFR calculation compared with the previous audit; further standardization could reduce variability still more. This audit has highlighted the need to address the national standardization of QC methods. Radionuclide techniques are confirmed as the preferred method for GFR measurement when an unequivocal result is required.
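    The slope-intercept workflow the audit describes can be sketched as follows. The one-compartment fit is the standard approach; the averaged Brochner-Mortensen coefficients and the Haycock body-surface-area formula shown are the commonly published forms, and the four-sample data set is synthetic (verify all coefficients against the BNMS guidelines before any real use):

```python
import numpy as np

def gfr_slope_intercept(t_min, conc, dose, height_cm, weight_kg):
    # Fit ln(concentration) vs time: one-compartment exponential clearance.
    slope, intercept = np.polyfit(t_min, np.log(conc), 1)
    lam, c0 = -slope, np.exp(intercept)
    gfr = dose * lam / c0                         # ml/min, uncorrected
    gfr_bm = 0.990778 * gfr - 0.001218 * gfr**2   # averaged Brochner-Mortensen
    bsa = 0.024265 * height_cm**0.3964 * weight_kg**0.5378   # Haycock, m^2
    return gfr_bm, gfr_bm * 1.73 / bsa            # absolute and normalized

# Synthetic four-sample data: C(t) = 100 * exp(-0.007 t), injected dose 1e6
# (arbitrary activity units), sampled at 2-5 hours.
t = np.array([120.0, 180.0, 240.0, 300.0])
c = 100.0 * np.exp(-0.007 * t)
print(gfr_slope_intercept(t, c, dose=1e6, height_cm=170.0, weight_kg=70.0))
```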

  17. Calculation of reaction forces in the boiler supports using the method of equivalent stiffness of membrane wall.

    PubMed

    Sertić, Josip; Kozak, Dražan; Samardžić, Ivan

    2014-01-01

    The values of the reaction forces in the boiler supports are the basis for dimensioning the bearing steel structure of a steam boiler. In this paper, the application of the method of equivalent stiffness of membrane wall is proposed for the calculation of the reaction forces. The method of equalizing displacements was applied as the method of homogenizing the membrane wall stiffness. Using the example of the "Milano" boiler, the reactions in the supports were calculated with the finite element method for the real geometry discretized by shell finite elements. The second calculation was performed under the assumption of ideally stiff membrane walls, and the third using the method of equivalent stiffness of membrane wall. In the third case, the membrane walls are approximated by an equivalent orthotropic plate. The approximation of the membrane wall stiffness is achieved using the elasticity matrix of the equivalent orthotropic plate at the finite element level. The obtained results were compared, and the advantages of using the method of equivalent stiffness of membrane wall for calculating the reactions in the boiler supports were emphasized.
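    The homogenization idea can be sketched by building the plane-stress elasticity matrix of the equivalent orthotropic plate from its equivalent moduli; the moduli below are placeholders, not the stiffnesses identified in the paper:

```python
import numpy as np

def orthotropic_D(E1, E2, nu12, G12):
    """Plane-stress elasticity matrix of an orthotropic plate."""
    nu21 = nu12 * E2 / E1                # reciprocity relation
    f = 1.0 - nu12 * nu21
    return np.array([
        [E1 / f,        nu12 * E2 / f, 0.0],
        [nu12 * E2 / f, E2 / f,        0.0],
        [0.0,           0.0,           G12],
    ])

# Hypothetical equivalent moduli for the homogenized membrane wall.
D = orthotropic_D(E1=180e9, E2=95e9, nu12=0.3, G12=40e9)
print(D)
```

    The matrix is symmetric by construction (D12 = nu12*E2/f = nu21*E1/f), which is a quick check on the reciprocity relation.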

  18. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
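    The spectral-derivative ingredient of the method can be shown in one dimension: differentiate on a periodic grid by multiplying Fourier coefficients by i*k, which is exact for band-limited fields such as sin(x). This is only the derivative step, not the full k-space propagator:

```python
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
ik = 1j * np.fft.fftfreq(n, d=1.0 / n)   # i * integer wavenumbers on [0, 2*pi)
f = np.sin(x)
dfdx = np.fft.ifft(ik * np.fft.fft(f)).real
print(np.max(np.abs(dfdx - np.cos(x))))  # near machine precision
```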

  19. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.

    1989-01-01

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
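    The feedback idea can be sketched with a heavily simplified loop; the linear current model I = a + b * wire_feed, its coefficients, and the proportional gain are all hypothetical stand-ins, not the algorithmic functions claimed in the patent:

```python
A_MODEL, B_MODEL = 20.0, 2.5     # assumed model: amps = 20 + 2.5 * feed
A_TRUE,  B_TRUE  = 25.0, 2.4     # 'real' process, deliberately different
GAIN = 0.5                       # proportional correction gain

feed = 8.0                       # initial wire feed rate, m/min
calc_current = 45.0              # current calculated for the desired heat input
for _ in range(20):
    measured = A_TRUE + B_TRUE * feed        # simulated sensor reading
    # compare measured with calculated current and correct the feed rate
    feed += GAIN * (calc_current - measured) / B_MODEL

print(A_TRUE + B_TRUE * feed)    # converges toward calc_current
```

    Because the correction uses the model slope while the plant has a slightly different one, the error shrinks geometrically rather than in one step, which is the point of iterating the comparison.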

  20. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  1. Calculation of effective plutonium cross sections and check against the oscillation experiment CESAR-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaal, H.; Bernnat, W.

    1987-10-01

    For calculations of high-temperature gas-cooled reactors with low-enrichment fuel, it is important to know the plutonium cross sections accurately. Therefore, a calculational method was developed, by which the plutonium cross-section data of the ENDF/B-IV library can be examined. This method uses zero- and one-dimensional neutron transport calculations to collapse the basic data into one-group cross sections, which then can be compared with experimental values obtained from integral tests. For comparison the data from the critical experiment CESAR-II of the Centre d'Etudes Nucleaires, Cadarache, France, were utilized.

  2. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that greatly reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula, and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is approximately 10^4 times shorter than that of the full GEANT4 simulation.

  3. Equivalent Circuit Parameter Calculation of Interior Permanent Magnet Motor Involving Iron Loss Resistance Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yamazaki, Katsumi

    In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors, including the iron loss resistance, using the finite element method. First, a finite element analysis considering harmonics and magnetic saturation is carried out to obtain the time variations of the magnetic fields in the stator and rotor cores. Second, the iron losses of the stator and the rotor are calculated from the results of the finite element analysis, taking into account the harmonic eddy current losses and the minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters, i.e., the d-q axis inductances and the iron loss resistance, as functions of the operating condition of the motor. The proposed method is applied to an interior permanent magnet motor to calculate its characteristics based on the equivalent circuit obtained by the proposed method. The calculated results are compared with experimental results to verify the accuracy.

  4. Estimation of Metal Acceleration by an SF5 Containing Explosive

    DTIC Science & Technology

    1991-06-30

    cylinder wall acceleration, the wall energies for the baseline composition and for the composition CW2 were calculated by three different methods: KSM, ... computational method to calculate the cylinder energies needed for comparison with experimental data, it was decided to compare data from three known methods ... detonation pressure given for the GAB method in Reference 8. They are E(6 mm) = 0.1272 (Isp x density)^1.5 - 0.021, and E(19 mm) = 0.1580 (Isp x density)^1.5

  5. KLL dielectronic recombination resonant strengths of He-like up to O-like xenon ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, K.; Geng, Z.; Xiao, J.

    2010-02-15

    In this work, the KLL dielectronic recombination (DR) resonant strengths of He- through O-like Xe ions were studied, both through experiment and calculation. The experiments were done using a fast electron beam-energy scanning technique at the Shanghai electron beam ion trap. The calculations were done using the flexible atomic code (FAC), in which the relativistic configuration interaction (RCI) method was employed. For the total resonant strengths, the present experimental and theoretical results for He-, Be-, B-, C-, N-, and O-like Xe ions agree within experimental uncertainties (about 9%), but the experimental result for Li-like Xe is 14% higher than the calculation. The present FAC calculations of the total DR strengths were compared with the available previous calculations, using RCI or multiconfiguration Dirac-Fock (MCDF) methods, and the agreement was very good. In this work, some intermediate-state resolved KLL DR strengths were also obtained and compared with theoretical results, and more discrepancies were revealed.

  6. The atmospheric emission method of calculating the neutral atmosphere and charged particle densities in the upper atmosphere

    NASA Astrophysics Data System (ADS)

    McElroy, Kenneth L., Jr.

    1992-12-01

    A method is presented for the determination of neutral gas densities in the ionosphere from rocket-borne measurements of UV atmospheric emissions. Computer models were used to calculate an initial guess for the neutral atmosphere. Using this neutral atmosphere, intensity profiles for the N2 (0,5) Vegard-Kaplan band, the N2 Lyman-Birge-Hopfield band system, and the O I 2972 Å line were calculated and compared with the March 1990 NPS MUSTANG data. The neutral atmospheric model was modified and the intensity profiles recalculated until a fit with the data was obtained. The neutral atmosphere corresponding to the intensity profile that fit the data was assumed to be the atmospheric composition prevailing at the time of the observation. The ion densities were then calculated from the neutral atmosphere using a photochemical model. The electron density profile calculated by this model was compared with the electron density profile measured by the U.S. Air Force Geophysics Laboratory at a nearby site.

  7. A revised and unified pressure-clamp/relaxation theory for studying plant cell water relations with pressure probes: in-situ determination of cell volume for calculation of volumetric elastic modulus and hydraulic conductivity.

    PubMed

    Knipfer, T; Fei, J; Gambetta, G A; Shackel, K A; Matthews, M A

    2014-10-21

    The cell-pressure-probe is a unique tool to study plant water relations in-situ. Inaccuracy in the estimation of cell volume (νo) is the major source of error in the calculation of both cell volumetric elastic modulus (ε) and cell hydraulic conductivity (Lp). Estimates of νo and Lp can be obtained with the pressure-clamp (PC) and pressure-relaxation (PR) methods. In theory, both methods should result in comparable νo and Lp estimates, but this has not been the case. In this study, the existing νo-theories for PC and PR methods were reviewed and clarified. A revised νo-theory was developed that is equally valid for the PC and PR methods. The revised theory was used to determine νo for two extreme scenarios of solute mixing between the experimental cell and sap in the pressure probe microcapillary. Using a fully automated cell-pressure-probe (ACPP) on leaf epidermal cells of Tradescantia virginiana, the validity of the revised theory was tested with experimental data. Calculated νo values from both methods were in the range of optically determined νo (=1.1-5.0nL) for T. virginiana. However, the PC method produced a systematically lower (21%) calculated νo compared to the PR method. Effects of solute mixing could only explain a potential error in calculated νo of <3%. For both methods, this discrepancy in νo was almost identical to the discrepancy in the measured ratio of ΔV/ΔP (total change in microcapillary sap volume versus corresponding change in cell turgor) of 19%, which is a fundamental parameter in calculating νo. It followed from the revised theory that the ratio of ΔV/ΔP was inversely related to the solute reflection coefficient. This highlighted that treating the experimental cell as an ideal osmometer in both methods is potentially not correct. Effects of non-ideal osmotic behavior by transmembrane solute movement may be minimized in the PR as compared to the PC method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated by direct use of the Scherrer equation, the Williamson-Hall method, and the Rietveld procedure via a series of commercial and free software packages. The results were statistically treated to estimate the significance of the differences in size resulting from these methods. We also estimated the effect of the acquisition conditions (Bragg-Brentano and parallel-beam geometry, step size, counting time) and of the data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson-Hall method and Rietveld refinement, as implemented in a series of software packages (EVA, PCW and TOPAS, respectively), yield very close results for crystallite sizes of less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg-Brentano geometry. However, we found that although the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM); good correlation in size was found only for crystallites smaller than 50-60 nm. Highlights: • The crystallite sizes of 183 nanopowders were calculated using different XRD methods. • The obtained results were subjected to statistical treatment. • Results obtained with Bragg-Brentano and parallel-beam geometries were compared. • The influence of the XRD pattern acquisition conditions on the results was estimated. • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
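    The direct Scherrer-equation route the study mentions can be sketched as follows: crystallite size from the corrected FWHM of a single reflection, assuming a Gaussian instrumental correction (the peak widths and instrumental broadening below are illustrative numbers, not the study's data):

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_obs_deg, fwhm_inst_deg,
                  wavelength_nm=0.15406, K=0.9):      # Cu K-alpha, K ~ 0.9
    # Gaussian deconvolution of instrumental broadening, widths in radians.
    beta = np.sqrt(np.radians(fwhm_obs_deg) ** 2
                   - np.radians(fwhm_inst_deg) ** 2)
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))  # size in nm

print(scherrer_size(two_theta_deg=45.0, fwhm_obs_deg=0.30, fwhm_inst_deg=0.08))
```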

  9. Tautomerism and spectroscopic properties of the immunosuppressant azathioprine.

    PubMed

    Makhyoun, Mohamed A; Massoud, Raghdaa A; Soliman, Saied M

    2013-10-01

    The molecular structure and the relative stabilities of the four possible tautomers of the immunosuppressant azathioprine (AZA) were calculated by the DFT/B3LYP method using different basis sets. The results of the energy analysis and the thermodynamic treatment of the obtained data were used to predict the relative stabilities of the AZA tautomers. The effect of solvents such as DMSO and water on the stability of the AZA tautomers was studied using the polarized continuum method (PCM) at the same level of theory. The calculations predicted that the total energies of all tautomers decrease, indicating that all tautomers are more or less stabilized by the solvent effect. The vibrational spectra of AZA were calculated at the same level of theory and the results compared with the experimentally measured FTIR spectra; good correlation was obtained between the experimental and calculated vibrational frequencies (R(2)=0.997). The electronic spectra of AZA in the gas phase and in methanol as solvent were calculated using the TD-DFT method. The calculations predicted a bathochromic shift of all the spectral bands in the presence of solvent compared to the gas phase. The NMR spectra of all tautomers were also calculated and the results correlated with the experimental NMR chemical shifts, where the most stable tautomer gives the best correlation coefficient (R(2)=0.996). Copyright © 2013 Elsevier B.V. All rights reserved.

  10. DFT simulations and vibrational spectra of 2-amino-2-methyl-1,3-propanediol

    NASA Astrophysics Data System (ADS)

    Renuga Devi, T. S.; Sharmi kumar, J.; Ramkumaar, G. R.

    2014-12-01

    The FTIR and FT-Raman spectra of 2-amino-2-methyl-1,3-propanediol were recorded in the regions 4000-400 cm-1 and 4000-50 cm-1, respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the augmented correlation-consistent polarized valence double zeta (aug-cc-pVDZ) basis set. The most stable conformer was optimized, and the structural and vibrational parameters were determined on this basis. The complete assignments were performed on the basis of the potential energy distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and Mulliken charges were calculated using both the Hartree-Fock and density functional methods with the aug-cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. 1H and 13C NMR chemical shifts of the molecule were calculated using the gauge-independent atomic orbital (GIAO) method and compared with experimental results.

  11. A Method for Calculating Viscosity and Thermal Conductivity of a Helium-Xenon Gas Mixture

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2006-01-01

    A method for calculating viscosity and thermal conductivity of a helium-xenon (He-Xe) gas mixture was employed, and results were compared to AiResearch (part of Honeywell) analytical data. The method of choice was that presented by Hirschfelder with Singh's third-order correction factor applied to thermal conductivity. Values for viscosity and thermal conductivity were calculated over a temperature range of 400 to 1200 K for He-Xe gas mixture molecular weights of 20.183, 39.94, and 83.8 kg/kmol. First-order values for both transport properties were in good agreement with AiResearch analytical data. Third-order-corrected thermal conductivity values were all greater than AiResearch data, but were considered to be a better approximation of thermal conductivity because higher-order effects of mass and temperature were taken into consideration. Viscosity, conductivity, and Prandtl number were then compared to experimental data presented by Taylor.
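    For the mixture step, Wilke's semi-empirical mixing rule is a common first-order stand-in for the full Hirschfelder treatment used in the report (it is not the report's method, and the pure-gas viscosities below are placeholders, not AiResearch data):

```python
import numpy as np

def wilke_viscosity(x, mu, M):
    """Mixture viscosity from mole fractions x, pure viscosities mu, molar masses M."""
    x, mu, M = map(np.asarray, (x, mu, M))
    ratio_mu = np.outer(mu, 1.0 / mu)            # mu_i / mu_j
    ratio_M = np.outer(1.0 / M, M)               # M_j / M_i
    phi = ((1.0 + np.sqrt(ratio_mu) * ratio_M**0.25) ** 2
           / np.sqrt(8.0 * (1.0 + np.outer(M, 1.0 / M))))
    return float(np.sum(x * mu / (phi @ x)))     # Wilke's rule

mu = [4.0e-5, 5.5e-5]    # Pa*s for He, Xe at some temperature (illustrative)
M = [4.0026, 131.29]     # kg/kmol
print(wilke_viscosity([0.5, 0.5], mu, M))
```

    A quick check on the implementation is that the pure-gas limits x = [1, 0] and [0, 1] return the pure-component viscosities exactly (phi_ii = 1).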

  12. The band gap properties of the three-component semi-infinite plate-like LRPC by using PWE/FE method

    NASA Astrophysics Data System (ADS)

    Qian, Denghui; Wang, Jianchun

    2018-06-01

    This paper applies the coupled plane wave expansion and finite element (PWE/FE) method to calculate the band structure of the proposed three-component semi-infinite plate-like locally resonant phononic crystal (LRPC). To verify the accuracy of the result, the band structure calculated by the PWE/FE method is compared with that calculated by the traditional finite element (FE) method, and the frequency range of the band gap in the band structure is compared with that of the attenuation in the transmission power spectrum. Numerical results and further analysis demonstrate that a band gap is opened by the coupling between the dominant vibrations of the rubber layer and the matrix modes. In addition, the influences of the geometry parameters on the band gap are studied and understood with the help of a simple “base-spring-mass” model, and the influence of the viscosity of the rubber layer on the band gap is also investigated.

  13. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method, and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials: the first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction; in the second, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that must be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the study concludes that a sample size calculation for an economic evaluation is technically feasible, but one should question how useful it is.
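    The paper's own simulation method is not reproduced here, but a closed-form sketch using the same inputs (expected differences and variances of costs and effects, their correlation, alpha, power, and a ceiling ratio lam) shows how these parameters drive the sample size under a normal approximation to the net monetary benefit. All numeric inputs below are hypothetical:

```python
import math
from statistics import NormalDist

def nmb_sample_size(d_cost, d_effect, sd_cost, sd_effect, rho, lam,
                    alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided test of net monetary benefit,
    NMB = lam * effect - cost, under a normal approximation.
    lam is the maximum acceptable cost per unit of effectiveness."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    d_nmb = lam * d_effect - d_cost                      # expected NMB difference
    var_nmb = (lam ** 2 * sd_effect ** 2 + sd_cost ** 2
               - 2 * lam * rho * sd_cost * sd_effect)    # per-patient NMB variance
    # factor 2: the difference of two independent arm means doubles the variance
    return math.ceil((z_alpha + z_power) ** 2 * 2 * var_nmb / d_nmb ** 2)

# Hypothetical trial: cost difference 500, effect difference 0.05 QALYs,
# ceiling ratio 20,000 per unit of effect.
print(nmb_sample_size(d_cost=500.0, d_effect=0.05, sd_cost=2000.0,
                      sd_effect=0.2, rho=0.1, lam=20_000.0))
```

    Varying the inputs makes the abstract's point concrete: the required sample size is highly sensitive to the assumed cost variance and to the choice of the ceiling ratio.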

  14. Development and validation of a FIA/UV-vis method for pK(a) determination of oxime based acetylcholinesterase reactivators.

    PubMed

    Musil, Karel; Florianova, Veronika; Bucek, Pavel; Dohnal, Vlastimil; Kuca, Kamil; Musilek, Kamil

    2016-01-05

    Acetylcholinesterase reactivators (oximes) are compounds used for antidotal treatment in case of organophosphorus poisoning. The dissociation constants (pK(a1)) of ten standard or promising acetylcholinesterase reactivators were determined by ultraviolet absorption spectrometry. Two methods of spectra measurement (UV-vis spectrometry, FIA/UV-vis) were applied and compared. The soft and hard models for calculation of pK(a1) values were performed. The pK(a1) values were recommended in the range 7.00-8.35, where at least 10% of oximate anion is available for organophosphate reactivation. All tested oximes were found to have pK(a1) in this range. The FIA/UV-vis method provided rapid sample throughput, low sample consumption, high sensitivity and precision compared to standard UV-vis method. The hard calculation model was proposed as more accurate for pK(a1) calculation. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Calculating p-values and their significances with the Energy Test for large datasets

    NASA Astrophysics Data System (ADS)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
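    A minimal sketch of the T-value calculation, using the common Gaussian distance weighting psi(d) = exp(-d^2 / (2*delta^2)). The distance scale delta and the toy samples are assumptions, and this is not the optimised large-sample implementation the paper targets:

```python
import math

def energy_test_T(sample1, sample2, delta=1.0):
    """Energy-test T-value for two samples of equal-dimensional points,
    with Gaussian weighting psi(d) = exp(-d^2 / (2*delta^2)).
    Small T indicates the samples are compatible."""
    def psi(a, b):
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        return math.exp(-d2 / (2.0 * delta ** 2))

    def within(s):
        # Sum over distinct pairs, normalised by n*(n-1).
        n = len(s)
        return sum(psi(s[i], s[j])
                   for i in range(n) for j in range(i + 1, n)) / (n * (n - 1))

    def between(s, t):
        return sum(psi(a, b) for a in s for b in t) / (len(s) * len(t))

    return within(sample1) + within(sample2) - between(sample1, sample2)

same = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
shifted = [(x + 5.0, y + 5.0) for x, y in same]
print(energy_test_T(same, shifted) > energy_test_T(same, same))
```

    The pairwise sums scale quadratically with sample size, which is why a scaling law for the null distribution of T, rather than repeated permutation of large samples, is attractive.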

  16. Use of petroleum-based correlations and estimation methods for synthetic fuels

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.

    1980-01-01

    Correlations of hydrogen content with aromatics content, heat of combustion, and smoke point are derived for some synthetic fuels prepared from oil and coal syncrudes. Comparing the results of the aromatics content with correlations derived for petroleum fuels shows that the shale-derived fuels fit the petroleum-based correlations, but the coal-derived fuels do not. The correlations derived for heat of combustion and smoke point are comparable to some found for petroleum-based correlations. Calculated values of hydrogen content and of heat of combustion are obtained for the synthetic fuels by use of ASTM estimation methods. Comparisons of the measured and calculated values show biases in the equations that exceed the critical statistics values. Comparison of the measured hydrogen content by the standard ASTM combustion method with that by a nuclear magnetic resonance (NMR) method shows a decided bias. The comparison of the calculated and measured NMR hydrogen contents shows a difference similar to that found with petroleum fuels.

  17. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer's disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (standard and improved) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula for AD microarray data with either the GA or the ACO method. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by the GA and ACO methods. PMID:23369541

  18. A simple performance calculation method for LH2/LOX engines with different power cycles

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1973-01-01

    A simple method for the calculation of the specific impulse of an engine with a gas generator cycle is presented. The solution is obtained by a power balance between turbine and pump. Approximate equations for the performance of the combustion products of LH2/LOX are derived. Performance results are compared with solutions of different engine types.

  19. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  20. Radiative Heating Methodology for the Huygens Probe

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth

    2007-01-01

    The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.

  1. Comparing Institution Nitrogen Footprints: Metrics for Assessing and Tracking Environmental Impact

    PubMed Central

    Leach, Allison M.; Compton, Jana E.; Galloway, James N.; Andrews, Jennifer

    2017-01-01

    Abstract When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate nitrogen footprints using the Nitrogen Footprint Tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This article compares those seven institutions’ results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, amount of food served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The comparisons also pointed to differences in system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions’ ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. PMID:29350218

  2. Comparing Institution Nitrogen Footprints: Metrics for ...

    EPA Pesticide Factsheets

    When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate their nitrogen footprints using the nitrogen footprint tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This paper compares those seven institutions' results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, number of meals served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The results also revealed differences in the system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions' ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint.

  3. Evaluation of lung and chest wall mechanics during anaesthesia using the PEEP-step method.

    PubMed

    Persson, P; Stenqvist, O; Lundin, S

    2018-04-01

    Postoperative pulmonary complications are common, and lung and chest wall mechanics differ between patients. Individualised mechanical ventilation based on measurement of transpulmonary pressures would be a step forward. A previously described method evaluates lung and chest wall mechanics from a change of PEEP (ΔPEEP) and the calculated change in end-expiratory lung volume (ΔEELV). The aim of the present study was to validate this PEEP-step method (PSM) during general anaesthesia by comparing it with the conventional method using oesophageal pressure (PES) measurements. In 24 lung-healthy subjects (BMI 18.5-32), three different sizes of PEEP steps were performed during general anaesthesia and ΔEELVs were calculated. Transpulmonary driving pressure (ΔPL) for a tidal volume equal to each ΔEELV was measured using PES measurements and compared to ΔPEEP with limits of agreement and intraclass correlation coefficients (ICC). ΔPL calculated with both methods was compared with a Bland-Altman plot. Mean differences between ΔPEEP and ΔPL were <0.15 cm H2O, 95% limits of agreement -2.1 to 2.0 cm H2O, ICC 0.6-0.83. Mean differences between ΔPL calculated by the two methods were <0.2 cm H2O. The ratio of lung elastance to respiratory system elastance was 0.5-0.95. The large variation in mechanical properties among lung-healthy patients stresses the need for individualised ventilator settings based on measurements of lung and chest wall mechanics. The agreement between ΔPLs measured by the two methods during general anaesthesia supports the use of the non-invasive PSM in this patient population. NCT 02830516. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Level Density in the Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Myo, T.; Katō, K.

    2005-06-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM.

  5. Electronic structure of the Cu + impurity center in sodium chloride

    NASA Astrophysics Data System (ADS)

    Chermette, H.; Pedrini, C.

    1981-08-01

    The multiple-scattering Xα method is used to describe the electronic structure of Cu+ in sodium chloride. Several improvements are made to the conventional Xα calculation; in particular, the cluster approximation takes the external lattice potential into account. The "transition state" procedure is applied in order to obtain the various multiplet levels. The fine electronic structure of the impurity centers is obtained after a calculation of the spin-orbit interactions. These results are compared with those given by a modified charge-consistent extended Hückel method (Fenske-type calculation), and the merits of each method are discussed. The present calculation produces good quantitative agreement with experiment, mainly concerning the optical excitations and the emission mechanism of the Cu+ luminescent centers in NaCl.

  6. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented that extends the Markov chain method by treating a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of radiation transport as a stochastic process. Successive application of the procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that of a doubling calculation for homogeneous atmospheres; for an inhomogeneous atmosphere, the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.

  7. A method for including external feed in depletion calculations with CRAM and implementation into ORIGEN

    DOE PAGES

    Isotalo, Aarno E.; Wieselquist, William A.

    2015-05-15

    A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to handling time-dependent feed rates, the new solver adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is up to several times faster because it does not require the substepping needed by the original one.

  8. Coupled-rearrangement-channels calculation of the three-body system under the absorbing boundary condition

    NASA Astrophysics Data System (ADS)

    Iwasaki, M.; Otani, R.; Ito, M.; Kamimura, M.

    2016-06-01

    We formulate the absorbing boundary condition (ABC) in the coupled-rearrangement-channels variational method (CRCVM) for the three-body problem. The absorbing potential is introduced in a system of three identical bosons, on which the boson symmetry is explicitly imposed by considering the rearrangement channels. The resonance parameters and the strength of the monopole breakup are calculated by the CRCVM + ABC method, and the results are compared with the complex scaling method (CSM). We find that the results of the ABC method are consistent with the CSM results. The effect of the boson symmetry, which is often neglected in calculations of the triple-α reaction, is also discussed.

  9. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  10. Nursing students' mathematic calculation skills.

    PubMed

    Rainboth, Lynde; DeMasi, Chris

    2006-12-01

    This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing students' learning outcomes. During a 4-week student teaching period, a convenience sample of 54 sophomore-level nursing students were required to complete calculation assignments, were taught one calculation method, and were mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and its outcome.

  11. Look Before You Leap: What Are the Obstacles to Risk Calculation in the Equestrian Sport of Eventing?

    PubMed Central

    O’Brien, Denzil

    2016-01-01

    Simple Summary This paper examines a number of methods for calculating injury risk for riders in the equestrian sport of eventing, and suggests that the primary locus of risk is the action of the horse jumping, and the jump itself. The paper argues that risk calculation should therefore focus first on this locus. Abstract All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps. PMID:26891334

  12. Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.

    2017-10-01

    There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on the finite-difference solutions of the Boltzmann nonlinear kinetic equation and on the solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of the method depends on the problem to be solved and on parameters of calculated flows. Qualitative theoretical arguments help to determine the range of parameters of effectively solved problems for each method; however, it is advisable to perform comparative tests of calculations of the classical problems performed by different methods and with different parameters to have quantitative confirmation of this reasoning. The paper provides the results of the calculations performed by the authors with the help of the Direct Simulation Monte Carlo method and finite-difference methods of solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are made on selecting a particular method for flow simulations in various ranges of flow parameters.

  13. Suitability of the echo-time-shift method as laboratory standard for thermal ultrasound dosimetry

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Tina; Georg, Olga; Haller, Julian; Jenderka, Klaus-Vitold

    2017-03-01

    Ultrasound therapy is a promising, non-invasive application with the potential to significantly improve cancer therapies such as surgery, viro- or immunotherapy. This therapy needs faster, cheaper and easier-to-handle quality assurance tools for therapy devices, as well as means to verify treatment plans and perform dosimetry; the current lack of such tools limits the comparability and safety of treatments. Accurate spatial and temporal temperature maps could be used to overcome these shortcomings. In this contribution, first results of suitability and accuracy investigations of the echo-time-shift method for two-dimensional temperature mapping during and after sonication are presented. The analysis methods used to calculate time-shifts were a discrete frame-to-frame and a discrete frame-to-base-frame algorithm, as well as a sigmoid fit for temperature calculation. In the future, accuracy could be significantly enhanced by using continuous methods for time-shift calculation. Further improvements can be achieved by refining the filtering algorithms and the interpolation of sampled diagnostic ultrasound data. The echo-time-shift method may thus be a comparatively accurate, fast and affordable method for laboratory and clinical quality control.

  14. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.

  15. Instanton rate constant calculations close to and above the crossover temperature.

    PubMed

    McConnell, Sean; Kästner, Johannes

    2017-11-15

    Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that rate constants calculated purely from solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path are not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate the known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H2 + OH → H + H2O. © 2017 Wiley Periodicals, Inc.

  16. Method to determine the optimal constitutive model from spherical indentation tests

    NASA Astrophysics Data System (ADS)

    Zhang, Tairui; Wang, Shang; Wang, Weiqiang

    2018-03-01

    The limitations of current indentation theories were investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the power law and the linear law, were used in finite element (FE) calculations, and a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth were used to fit the best parameters in each constitutive model, while the data from the further-loading part were compared with those from FE calculations; the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model that takes previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration was proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials (304, 321, SA508, SA533, 15CrMoR, and Fv520B) were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study.

  17. Structure and vibrational spectra of melaminium bis(trifluoroacetate) trihydrate: FT-IR, FT-Raman and quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Sangeetha, V.; Govindarajan, M.; Kanagathara, N.; Marchewka, M. K.; Gunasekaran, S.; Anbalagan, G.

    Melaminium bis(trifluoroacetate) trihydrate (MTFA), an organic material, has been synthesized, and single crystals of MTFA have been grown by the slow solvent evaporation method at room temperature. X-ray powder diffraction analysis confirms that the MTFA crystal belongs to the monoclinic system with space group P2/c. The molecular geometry, vibrational frequencies and intensities of the vibrational bands have been interpreted with the aid of structure optimization based on the density functional theory (DFT) B3LYP method with 6-311G(d,p) and 6-311++G(d,p) basis sets. The X-ray diffraction data have been compared with the data of the optimized molecular structure. The theoretical results show that the crystal structure can be reproduced by the optimized geometry, and the vibrational frequencies show good agreement with the experimental values. The nuclear magnetic resonance (NMR) chemical shifts of the molecule have been calculated by the gauge-independent atomic orbital (GIAO) method and compared with experimental results. HOMO-LUMO and other related molecular and electronic properties are calculated. The Mulliken and NBO charges have also been calculated and interpreted.

  18. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.

  19. A shortest-path graph kernel for estimating gene product semantic similarity.

    PubMed

    Alvarez, Marco A; Qi, Xiaojun; Yan, Changhui

    2011-07-29

    Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources that are not part of the ontology. Consequently, changes in these external resources, such as biased term distributions caused by shifts in hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e., independent of external knowledge. We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO consisting of all the GO terms annotating it. A shortest-path graph kernel is then used to compute the similarity between two such graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to the GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and the EC similarity correlation coefficient are used to measure performance, but insignificant when the Pfam similarity correlation coefficient is used. Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO when calculating semantic similarity between gene products. It provides an alternative to methods that use external resources, with performance comparable to existing "intrinsic" methods.
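
    A toy version of the intrinsic idea can be sketched as follows: represent each gene product by its annotating terms and the GO edges among them, compute all-pairs shortest path lengths by breadth-first search, and compare the two path-length collections. The function names and the unnormalized kernel below are illustrative simplifications of spgk, not the published method:

```python
from collections import deque

def shortest_paths(edges, nodes):
    """All-pairs shortest path lengths within an induced subgraph (BFS)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    lengths = {}
    for src in nodes:
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        for dst, d in seen.items():
            if src < dst:                # store each pair once
                lengths[(src, dst)] = d
    return lengths

def spgk_similarity(paths1, paths2):
    """Toy shortest-path kernel: count pairs of paths with equal length.
    (The real spgk uses a normalized kernel over the induced GO subgraphs.)"""
    return sum(1 for d1 in paths1.values()
                 for d2 in paths2.values() if d1 == d2)
```

    On a three-term chain a-b-c, the path lengths are {(a,b): 1, (b,c): 1, (a,c): 2}, and the self-similarity of that set is 5 matching pairs.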

  20. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computational load and vast memory requirements. To address these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved look-up-table (LUT) method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used to calculate the holograms of points at different depth positions, instead of using layer-based methods. The proposed method is suitable for arbitrarily sampled objects and offers lower memory usage and higher computational efficiency than other CGH methods. Its effectiveness is validated by numerical and optical experiments.
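
    The role of the transverse look-up table can be illustrated with a minimal point-source (Fresnel zone plate) sketch: the XY distance terms are precomputed once, and the depth only rescales them. The function names, the paraxial phase model, and the omission of the quadrant-symmetry optimization are assumptions for illustration, not the paper's GPU implementation:

```python
import math

def build_xy_lut(nx, ny, pitch):
    """Precompute squared transverse distances r_xy^2 = dx^2 + dy^2 once.
    (Zone-plate symmetry would let one quadrant serve all four; the full
    grid is kept here for clarity.)"""
    return [[(ix * pitch) ** 2 + (iy * pitch) ** 2
             for ix in range(nx)] for iy in range(ny)]

def point_hologram_phase(lut, z, wavelength):
    """Paraxial Fresnel zone-plate phase for a point at depth z, reusing
    the XY LUT; changing z only rescales the precomputed entries."""
    k = 2 * math.pi / wavelength
    return [[(k * (z + r2 / (2 * z))) % (2 * math.pi) for r2 in row]
            for row in lut]
```

    Holograms for points at different depths can then reuse the same LUT, which is the memory/time saving the abstract alludes to.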

  1. Approximate methods in gamma-ray skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faw, R.E.; Roseberry, M.L.; Shultis, J.K.

    1985-11-01

    Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.

  2. Comparison Of Reaction Barriers In Energy And Free Energy For Enzyme Catalysis

    NASA Astrophysics Data System (ADS)

    Andrés Cisneros, G.; Yang, Weitao

    Reaction paths on potential energy surfaces obtained from QM/MM calculations of enzymatic or solution reactions depend on the starting structure employed for the path calculations. The free energies associated with these paths should be more reliable for studying reaction mechanisms, because statistical averages are used. To investigate this, the role of enzyme environment fluctuations on reaction paths has been studied with an ab initio QM/MM method for the first step of the reaction catalyzed by 4-oxalocrotonate tautomerase (4OT). Four minimum energy paths (MEPs), determined with two different methods, are compared. The first path (path A) was determined with a procedure that combines the nudged elastic band (NEB) method and a second-order parallel path optimizer recently developed in our group. The second path (path B) was also determined by the combined procedure; however, the enzyme environment was relaxed by molecular dynamics (MD) simulations. The third path (path C) was determined with the coordinate driving (CD) method, using the enzyme environment from path B. We compare these three paths to a previously determined path (path D) obtained with the CD method. In all four cases the QM/MM-FE method (Y. Zhang et al., JCP, 112, 3483) was employed to obtain the free energy barriers. In the combined procedure, the reaction path is approximated by a small number of images that are optimized to the MEP in parallel, which reduces the computational cost but does not allow the FEP calculation on the MEP. To perform FEP calculations on these paths, we introduce a modification to the NEB method that enables the addition of as many extra images to the path as needed for the FEP calculations. The calculated potential energy barriers differ between the paths by as much as 5.17 kcal/mol, whereas the largest free energy barrier difference is 1.58 kcal/mol. These results show the importance of including environment fluctuations in the calculation of enzymatic activation barriers.

  3. FT-IR, FT-Raman, NMR spectra, density functional computations of the vibrational assignments (for monomer and dimer) and molecular geometry of anticancer drug 7-amino-2-methylchromone

    NASA Astrophysics Data System (ADS)

    Mariappan, G.; Sundaraganesan, N.

    2014-04-01

    Vibrational assignments for the 7-amino-2-methylchromone (abbreviated as 7A2MC) molecule, obtained using a combination of experimental vibrational spectroscopic measurements and ab initio computational methods, are reported. The optimized geometry, intermolecular hydrogen bonding, first-order hyperpolarizability, and harmonic vibrational wavenumbers of 7A2MC have been investigated with the B3LYP density functional theory method. The calculated molecular geometry parameters, the theoretically computed vibrational frequencies for the monomer and dimer, and the relative peak intensities were compared with experimental data. DFT calculations using the B3LYP method and the 6-31+G(d,p) basis set were found to yield results very comparable to the experimental IR and Raman spectra. Detailed vibrational assignments were performed with DFT calculations and the potential energy distribution (PED) obtained from the Vibrational Energy Distribution Analysis (VEDA) program. A Natural Bond Orbital (NBO) study revealed the characteristics of the electronic delocalization of the molecular structure. 13C and 1H NMR spectra have been recorded, and the corresponding nuclear magnetic resonance chemical shifts have been calculated using the gauge-independent atomic orbital (GIAO) method. Furthermore, all calculated values were analyzed by linear fitting with correlation coefficients and show strong correlation with the experimental data.

  4. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method, based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock (GWHF) equations. The discretization of the GWHF equations in this procedure is based on a mesh of points that is not equally distributed, in contrast with the original GCHF method. The atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets for molecular calculations; the maximum error relative to numerical values is only 0.788 mHartree, for indium. Test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree of the cc-pV5Z basis sets are attained with our contracted bases using a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good agreement with experimental and cc-pV5Z results. Most importantly, our generator coordinate basis sets required only a tiny fraction of the computational time of the B3LYP/cc-pV5Z calculations.

  5. Wind turbine sound pressure level calculations at dwellings.

    PubMed

    Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits

    2016-03-01

    This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer-supplied and measured wind turbine sound power levels were used to calculate outdoor SPLs at 1238 dwellings using ISO [(1996). ISO 9613-2-Acoustics] and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
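
    As a rough illustration of this kind of propagation calculation, the sketch below combines geometric divergence with a linear atmospheric absorption term. It is a simplified free-field formula in the spirit of ISO 9613-2, not the octave-band procedure used in the study (which adds ground, barrier, and meteorological corrections); the absorption coefficient is an assumed placeholder:

```python
import math

def spl_at_distance(lw, distance_m, alpha_db_per_m=0.005, a_ground=0.0):
    """Simplified outdoor SPL estimate from a sound power level lw (dB):
    spherical spreading plus linear atmospheric absorption."""
    a_div = 20.0 * math.log10(distance_m) + 11.0   # geometric divergence
    a_atm = alpha_db_per_m * distance_m            # assumed absorption rate
    return lw - a_div - a_atm - a_ground
```

    For example, a source with a 105 dB sound power level evaluated at 500 m yields roughly 37.5 dB under these assumptions.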

  6. Methanol in its own gravy. A PCM study for simulation of vibrational spectra.

    PubMed

    Billes, Ferenc; Mohammed-Ziegler, Ildikó; Mikosch, Hans

    2011-05-07

    To study both hydrogen-bond and dipole-dipole interactions between methanol molecules (self-association), the geometries of clusters of increasing numbers of methanol molecules (n = 1, 2, 3) were optimized and their vibrational frequencies were calculated with quantum chemical methods. Besides these B3LYP/6-311G** calculations, PCM calculations were also performed for all systems at the same level of theory and basis set, to account for the effect of the liquid continuum on the cluster properties. On comparison, the measured and calculated infrared spectra are in good agreement.

  7. Theoretical study on the dissociation energies, ionization potentials and electron affinities of three perfluoroalkyl iodides

    NASA Astrophysics Data System (ADS)

    Cheng, Li; Shen, Zuochun; Lu, Jianye; Gao, Huide; Lü, Zhiwei

    2005-11-01

    The dissociation energies, ionization potentials, and electron affinities of three perfluoroalkyl iodides, CF3I, C2F5I, and i-C3F7I, are calculated accurately with the B3LYP, MPn (n = 2-4), QCISD, QCISD(T), CCSD, and CCSD(T) methods. The calculations use the large-core correlation-consistent pseudopotential basis set (SDB-aug-cc-pVTZ) for the iodine atom. In all energy calculations, the zero-point vibrational energy is corrected, and in the dissociation energy calculation the basis set superposition error is corrected by the counterpoise method. The theoretical results are compared with experimental values.

  8. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.

  9. Accuracy of contacts calculated from 3D images of occlusal surfaces.

    PubMed

    DeLong, R; Knorr, S; Anderson, G C; Hodges, J; Pintado, M R

    2007-06-01

    Compare occlusal contacts calculated from 3D virtual models created from clinical records to contacts identified clinically using shimstock and transillumination. Upper and lower full-arch alginate impressions and vinyl polysiloxane centric interocclusal records were made for 12 subjects. Stone casts made from the alginate impressions and the interocclusal records were optically scanned, and three-dimensional virtual models of the dental arches and interocclusal records were constructed using the Virtual Dental Patient software. Contacts calculated from the virtual interocclusal records and from the aligned upper and lower virtual arch models were compared to those identified clinically using 0.01 mm shimstock and transillumination of the interocclusal record. Virtual contacts and transillumination contacts were compared, by anatomical region and by contacting tooth pairs, to shimstock contacts. Because there is no accepted standard for identifying occlusal contacts, methods were compared in pairs, with one labeled the "standard" and the second the "test". Accuracy was defined as the number of contacts and non-contacts of the "test" that agreed with the "standard", divided by the total number of contacts and non-contacts of the "standard". The accuracy of occlusal contacts calculated from virtual interocclusal records and from aligned virtual casts, compared to transillumination, was 0.87+/-0.05 and 0.84+/-0.06 by region and 0.95+/-0.07 and 0.95+/-0.05 by tooth, respectively. Comparisons with shimstock gave 0.85+/-0.15 (record), 0.84+/-0.14 (casts), and 0.81+/-0.17 (transillumination). The virtual record, aligned virtual arches, and transillumination methods of identifying contacts are equivalent, and show better agreement with each other than with the shimstock method.
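
    The accuracy definition quoted above reduces to a simple agreement fraction between two sets of contact/non-contact calls; a minimal sketch, with hypothetical site labels:

```python
def contact_accuracy(test, standard):
    """Accuracy as defined in the study (paraphrased): the fraction of the
    "standard" method's contact/non-contact calls that the "test" method
    matches. Both inputs map site -> True (contact) / False (non-contact)."""
    agree = sum(1 for site, call in standard.items()
                if test.get(site) == call)
    return agree / len(standard)
```

    For example, if a test method matches the standard on two of three sites, its accuracy is 2/3 ≈ 0.67 against that standard.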

  10. Comparison of the Effects of Cooperative Learning and Traditional Learning Methods on the Improvement of Drug-Dose Calculation Skills of Nursing Students Undergoing Internships

    ERIC Educational Resources Information Center

    Basak, Tulay; Yildiz, Dilek

    2014-01-01

    Objective: The aim of this study was to compare the effectiveness of cooperative learning and traditional learning methods on the development of drug-calculation skills. Design: Final-year nursing students ("n" = 85) undergoing internships during the 2010-2011 academic year at a nursing school constituted the study group of this…

  11. WE-E-18A-03: How Accurately Can the Peak Skin Dose in Fluoroscopy Be Determined Using Indirect Dose Metrics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, A; Pasciak, A

    Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine whether PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from the procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared, along with RAK, to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of the uncertainty associated with film dosimetry, and factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: 'extreme' SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology. Film dosimetry is likely an unnecessary effort for these types of procedures.

  12. Electronic Structure Calculation of Permanent Magnets using the KKR Green's Function Method

    NASA Astrophysics Data System (ADS)

    Doi, Shotaro; Akai, Hisazumi

    2014-03-01

    The electronic structure and magnetic properties of permanent magnet materials, especially Nd2Fe14B, are investigated theoretically using the KKR Green's function method. Important physical quantities in magnetism, such as the magnetic moment, Curie temperature, and anisotropy constant, obtained from electronic structure calculations in both the atomic-sphere-approximation and full-potential treatments, are compared with past band structure calculations and experiments. The site preference of heavy rare-earth impurities is also evaluated through the calculation of formation energies using the coherent potential approximation. Further, the development of an electronic structure calculation code using the screened KKR method for large supercells, aimed at studying the electronic structure of realistic microstructures (e.g., grain boundary phases), is introduced with some test calculations.

  13. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function in this convolution integral is performed analytically using a Bessel function expansion. Compared with the numerical approach of Fourier transforming the kernel function by fast Fourier transform, the analytical solution drastically reduces the calculation time and memory usage at no additional cost. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  14. Time-independent lattice Boltzmann method calculation of hydrodynamic interactions between two particles

    NASA Astrophysics Data System (ADS)

    Ding, E. J.

    2015-06-01

    The time-independent lattice Boltzmann algorithm (TILBA) is developed to calculate the hydrodynamic interactions between two particles in a Stokes flow. The TILBA is distinguished from the traditional lattice Boltzmann method in that a background matrix (BGM) is generated prior to the calculation. The BGM, once prepared, can be reused for calculations for different scenarios, and the computational cost for each such calculation will be significantly reduced. The advantage of the TILBA is that it is easy to code and can be applied to any particle shape without complicated implementation, and the computational cost is independent of the shape of the particle. The TILBA is validated and shown to be accurate by comparing calculation results obtained from the TILBA to analytical or numerical solutions for certain problems.

  15. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty and Makabe-Morimura approximations are extremely poor compared with this approximation. 4 tables.
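
    The flavor of such numerical comparisons can be reproduced by checking a normal approximation against the exact Poisson CDF. The sketch below uses the classic normal approximation with continuity correction; the specific Kao, power, and Wilson-Hilferty transformations studied in the paper are not reproduced here:

```python
import math

def poisson_cdf(k, lam):
    """Exact P(X <= k) for X ~ Poisson(lam), by direct summation
    (numerically stable enough for moderate lam)."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_normal_approx(k, lam):
    """Classic normal approximation with continuity correction."""
    return normal_cdf((k + 0.5 - lam) / math.sqrt(lam))
```

    For lam = 10 and k = 10, the exact CDF is about 0.583 and this simple approximation is within a few hundredths of it; the paper's transformed approximations aim to shrink exactly this kind of gap.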

  16. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, owing to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in the case of hardware changes. By contrast, the measurement-based method characterizes the beam properties accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setups. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D_model using CCCS; (2) calculate D_ΔDRT using ΔDRT; (3) combine them as D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan, and the results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
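
    The three-step combination can be sketched abstractly. The helper names and the 1D dose arrays below are hypothetical stand-ins; in practice D_model comes from a CCCS calculation and the correction table is commissioned from water-phantom measurements:

```python
def commission_delta(measured, model_calc):
    """Tabulate measurement-minus-model differences on reference setups;
    these become the commissioning data for the correction calculator."""
    return [m - c for m, c in zip(measured, model_calc)]

def hybrid_dose(d_model, d_delta):
    """Hybrid dose: model-based dose plus the measurement-derived
    correction, i.e. D = D_model + D_delta."""
    return [dm + dd for dm, dd in zip(d_model, d_delta)]
```

    The design point is that the correction term absorbs machine-specific beam characteristics while the model term handles heterogeneity, so recommissioning for a new machine only refreshes the tabulated differences.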

  17. Methods for calculating dietary energy density in a nationally representative sample

    PubMed Central

    Vernarelli, Jacqueline A.; Mitchell, Diane C.; Rolls, Barbara J.; Hartman, Terryl J.

    2013-01-01

    There has been growing interest in examining dietary energy density (ED, kcal/g) as it relates to various health outcomes. Consuming a diet low in ED has been recommended by the 2010 Dietary Guidelines, as well as by other agencies, as a dietary approach for disease prevention. Translating this recommendation into practice, however, is difficult. Currently there is no standardized method for calculating dietary ED, as it can be calculated with foods alone or with a combination of foods and beverages. Certain items may be defined as either a food or a beverage (e.g., meal replacement shakes) and require special attention. National survey data are an excellent resource for evaluating factors that are important to dietary ED calculation. The National Health and Nutrition Examination Survey (NHANES) nutrient and food database does not include an ED variable, so researchers must calculate ED independently. The objective of this study was to inform the selection of a standardized ED calculation method by comparing and contrasting methods for ED calculation. The present study evaluates all consumed items and defines foods and beverages based on both USDA food codes and how each item was consumed. Results are presented as mean ED for the different calculation methods, stratified by population demographics (e.g., age, sex). Using United States Department of Agriculture (USDA) food codes in the 2005-2008 NHANES, a standardized method for calculating dietary ED can be derived and then adopted by other researchers for consistency across studies. PMID:24432201
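
    The calculation methods being contrasted (foods only versus foods plus beverages) amount to different filters over the consumed items; a minimal sketch, with a hypothetical item representation of (kcal, grams, is_beverage) tuples:

```python
def energy_density(items, include_beverages=False):
    """Dietary energy density (kcal/g) under two common definitions:
    foods only (default) or foods plus beverages.
    Each item is a (kcal, grams, is_beverage) tuple."""
    kept = [(k, g) for k, g, bev in items
            if include_beverages or not bev]
    kcal = sum(k for k, g in kept)
    grams = sum(g for k, g in kept)
    return kcal / grams
```

    Because beverages add weight with relatively few calories, including them typically lowers the computed ED, which is why the definition must be standardized before studies can be compared.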

  18. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, H.B.; Einerson, C.J.; Watkins, A.D.

    1987-08-10

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.
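
    The control loop described in the patent can be sketched abstractly: predict the welding current from the commanded settings, compare with the measurement, and correct the weld speed and wire feed rate. The linear current model, coefficients, and gains below are invented placeholders for illustration, not the patented algorithmic functions:

```python
def expected_current(wire_feed_rate):
    """Hypothetical linear model of arc current versus wire feed rate,
    standing in for the patent's 'algorithmic function means'."""
    return 40.0 + 18.0 * wire_feed_rate

def correct_settings(speed, feed_rate, measured_current, gain=0.05):
    """One feedback iteration: compare measured to expected current and
    nudge travel speed and feed rate proportionally to the error."""
    error = measured_current - expected_current(feed_rate)
    new_speed = speed - gain * error * 0.1     # placeholder speed gain
    new_feed = feed_rate - gain * error / 18.0  # invert the current model
    return new_speed, new_feed
```

    A measured current above the expected value drives both settings down, mirroring the measure/compare/correct cycle in the claim.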

  19. Studies on transonic Double Circular Arc (DCA) profiles of axial flow compressor calculations of profile design

    NASA Astrophysics Data System (ADS)

    Rugun, Y.; Zhaoyan, Q.

    1986-05-01

    In this paper, the concepts and methods for the design of high-Mach-number axial-flow-compressor airfoils are described. Correlation equations for the main parameters, such as airfoil and cascade geometry, stream parameters, and compressor wake characteristics, are provided. Several curves and charts are given for obtaining the cascade total-pressure-loss coefficients with the simplified calculation method. The test results and calculated values are compared and found to be in good agreement.

  20. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Sensitivity results are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  1. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Sensitivity results are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  2. Calculation of Reaction Forces in the Boiler Supports Using the Method of Equivalent Stiffness of Membrane Wall

    PubMed Central

    Sertić, Josip; Kozak, Dražan; Samardžić, Ivan

    2014-01-01

    The values of the reaction forces in the boiler supports are the basis for dimensioning the bearing steel structure of a steam boiler. In this paper, the method of equivalent stiffness of the membrane wall is proposed for the calculation of these reaction forces. The method of equalizing displacements was applied as the method of homogenizing the membrane wall stiffness. Using the "Milano" boiler as an example, the reactions in the supports were calculated with the finite element method for the real geometry discretized by shell finite elements. A second calculation was performed under the assumption of ideally stiff membrane walls, and a third using the method of equivalent stiffness of the membrane wall, in which the membrane walls are approximated by an equivalent orthotropic plate. The approximation of the membrane wall stiffness is achieved using the elasticity matrix of the equivalent orthotropic plate at the finite element level. The obtained results were compared, and the advantages of using the method of equivalent stiffness of the membrane wall for calculating the reactions in the boiler supports are emphasized. PMID:24959612

  3. Development of virtual patient models for permanent implant brachytherapy Monte Carlo dose calculations: interdependence of CT image artifact mitigation and tissue assignment.

    PubMed

    Miksys, N; Xu, C; Beaulieu, L; Thomson, R M

    2015-08-07

    This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS, ranging from the AAPM-ESTRO-ABS TG-186 basic approach of assigning uniform density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user-code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide artifact mitigation comparable to the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower compared to the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than those of prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies of various permanent implant brachytherapy treatments.

  4. Hybrid transfer-matrix FDTD method for layered periodic structures.

    PubMed

    Deinega, Alexei; Belousov, Sergei; Valuev, Ilya

    2009-03-15

    A hybrid transfer-matrix finite-difference time-domain (FDTD) method is proposed for modeling the optical properties of finite-width planar periodic structures. This method can also be applied for calculation of the photonic bands in infinite photonic crystals. We describe the procedure of evaluating the transfer-matrix elements by a special numerical FDTD simulation. The accuracy of the new method is tested by comparing computed transmission spectra of a 32-layered photonic crystal composed of spherical or ellipsoidal scatterers with the results of direct FDTD and layer-multiple-scattering calculations.

  5. D-Wave Electron-H, -He+, and -Li2+ Elastic Scattering and Photoabsorption in P States of Two-Electron Systems

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.

    2014-01-01

    In previous papers [A. K. Bhatia, Phys. Rev. A 85, 052708 (2012); 86, 032709 (2012); 87, 042705 (2013)] electron-H, -He+, and -Li2+ P-wave scattering phase shifts were calculated using the variational polarized orbital theory. This method is now extended to singlet and triplet D-wave scattering in the elastic region. The long-range correlations are included in the Schrodinger equation by using the method of polarized orbitals variationally. Phase shifts are compared to those obtained by other methods. The present calculation provides results which are rigorous lower bounds to the exact phase shifts. Using the presently calculated D-wave and previously calculated S-wave continuum functions, photoionization of the singlet and triplet P states of He and Li+ is also calculated, along with the radiative recombination rate coefficients at various electron temperatures.

  6. Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel

    NASA Astrophysics Data System (ADS)

    Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa

    This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground, and on the ground surface. The vibration generated in the tunnel was calculated by means of a train/track/tunnel interaction model and was compared with the measurement results. Wave propagation in the ground was calculated using an empirical model, proposed on the basis of the relationship between frequency and the material damping coefficient α, in order to predict attenuation in the ground with its frequency characteristics taken into account. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, combining the train/track/tunnel interaction model with the wave propagation model, is applicable to the prediction of train-induced vibration propagated from railway tunnels.

  7. Calculation of AC loss in two-layer superconducting cable with equal currents in the layers

    NASA Astrophysics Data System (ADS)

    Erdogan, Muzaffer

    2016-12-01

    A new method is proposed for calculating the AC loss of two-layer SC power transmission cables with the commercial software Comsol Multiphysics, relying on the assumption of equal partition of the current between the layers. Applying the method to a cable composed of two coaxial cylindrical SC tubes, the results are in good agreement with the analytical ones of the duoblock model. The method is then applied to a cable composed of a cylindrical copper former surrounded by two coaxial cylindrical layers of superconducting tapes embedded in an insulating medium, and the AC losses of the tape-on-tape and tape-on-gap configurations are compared. Good agreement between the duoblock model and the numerical results is observed for the tape-on-gap cable.

  8. Tight-binding approximations to time-dependent density functional theory — A fast approach for the calculation of electronically excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rüger, Robert, E-mail: rueger@scm.com; Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam; Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig

    2016-05-14

    We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.

  9. Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianwei; Zhang, Yong

    2014-12-07

    We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.

  10. An indirect approach to the extensive calculation of relationship coefficients

    PubMed Central

    Colleau, Jean-Jacques

    2002-01-01

    A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102
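
    For contrast, the "conventional direct approach" that the indirect method is benchmarked against can be sketched with the classical tabular (recursive) construction of the numerator relationship matrix; the tiny pedigree below is invented purely for illustration:

```python
import numpy as np

def numerator_relationship_matrix(pedigree):
    """Tabular method: pedigree is a list of (sire, dam) index pairs
    (None for unknown parents), ordered so parents precede offspring."""
    n = len(pedigree)
    A = np.zeros((n, n))
    for i, (s, d) in enumerate(pedigree):
        # diagonal: 1 + inbreeding coefficient = 1 + 0.5 * a(sire, dam)
        A[i, i] = 1.0 + (0.5 * A[s, d] if s is not None and d is not None else 0.0)
        for j in range(i):
            a = 0.0
            if s is not None:
                a += 0.5 * A[j, s]   # half the relationship to the sire
            if d is not None:
                a += 0.5 * A[j, d]   # half the relationship to the dam
            A[i, j] = A[j, i] = a
    return A

# Two unrelated founders (0, 1) and two full-sib offspring (2, 3)
ped = [(None, None), (None, None), (0, 1), (0, 1)]
A = numerator_relationship_matrix(ped)
```

    The direct approach stores and fills this n-by-n table explicitly, which is exactly the cost the indirect matrix-vector approach avoids for large populations.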

  11. Controlled study on the cognitive and psychological effect of coloring and drawing in mild Alzheimer's disease patients.

    PubMed

    Hattori, Hideyuki; Hattori, Chikako; Hokao, Chieko; Mizushima, Kumiko; Mase, Toru

    2011-10-01

    Art therapy has been reported to have effects on mental symptoms in patients with dementia, and its usefulness is expected. We performed a controlled trial to evaluate the usefulness of art therapy compared with calculation training in patients with mild Alzheimer's disease. Thirty-nine patients with Alzheimer's disease showing slightly decreased cognitive function, allowing treatment on an outpatient basis, were randomly allocated to art therapy and control (learning therapy using calculation) groups, and the intervention was performed once weekly for 12 weeks. Comparison of the results of evaluation before and after therapy in each group showed significant improvement on the Apathy Scale in the art therapy group (P=0.014) and in the Mini-Mental State Examination score (P=0.015) in the calculation drill group, but no significant differences in the other items between the two groups. Patients showing a 10% or greater improvement were compared between the two groups. Significant improvement in quality of life (QOL) was observed in the art therapy group compared with the calculation training group (P=0.038; odds ratio, 5.54). ANOVA concerning improvement after each method revealed no significant difference in any item. These results suggested improvement in at least the vitality and the QOL of patients with mild Alzheimer's disease after art therapy compared with calculation training, but no marked comprehensive differences between the two methods. In non-pharmacological therapy for dementia, studies attaching importance to the motivation and satisfaction of patients and their family members, rather than the superiority of methods, may be necessary in the future. © 2011 Japan Geriatrics Society.

  12. Molecular structure and the EPR calculation of the gas phase succinonitrile molecule

    NASA Astrophysics Data System (ADS)

    Kepceoǧlu, A.; Kılıç, H. Ş.; Dereli, Ö.

    2017-02-01

    Succinonitrile (i.e. butanedinitrile) is a colorless nitrile compound that has a plastic crystal structure and can be used in gel polymer batteries as a solid-state solvent electrolyte. Prior to the molecular structure calculation of the succinonitrile molecule, a conformer analysis was performed using the semi-empirical PM3 Hamiltonian, and eight different conformer structures were determined. The molecular structure and energy-related properties of the lowest-energy conformer were calculated using the DFT (B3LYP) method with the 6-311++G(d,p) basis set. Possible radicals that can be formed experimentally were modeled in this study. EPR parameters of these model radicals were calculated and then compared with those obtained experimentally.

  13. Assessment of formulas for calculating critical concentration by the agar diffusion method.

    PubMed Central

    Drugeon, H B; Juvin, M E; Caillon, J; Courtieu, A L

    1987-01-01

    The critical concentration of antibiotic was calculated by using the agar diffusion method with disks containing different loads of antibiotic. Different calculation formulas (based on Fick's law) are currently in use, devised by Cooper and Woodman (the best known) and by Vesterdal. The results obtained with the formulas were compared with the MIC results (obtained by the agar dilution method). A total of 91 strains and two cephalosporins (cefotaxime and ceftriaxone) were studied. The formula of Cooper and Woodman led to critical concentrations that were higher than the MIC, whereas concentrations obtained with the Vesterdal formula were closer to the MIC. The critical concentration was independent of method parameters (dilution, for example). PMID:3619419

  14. Skyshine analysis using energy and angular dependent dose-contribution fluxes obtained from air-over-ground adjoint calculation.

    PubMed

    Uematsu, Mikio; Kurosawa, Masahiko

    2005-01-01

    A generalised and convenient skyshine dose analysis method has been developed based on a forward-adjoint folding technique. In the method, the air penetration data were prepared by performing an adjoint DOT3.5 calculation with a cylindrical air-over-ground geometry having an adjoint point source (the importance of unit flux to the dose rate at the detection point) in the centre. The accuracy of the present method was verified by comparison with a DOT3.5 forward calculation. The adjoint flux data can be used as generalised radiation skyshine data for all sorts of nuclear facilities. Moreover, the present method supplies a wealth of energy- and angle-dependent contribution flux data, which will be useful for the detailed shielding design of facilities.
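
    In discretized form, the forward-adjoint folding at the heart of this method reduces to a bin-wise product of source strength and adjoint flux summed over energy groups and angular bins; all numbers below are invented purely for illustration:

```python
import numpy as np

# Forward-adjoint folding: the detector dose rate is the sum over
# (energy group, angular bin) of the source emission rate times the
# adjoint flux (importance of a unit source in that bin to the dose).
adjoint_flux = np.array([[1.0e-12, 4.0e-13],
                         [6.0e-13, 2.0e-13]])  # rows: energy groups, cols: angular bins
source = np.array([[2.0e9, 1.0e9],
                   [5.0e8, 0.0]])              # emissions per second in each bin

dose_rate = float(np.sum(adjoint_flux * source))
```

    Because the adjoint data are computed once for the air-over-ground geometry, the same folding can be reused for any source distribution, which is what makes the tabulated importance data "generalised".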

  15. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided, and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.

  16. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least a factor of eight. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  17. An Adaptive Nonlinear Basal-Bolus Calculator for Patients With Type 1 Diabetes

    PubMed Central

    Boiroux, Dimitri; Aradóttir, Tinna Björk; Nørgaard, Kirsten; Poulsen, Niels Kjølstad; Madsen, Henrik; Jørgensen, John Bagterp

    2016-01-01

    Background: Bolus calculators help patients with type 1 diabetes to mitigate the effect of meals on their blood glucose by administering a large amount of insulin at mealtime. Intraindividual changes in patients' physiology and nonlinearity in insulin-glucose dynamics pose a challenge to the accuracy of such calculators. Method: We propose a method based on a continuous-discrete unscented Kalman filter to continuously track the postprandial glucose dynamics and the insulin sensitivity. We augment the Medtronic Virtual Patient (MVP) model to simulate noise-corrupted data from a continuous glucose monitor (CGM). The basal rate is determined by calculating the steady state of the model and is adjusted once a day before breakfast. The bolus size is determined by optimizing the postprandial glucose values based on an estimate of the insulin sensitivity and states, as well as the announced meal size. Following meal announcements, the meal compartment and the meal time constant are estimated; otherwise, insulin sensitivity is estimated. Results: We compare the performance of a conventional linear bolus calculator with the proposed bolus calculator. The proposed basal-bolus calculator significantly improves the time spent in the glucose target range (P < .01) compared to the conventional bolus calculator. Conclusion: An adaptive nonlinear basal-bolus calculator can efficiently compensate for physiological changes. Further clinical studies will be needed to validate the results. PMID:27613658
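
    A conventional linear bolus calculator of the kind used here as the comparison baseline typically combines a meal dose, a correction dose, and insulin on board; the function below is a generic sketch with illustrative parameter names and values, not the calculator from the study:

```python
def linear_bolus(carbs_g, glucose_mmol_L, target_mmol_L,
                 carb_ratio_g_per_U, isf_mmol_L_per_U, iob_U=0.0):
    """Conventional linear bolus: carbohydrate dose plus correction dose,
    minus insulin still on board; clipped so it is never negative."""
    meal = carbs_g / carb_ratio_g_per_U
    correction = (glucose_mmol_L - target_mmol_L) / isf_mmol_L_per_U
    return max(0.0, meal + correction - iob_U)

# 60 g meal, glucose 9.0 mmol/L vs target 6.0, ratio 10 g/U, ISF 2.0, 0.5 U on board
bolus = linear_bolus(carbs_g=60, glucose_mmol_L=9.0, target_mmol_L=6.0,
                     carb_ratio_g_per_U=10.0, isf_mmol_L_per_U=2.0, iob_U=0.5)
```

    The adaptive method replaces the fixed ratios above with state and insulin-sensitivity estimates that are updated continuously from CGM data.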

  18. An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks

    NASA Astrophysics Data System (ADS)

    Zhao, Peng-yuan; Huang, Xiao-ping

    2018-03-01

    The traditional spectral analysis method, being based on a linear system, causes errors in calculating the fatigue damage of details in liquid cargo tanks, because the relationship between the dynamic stress and the ship acceleration is nonlinear. An improved spectral analysis method for assessing the fatigue damage of liquid cargo tank details is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves of different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of the harmonic components with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated from the PSD of stress. Taking an independent tank in an LNG carrier as an example, the accuracy of the improved spectral analysis method is shown to be much better than that of the traditional spectral analysis method by comparing the calculated damage results with those of the time domain method. The proposed spectral analysis method is more accurate in calculating the fatigue damage of details of ship liquid cargo tanks.
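
    The amplitude-combination step described above can be sketched as follows; phasor addition is used here as one consistent way of "adding the amplitudes" of components at the same frequency, and the component list and frequency resolution are invented for illustration:

```python
import numpy as np

def stress_psd(components, df):
    """Combine sinusoidal stress components (frequency, amplitude, phase)
    by adding phasors at the same frequency, then convert each combined
    amplitude A into a one-sided PSD value A**2 / (2 * df)."""
    phasors = {}
    for f, amp, phase in components:
        phasors[f] = phasors.get(f, 0.0) + amp * np.exp(1j * phase)
    return {f: abs(z) ** 2 / (2.0 * df) for f, z in phasors.items()}

# Two in-phase components at 0.5 Hz add coherently; df = 0.1 Hz bin width
psd = stress_psd([(0.5, 3.0, 0.0), (0.5, 1.0, 0.0), (1.0, 2.0, 0.0)], df=0.1)
```

    The resulting PSD can then be fed to any spectral fatigue damage model, which is where the nonlinearity captured in the component amplitudes enters the damage estimate.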

  19. DFT simulations and vibrational spectra of 2-amino-2-methyl-1,3-propanediol.

    PubMed

    Renuga Devi, T S; Sharmi kumar, J; Ramkumaar, G R

    2014-12-10

    The FTIR and FT-Raman spectra of 2-amino-2-methyl-1,3-propanediol were recorded in the regions 4000-400 cm(-1) and 4000-50 cm(-1), respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the augmented-correlation consistent-polarized valence double zeta (aug-cc-pVDZ) basis set. The most stable conformer was optimized, and the structural and vibrational parameters were determined on this basis. The complete assignments were performed on the basis of the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and Mulliken charges were calculated using both the Hartree-Fock and density functional methods with the aug-cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. (1)H and (13)C NMR chemical shifts of the molecule were calculated using the Gauge-Independent Atomic Orbital (GIAO) method and were compared with experimental results. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Numeric kinetic energy operators for molecules in polyspherical coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadri, Keyvan; Meyer, Hans-Dieter; Lauvergnat, David

    Generalized curvilinear coordinates, such as polyspherical coordinates, are in general better adapted to the resolution of the nuclear Schrödinger equation than rectilinear ones like the normal mode coordinates. However, analytical expressions of the kinetic energy operators (KEOs) for molecular systems in polyspherical coordinates may be prohibitively complicated for large systems. In this paper we propose a method to generate a KEO numerically and bring it to a form practicable for dynamical calculations. To examine the new method we calculated vibrational spectra and eigenenergies for nitrous acid (HONO) and compared them with results obtained with an exact analytical KEO derived previously [F. Richter, P. Rosmus, F. Gatti, and H.-D. Meyer, J. Chem. Phys. 120, 6072 (2004)]. In a second example we calculated the π→π* photoabsorption spectrum and eigenenergies of ethene (C2H4) and compared them with previous work [M. R. Brill, F. Gatti, D. Lauvergnat, and H.-D. Meyer, Chem. Phys. 338, 186 (2007)]. In this ethene study the dimensionality was reduced from 12 to 6 by freezing six internal coordinates. Results for both molecules show that the proposed method for obtaining an approximate KEO is reliable for dynamical calculations. The error in eigenenergies was found to be below 1 cm-1 for most states calculated.

  1. Comparison of Newer IOL Power Calculation Methods for Eyes With Previous Radial Keratotomy

    PubMed Central

    Ma, Jack X.; Tang, Maolong; Wang, Li; Weikert, Mitchell P.; Huang, David; Koch, Douglas D.

    2016-01-01

    Purpose To evaluate the accuracy of the optical coherence tomography–based (OCT formula) and Barrett True K (True K) intraocular lens (IOL) calculation formulas in eyes with previous radial keratotomy (RK). Methods In 95 eyes of 65 patients, using the actual refraction following cataract surgery as target refraction, the predicted IOL power for each method was calculated. The IOL prediction error (PE) was obtained by subtracting the predicted IOL power from the implanted IOL power. The arithmetic IOL PE and median refractive PE were calculated and compared. Results All formulas except the True K produced hyperopic IOL PEs at 1 month, which decreased at ≥4 months (all P < 0.05). For the double-K Holladay 1, OCT formula, True K, and average of these three formulas (Average), the median absolute refractive PEs were, respectively, 0.78 diopters (D), 0.74 D, 0.60 D, and 0.59 D at 1 month; 0.69 D, 0.77 D, 0.77 D, and 0.61 D at 2 to 3 months; and 0.34 D, 0.65 D, 0.69 D, and 0.46 D at ≥4 months. The Average produced significantly smaller refractive PE than did the double-K Holladay 1 at 1 month (P < 0.05). There were no significant differences in refractive PEs among formulas at 4 months. Conclusions The OCT formula and True K were comparable to the double-K Holladay 1 method on the ASCRS (American Society of Cataract and Refractive Surgery) calculator. The Average IOL power on the ASCRS calculator may be considered when selecting the IOL power. Further improvements in the accuracy of IOL power calculation in RK eyes are desirable. PMID:27409468
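
    The prediction-error bookkeeping defined above (PE = implanted minus predicted IOL power, summarized by the arithmetic mean and the median absolute refractive error) amounts to the following; the four power values are hypothetical, not from the study:

```python
import statistics

def iol_prediction_errors(predicted_iol, implanted_iol):
    """IOL prediction error as defined above: implanted minus predicted power."""
    return [imp - pred for pred, imp in zip(predicted_iol, implanted_iol)]

# Hypothetical IOL powers (diopters) for four eyes
predicted = [21.5, 18.0, 22.5, 20.0]
implanted = [22.0, 19.0, 22.0, 21.0]

pe = iol_prediction_errors(predicted, implanted)
mean_pe = statistics.mean(pe)                          # arithmetic PE (sign kept)
median_abs_pe = statistics.median(abs(e) for e in pe)  # median absolute PE
```

    A positive mean PE here corresponds to formulas that under-predict the needed power, i.e. a hyperopic outcome, which is the direction of error the paper reports at 1 month.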

  2. A new smoothing function to introduce long-range electrostatic effects in QM/MM calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Dong; Department of Chemistry, University of Wisconsin, Madison, Wisconsin 53706; Duke, Robert E.

    2015-07-28

    A new method to account for long range electrostatic contributions is proposed and implemented for quantum mechanics/molecular mechanics long range electrostatic correction (QM/MM-LREC) calculations. This method involves the use of the minimum image convention under periodic boundary conditions and a new smoothing function for energies and forces at the cutoff boundary for the Coulomb interactions. Compared to conventional QM/MM calculations without long-range electrostatic corrections, the new method effectively includes effects on the MM environment in the primary image from its replicas in the neighborhood. QM/MM-LREC offers three useful features: the avoidance of calculations in reciprocal space (k-space), the concomitant avoidance of having to reproduce (analytically or approximately) the QM charge density in k-space, and the straightforward availability of analytical Hessians. The new method is tested and compared with results from smooth particle mesh Ewald (PME) for three systems: a box of neat water, a double proton transfer reaction, and the geometry optimization of the critical point structures for the rate-limiting step of the DNA dealkylase AlkB. As with other smoothing or shifting functions, relatively large cutoffs are necessary to achieve accuracy comparable to PME. For the double proton transfer reaction, the use of a 22 Å cutoff with QM/MM-LREC yields a reaction energy profile and geometries of stationary structures close to those of conventional QM/MM with no truncation. Geometry optimization of stationary structures for the hydrogen abstraction step by AlkB shows some differences between QM/MM-LREC and conventional QM/MM. These differences underscore the necessity of including the long-range electrostatic contribution.

  3. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for the RBDO of engineering structures.
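
    The symmetric rank-one (SR1) Hessian update mentioned above has a standard closed form; a minimal sketch with the usual skip safeguard follows (the quadratic test problem is an assumption for illustration, not one of the article's examples):

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-one update: the new B satisfies the secant condition
    B_new @ s = y. The update is skipped when the denominator is too small,
    which is the standard safeguard against numerical blow-up."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# Quadratic model with true Hessian H: one secant pair (s, y = H s)
H = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
s = np.array([1.0, 0.0])
y = H @ s
B1 = sr1_update(B, s, y)   # B1 @ s now reproduces y exactly
```

    Building the Hessian from such rank-one corrections avoids the explicit second-derivative evaluations that make conventional second order reliability analysis expensive inside an RBDO loop.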

  4. Thermal Cyclotron Absorption Coefficients. II. Opacities in the Stokes Formalism

    NASA Astrophysics Data System (ADS)

    Vaeth, H. M.; Chanmugam, G.

    1995-05-01

    We extend the discussion of the calculation of the cyclotron opacities α± of the ordinary and extraordinary modes (Chanmugam et al.) to the opacities κ, q, υ in the Stokes formalism. We derive formulae with which α± can be calculated from κ, q, υ. We are hence able to compare our calculations of the opacities, which are based on the single-particle method, with results obtained with the dielectric tensor method of Tamor. Excellent agreement is achieved. We present extensive tables of the opacities in the Stokes formalism for frequencies up to 25ωc, where ωc is the cyclotron frequency, and temperatures kT = 5, 10, 20, 30, 40, and 50 keV. Furthermore, we derive approximate formulae with which κ, q, υ can be calculated from α±, and hence use the Robinson & Melrose analytic formulae for α± to calculate the opacities in the Stokes formalism. We compare these opacities to accurate numerical opacities and find that the analytic formulae can reproduce the qualitative behavior of the opacities in the regions where the harmonic structure is unimportant.

  5. Semi-empirical calculations for the ranges of fast ions in silicon

    NASA Astrophysics Data System (ADS)

    Belkova, Yu. A.; Teplova, Ya. A.

    2018-04-01

    A semi-empirical method is proposed to calculate ion ranges in the energy region E = 0.025-10 MeV/nucleon. The dependence of the ion ranges on the projectile nuclear charge, mass, and velocity is analysed. The calculated ranges of ions with nuclear charges Z = 2-10 in silicon are compared with SRIM results and experimental data.
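
    Range calculations of this kind ultimately rest on integrating the reciprocal stopping power over energy; a generic continuous-slowing-down sketch follows, in which the linear stopping power S(E) = k(1 + E) is an invented stand-in for the paper's semi-empirical parametrization:

```python
import numpy as np

def csda_range(stopping_power, e_max, n=10001):
    """Continuous-slowing-down range R(E) = integral from 0 to E of
    dE' / S(E'), evaluated with the trapezoidal rule on a uniform grid."""
    e = np.linspace(0.0, e_max, n)
    f = 1.0 / stopping_power(e)
    de = e[1] - e[0]
    return float(de * (f.sum() - 0.5 * (f[0] + f[-1])))

# Illustrative stopping power S(E) = k * (1 + E); analytically R(E) = ln(1 + E) / k
k = 0.5
R = csda_range(lambda e: k * (1.0 + e), e_max=4.0)
```

    With a fitted S(E) covering the stated energy region, the same quadrature reproduces the Z-, mass-, and velocity-dependence of the ranges being compared with SRIM.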

  6. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate the stress at each material point using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation that calculates the stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results of the multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. The multi-scale method is also applied to a two-dimensional problem of jet formation around a copper notch under strong impact.

  7. Structural and vibrational spectroscopic analysis of anticancer drug mitotane using DFT method; a comparative study of its parent structure

    NASA Astrophysics Data System (ADS)

    Mariappan, G.; Sundaraganesan, N.

    2015-04-01

    A comprehensive screening of the density functional theoretical approach to structural analysis is presented in this work. DFT calculations at the B3LYP/6-311++G(d,p) level of theory were found to yield results very comparable to the experimental IR and Raman spectra. The computed geometrical parameters and harmonic vibrational wavenumbers of the fundamentals were found to be in satisfactory agreement with the experimental data and with those of the parent structure. The vibrational assignments of the normal modes were performed on the basis of potential energy distribution (PED) calculations. The comparative results for mitotane and its parent structure, dichlorodiphenyldichloroethane (DDD), show that the intramolecular nonbonding C1-H19⋯Cl18 interaction in the ortho position, calculated at 2.583 Å, together with the position of substitution, redshifts the vibrational wavenumber by 47 cm-1. In addition, natural bond orbital (NBO) analysis has been performed to analyze charge delocalization throughout the molecule. The stability of the molecule arising from hyperconjugative interactions, which leads to its bioactivity, and the charge delocalization have been analyzed. 13C and 1H nuclear magnetic resonance chemical shifts of the molecule have been calculated using the gauge independent atomic orbital (GIAO) method and compared with published results.

  8. Clinical outcomes after estimated versus calculated activity of radioiodine for the treatment of hyperthyroidism: systematic review and meta-analysis.

    PubMed

    de Rooij, A; Vandenbroucke, J P; Smit, J W A; Stokkel, M P M; Dekkers, O M

    2009-11-01

    Despite the long experience with radioiodine for hyperthyroidism, controversy remains regarding the optimal method to determine the activity that is required to achieve long-term euthyroidism. The objective was to compare the effect of estimated versus calculated activity of radioiodine in hyperthyroidism. Design: systematic review and meta-analysis. We searched the databases Medline, EMBASE, Web of Science, and the Cochrane Library for randomized and nonrandomized studies comparing the effect of activity estimation methods with dosimetry for hyperthyroidism. The main outcome measure was the frequency of treatment success, defined as persistent euthyroidism after radioiodine treatment at the end of follow-up, in the estimated-dose and calculated-dosimetry groups. Furthermore, we assessed the cure rates of hyperthyroidism. Three randomized and five nonrandomized studies comparing the effect of estimated versus calculated activity of radioiodine on clinical outcomes for the treatment of hyperthyroidism were included. The weighted mean relative frequency of successful treatment outcome (euthyroidism) was 1.03 (95% confidence interval (CI) 0.91-1.16) for estimated versus calculated activity; the weighted mean relative frequency of cure of hyperthyroidism (eu- or hypothyroidism) was 1.03 (95% CI 0.96-1.10). Subgroup analysis showed a relative frequency of euthyroidism of 1.03 (95% CI 0.84-1.26) for Graves' disease and of 1.05 (95% CI 0.91-1.19) for toxic multinodular goiter. The two main methods used to determine the activity in the treatment of hyperthyroidism with radioiodine, estimated and calculated, resulted in an equally successful treatment outcome. However, the heterogeneity of the included studies is a strong limitation that prevents a definitive conclusion from this meta-analysis.
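    The pooling described above can be sketched as a fixed-effect, inverse-variance meta-analysis of relative frequencies on the log relative-risk scale. The per-study counts below are hypothetical stand-ins, not the data from the included trials.

```python
import math

# Hypothetical per-study counts (not the trial data from the review):
# (successes_estimated, n_estimated, successes_calculated, n_calculated)
studies = [(40, 60, 38, 60), (55, 80, 52, 78), (30, 50, 33, 52)]

def pooled_relative_frequency(studies):
    """Fixed-effect inverse-variance pooling on the log relative-risk scale."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        rr = (a / n1) / (c / n2)
        var = 1/a - 1/n1 + 1/c - 1/n2   # approximate Var[log RR]
        w = 1.0 / var
        num += w * math.log(rr)
        den += w
    log_rr = num / den
    se = math.sqrt(1.0 / den)
    ci = (math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se))
    return math.exp(log_rr), ci

rr, ci = pooled_relative_frequency(studies)
```

    A pooled relative frequency near 1 with a CI spanning 1, as in the review, indicates no detectable difference between the two dosing strategies.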

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region.
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  10. The multistate impact parameter method for molecular charge exchange in nitrogen

    NASA Technical Reports Server (NTRS)

    Ioup, J. W.

    1980-01-01

    The multistate impact parameter method is applied to the calculation of total cross sections for low-energy charge transfer between nitrogen ions and nitrogen molecules. Experimental data showing the relationships between total cross section and ion energy for various pressures and electron ionization energies were obtained. Calculated and experimental cross section values from this work are compared with the experimental and theoretical results of other investigators.

  11. The ϱ-ππ coupling constant in lattice gauge theory

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven; MacKenzie, Paul B.; Thacker, H. B.; Weingarten, Don

    1984-01-01

    We present a method for studying hadronic transitions in lattice gauge theory which requires computer time comparable to that required by recent hadron spectrum calculations. This method is applied to a calculation of the decay ϱ → ππ.

  12. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    Purpose: We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods: We extract from the spreadsheets provided to users of the analysis (http://www.sportsci.org/) a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions: We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable, so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
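    The “approximate Bayesian” reading of the extra probabilities can be illustrated with a small sketch: under a normal approximation centered on the observed effect (effectively a flat prior), the chances of a beneficial, trivial, or harmful true effect are just tail areas around a trivial-effect threshold. The effect size, standard error, and threshold below are hypothetical.

```python
from statistics import NormalDist

def mbi_probabilities(effect, se, threshold):
    """Chances that the true effect is beneficial / trivial / harmful,
    computed as normal tail areas around the observed effect (the
    'approximate Bayesian' interpretation with a flat prior)."""
    z = NormalDist()
    p_beneficial = 1.0 - z.cdf((threshold - effect) / se)
    p_harmful = z.cdf((-threshold - effect) / se)
    p_trivial = 1.0 - p_beneficial - p_harmful
    return p_beneficial, p_trivial, p_harmful

# Hypothetical standardized effect of 0.3 with SE 0.2 and trivial band +/-0.2
pb, pt, ph = mbi_probabilities(effect=0.3, se=0.2, threshold=0.2)
```

    As the review notes, these quantities are equivalent to P values for two nonstandard one-sided tests, not to the coverage of the confidence interval.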

  13. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model accounts for the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using the subjective impairments blockiness, blur and jerkiness, in contrast to the bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by the VQEG.

  14. Comparative investigation of methods to determine the group velocity dispersion of an endlessly single-mode photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Baselt, Tobias; Popp, Tobias; Nelsen, Bryan; Lasagni, Andrés Fabián; Hartmann, Peter

    2017-05-01

    Endlessly single-mode fibers, which enable single-mode guidance over a wide spectral range, are indispensable in the field of fiber technology. A two-dimensional photonic crystal with a silica central core and a micrometer-spaced hexagonal array of air holes is an established method to achieve endless single-mode guidance. There are two possible ways to determine the dispersion: measurement and calculation. We calculate the group velocity dispersion (GVD) based on the measurement of the fiber structure parameters, the hole diameter and the pitch of a presumed homogeneous hexagonal array, and compare the calculation with two methods of measuring the wavelength-dependent time delay. We measure the time delay on a three hundred meter test fiber with a homemade supercontinuum light source, a set of bandpass filters and a fast detector, and compare the results with a white light interferometric setup. To measure the dispersion of optical fibers with high accuracy, a time-frequency-domain setup based on a Mach-Zehnder interferometer is used. The experimental setup allows the determination of the wavelength-dependent differential group delay of light travelling through a thirty centimeter piece of test fiber in the wavelength range from VIS to NIR. The determination of the GVD using different methods enables the evaluation of the individual methods for characterizing the endlessly single-mode fiber.
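    The group-delay route to dispersion can be sketched numerically: the dispersion parameter is the wavelength derivative of the measured group delay per unit length, D = (1/L) dτ/dλ. The delay curve below is a synthetic toy stand-in for measured data, with an assumed zero-dispersion wavelength.

```python
import numpy as np

L_km = 0.3e-3                              # 30 cm test fiber, in km
lam_nm = np.linspace(600.0, 1600.0, 101)   # wavelength grid (nm)
lam0 = 1000.0                              # assumed zero-dispersion wavelength (nm)
# Toy differential group delay (ps): quadratic around lam0, so that the
# dispersion parameter D crosses zero at lam0.
tau_ps = 1.2e-5 * (lam_nm - lam0) ** 2

# Dispersion parameter D = (1/L) * d(tau)/d(lambda), in ps/(nm km)
D = np.gradient(tau_ps, lam_nm) / L_km
```

    With real interferometric delay data the same one-line derivative applies; the main practical difficulty is measuring τ(λ) smoothly enough to differentiate.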

  15. First-principles calculations of Ti and O NMR chemical shift tensors in ferroelectric perovskites

    NASA Astrophysics Data System (ADS)

    Pechkis, Daniel; Walter, Eric; Krakauer, Henry

    2011-03-01

    Complementary chemical shift calculations were carried out with embedded clusters, using quantum chemistry methods, and with periodic boundary conditions, using the GIPAW approach within the Quantum Espresso package. Compared to oxygen chemical shifts, δ̂ (O), cluster calculations for δ̂ (Ti) were found to be more sensitive to size effects, termination, and choice of gaussian-type atomic basis set, while GIPAW results were found to be more sensitive to the pseudopotential construction. The two approaches complemented each other in optimizing these factors. We show that the two approaches yield comparable chemical shifts for suitably converged simulations, and results are compared with available experimental measurements. Supported by ONR.

  16. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  17. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.

  18. Experimental and theoretical NMR and IR studies of the side-chain orientation effects on the backbone conformation of dehydrophenylalanine residue.

    PubMed

    Buczek, Aneta M; Ptak, Tomasz; Kupka, Teobald; Broda, Małgorzata A

    2011-06-01

    Conformation of N-acetyl-(E)-dehydrophenylalanine N', N'-dimethylamide (Ac-(E)-ΔPhe-NMe(2)) in solution, a member of (E)-α, β-dehydroamino acids, was studied by NMR and infrared spectroscopy and the results were compared with those obtained for (Z) isomer. To support the spectroscopic interpretation, the Φ, Ψ potential energy surfaces were calculated at the MP2/6-31 + G(d,p) level of theory in chloroform solution modeled by the self-consistent reaction field-polarizable continuum model method. All minima were fully optimized by the MP2 method and their relative stabilities were analyzed in terms of π-conjugation, internal H-bonds and dipole interactions between carbonyl groups. The obtained NMR spectral features were compared with theoretical nuclear magnetic shieldings, calculated using Gauge Independent Atomic Orbitals (GIAO) approach and rescaled to theoretical chemical shifts using benzene as reference. The calculated indirect nuclear spin-spin coupling constants were compared with available experimental parameters. Copyright © 2011 John Wiley & Sons, Ltd.

  19. Reconstruction method for running shape of rotor blade considering nonlinear stiffness and loads

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Kang, Da; Zhong, Jingjun

    2017-10-01

    The aerodynamic and centrifugal loads acting on a rotating blade deform the blade relative to its shape at rest. Accurate prediction of the running blade configuration plays a significant role in examining and analyzing turbomachinery performance. Considering nonlinear stiffness and loads, a reconstruction method is presented to address the transformation of a rotating blade from the cold to the hot state. When calculating blade deformations, the blade stiffness and load conditions are updated simultaneously as the blade shape varies. The reconstruction procedure is iterated until a converged hot blade shape is obtained. This method has been employed to determine the operating blade shapes of a test rotor blade and the Stage 37 rotor blade. The calculated results are compared with experiments. The results show that the proposed method for blade operating shape prediction is effective. The studies also show that this method can improve the precision of finite element analysis and aerodynamic performance analysis.
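    The cold-to-hot reconstruction described above is, at its core, a fixed-point iteration: recompute the deformation from the current running shape until the shape stops changing. A minimal scalar sketch, with the `deform` callback standing in for the coupled aerodynamic/centrifugal finite-element solve (purely illustrative):

```python
import math

def hot_shape(cold, deform, tol=1e-9, max_iter=100):
    """Fixed-point iteration: hot = cold + deformation(hot), where the
    deformation is recomputed from the current running shape each pass."""
    shape = cold
    for _ in range(max_iter):
        new = cold + deform(shape)
        if abs(new - shape) < tol:
            return new
        shape = new
    raise RuntimeError("shape reconstruction did not converge")

# Toy scalar 'shape' with a load-dependent deformation; the contraction
# (|d(deform)/d(shape)| < 1) is what makes the iteration converge.
hot = hot_shape(1.0, lambda s: 0.1 * math.cos(s))
```

    In the paper's setting the state is a full displacement field and each `deform` evaluation is an expensive nonlinear FE solve, but the convergence logic is the same.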

  20. Attempts at estimating mixed venous carbon dioxide tension by the single-breath method.

    PubMed

    Ohta, H; Takatani, O; Matsuoka, T

    1989-01-01

    The single-breath method was originally proposed by Kim et al. [1] for estimating the blood carbon dioxide tension and cardiac output. Its reliability has not been proven. The present study was undertaken, using dogs, to compare the mixed venous carbon dioxide tension (PVCO2) calculated by the single-breath method with the PVCO2 measured in mixed venous blood, and to evaluate the influence of variations in the exhalation duration and the volume of expired air usually discarded from computations as deadspace. Among the exhalation durations of 15, 30 and 45 s tested, the 15 s duration was found to be too short to obtain an analyzable O2-CO2 curve, but at either 30 or 45 s, the calculated values of PVCO2 were comparable to the measured PVCO2. Good agreement between calculated and measured PVCO2 was obtained when the expired gas with PCO2 less than 22 Torr was treated as deadspace gas.

  1. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. The results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by the empirical formulas. The effect on skyshine dose caused by different accelerator head structures is also discussed in this paper.

  2. The consideration of atmospheric stability within wind farm AEP calculations

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard

    2016-09-01

    The annual energy production of an existing wind farm including thermal stratification is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady-state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake-modelling calculation, where a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.
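    The Jensen (Park) model named above has a simple closed form for the top-hat velocity deficit behind a rotor; a minimal sketch, with made-up rotor and wake-decay parameters:

```python
import numpy as np

def jensen_deficit(v0, ct, r0, k, x):
    """Top-hat Jensen (Park) wake model: wind-speed deficit a distance x
    downstream of a rotor of radius r0, thrust coefficient ct, decay k."""
    return v0 * (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

# Hypothetical inflow and turbine parameters
v0, ct, r0, k = 8.0, 0.8, 40.0, 0.05   # m/s, thrust coeff, rotor radius (m), decay
deficit = jensen_deficit(v0, ct, r0, k, x=400.0)
v_wake = v0 - deficit                  # waked wind speed at 10 rotor radii
```

    The deficit decays with downstream distance, which is why turbine spacing and the decay constant k (itself stability-dependent) dominate AEP wake losses.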

  3. Electron scattering intensities and Patterson functions of Skyrmions

    NASA Astrophysics Data System (ADS)

    Karliner, M.; King, C.; Manton, N. S.

    2016-06-01

    The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.
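    The first method, the Born-approximation form factor of a spherically averaged charge density, reduces to a one-dimensional radial integral, F(q) = 4π ∫ ρ(r) [sin(qr)/(qr)] r² dr. A sketch using a toy Gaussian density (for which the analytic answer is F(q) = exp(-q²/4)), not a Skyrmion profile:

```python
import numpy as np

def form_factor(q, r, rho):
    """Born-approximation form factor of a spherical density rho(r):
    F(q) = 4*pi * integral rho(r) * sin(q r)/(q r) * r^2 dr."""
    qr = np.outer(q, r)
    kernel = np.where(qr == 0.0, 1.0, np.sin(qr) / np.where(qr == 0.0, 1.0, qr))
    f = rho * kernel * r**2
    # trapezoidal rule along r (written out to avoid NumPy version issues)
    integral = np.sum(0.5 * (f[:, 1:] + f[:, :-1]) * np.diff(r), axis=1)
    return 4.0 * np.pi * integral

r = np.linspace(1e-6, 10.0, 4000)       # radial grid
rho = np.exp(-r**2) / np.pi**1.5        # toy Gaussian density, unit total charge
q = np.linspace(0.0, 5.0, 200)
F = form_factor(q, r, rho)              # analytically exp(-q**2 / 4) here
```

    For a realistic Skyrmion charge density the zeros of F(q) are the "first zero" diffraction minima the paper compares against electron scattering data.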

  4. Theoretical calculations of structural, electronic, and elastic properties of CdSe1-xTex: A first principles study

    NASA Astrophysics Data System (ADS)

    M, Shakil; Muhammad, Zafar; Shabbir, Ahmed; Muhammad Raza-ur-rehman, Hashmi; M, A. Choudhary; T, Iqbal

    2016-07-01

    The plane wave pseudo-potential method was used to investigate the structural, electronic, and elastic properties of CdSe1-xTex in the zinc blende phase. It is observed that the electronic properties are improved considerably by using LDA+U as compared to the LDA approach. The calculated lattice constants and bulk moduli are also comparable to the experimental results. The cohesive energies for pure CdSe and CdTe binary and their mixed alloys are calculated. The second-order elastic constants are also calculated by the Lagrangian theory of elasticity. The elastic properties show that the studied material has a ductile nature.

  5. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
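    The CADIS-style consistency between source biasing and weight windows can be illustrated in one dimension: the biased source is proportional to the product of the analog source and the adjoint (importance) function, and the weight-window centers are inversely proportional to the adjoint, so sampled weights match the windows. The adjoint profile below is a hypothetical stand-in for a deterministic solution.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 50)        # depth into a shield (cm, toy problem)
adjoint = np.exp(-0.6 * x)            # hypothetical adjoint (importance) function
q = np.ones_like(x)                   # analog, unbiased source density

# Response estimate R = integral of q * adjoint (trapezoidal rule)
R = np.sum(0.5 * (q[1:] * adjoint[1:] + q[:-1] * adjoint[:-1]) * np.diff(x))

q_biased = q * adjoint / R            # biased source ~ q * phi-dagger, integrates to 1
ww_center = R / adjoint               # consistent weight-window centers ~ 1/phi-dagger
```

    Particles are thus born preferentially where they matter and with weights that already sit inside the local window, avoiding immediate splitting or rouletting at birth.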

  6. Method for determining formation quality factor from well log data and its application to seismic reservoir characterization

    DOEpatents

    Walls, Joel; Taner, M. Turhan; Dvorkin, Jack

    2006-08-08

    A method for seismic characterization of subsurface Earth formations includes determining at least one of compressional velocity and shear velocity, and determining reservoir parameters of subsurface Earth formations, at least including density, from data obtained from a wellbore penetrating the formations. A quality factor for the subsurface formations is calculated from the velocity, the density and the water saturation. A synthetic seismogram is calculated from the calculated quality factor and from the velocity and density. The synthetic seismogram is compared to a seismic survey made in the vicinity of the wellbore. At least one parameter is adjusted. The synthetic seismogram is recalculated using the adjusted parameter, and the adjusting, recalculating and comparing are repeated until a difference between the synthetic seismogram and the seismic survey falls below a selected threshold.

  7. Direct versus indirect many-body methods for calculating vertical electron affinities: applications to F-, OH-, NH2-, CN-, Cl-, SH- and PH2-

    NASA Astrophysics Data System (ADS)

    Ortiz, J. V.

    1987-05-01

    Electron propagator theory (EPT) is applied to calculating vertical ionization energies of the anions F-, Cl-, OH-, SH-, NH2-, PH2- and CN-. Third-order and outer valence approximation (OVA) quasiparticle calculations are compared with ΔMBPT(4) (MBPT, many-body perturbation theory) results using the same basis sets. Agreement with experiment is satisfactory for EPT calculations except for F- and OH-, while the ΔMBPT treatments fail for CN-. EPT(OVA) estimates are reliable when the discrepancy between second- and third-order results is small. Computational aspects are discussed, showing the relative merits of direct and indirect methods for evaluating electron binding energies.

  8. Fast Laplace solver approach to pore-scale permeability

    NASA Astrophysics Data System (ADS)

    Arns, C. H.; Adler, P. M.

    2018-02-01

    We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using a Poiseuille-based approximation to compute permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
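    The core idea, local conductivities from a Euclidean distance map fed to a conductivity-weighted Laplace solve, can be sketched on a small 2D grid. The brute-force distance map, Jacobi iteration, and toy channel geometry below are purely illustrative, not the paper's implementation.

```python
import numpy as np

def distance_map(solid):
    """Euclidean distance from each fluid cell to the nearest solid cell
    (brute force; fine for small illustrative grids)."""
    ys, xs = np.indices(solid.shape)
    sy, sx = np.nonzero(solid)
    d2 = ((ys[..., None] - sy) ** 2 + (xs[..., None] - sx) ** 2).min(axis=-1)
    return np.sqrt(d2) * (~solid)

def permeability(g, n_iter=4000):
    """Darcy-like permeability of a 2D conductivity map g (zero in solid)
    under a unit pressure drop along x: solve div(g grad p) = 0 by Jacobi
    iteration with harmonic-mean face conductivities, then sum inlet flux."""
    ny, nx = g.shape
    p = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))
    gx = 2 * g[:, :-1] * g[:, 1:] / np.maximum(g[:, :-1] + g[:, 1:], 1e-30)
    gy = 2 * g[:-1, :] * g[1:, :] / np.maximum(g[:-1, :] + g[1:, :], 1e-30)
    for _ in range(n_iter):
        num = np.zeros_like(p)
        den = np.zeros_like(p)
        num[:, 1:] += gx * p[:, :-1]; den[:, 1:] += gx
        num[:, :-1] += gx * p[:, 1:]; den[:, :-1] += gx
        num[1:, :] += gy * p[:-1, :]; den[1:, :] += gy
        num[:-1, :] += gy * p[1:, :]; den[:-1, :] += gy
        p_new = num / np.maximum(den, 1e-30)
        p_new[:, 0], p_new[:, -1] = 1.0, 0.0       # Dirichlet inlet/outlet
        p = np.where(g > 0, p_new, p)
    flux = np.sum(gx[:, 0] * (p[:, 0] - p[:, 1]))  # total flow at the inlet
    return flux * (nx - 1) / ny                    # normalize by gradient, area

ny, nx = 16, 32
solid = np.zeros((ny, nx), dtype=bool)
solid[0, :] = solid[-1, :] = True                  # channel walls
blocked = solid.copy()
blocked[6:10, 12:20] = True                        # solid obstacle mid-channel

k_open = permeability(distance_map(solid) ** 2)    # local conductivity ~ d^2
k_blocked = permeability(distance_map(blocked) ** 2)
```

    The Poiseuille-like choice of conductivity proportional to the squared wall distance is what lets a cheap scalar Laplace solve stand in for a full Stokes flow.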

  9. Methods for Estimating Payload/Vehicle Design Loads

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Garba, J. A.; Salama, M. A.; Trubert, M. R.

    1983-01-01

    Several methods are compared with respect to accuracy, design conservatism, and cost. The objective of the survey is to reduce the time and expense of load calculation by selecting an approximate method with sufficient accuracy for the problem at hand. The methods are generally applicable to dynamic load analysis in other aerospace and vehicle/payload systems.

  10. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods.

    PubMed

    Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of the beam end connectors commercially used in SPR BCCs differs between products and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers in order to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector in order to investigate the most influential factors affecting the connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To identify the most appropriate method, the mean stiffness of all the tested connections and the variance in the mean stiffness values according to all three methods were calculated. The initial stiffness method was found to overestimate the connection stiffness compared with the other two methods. The equal area method provided more consistent stiffness values and the lowest variance in the data set as compared to the other two methods.
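    The three stiffness definitions compared above can be computed directly from a moment-rotation curve. The curve below is a hypothetical softening law, and the equal-area variant shown (a line through the origin enclosing the same area as the curve) is one common reading of that method.

```python
import numpy as np

def connection_stiffness(theta, M):
    """Three stiffness estimates from a moment-rotation curve
    (theta in rad, M in kN*m; M assumed monotonically increasing)."""
    # (1) initial stiffness: straight-line fit to the first ~10% of the curve
    n0 = max(2, len(theta) // 10)
    k_init = np.polyfit(theta[:n0], M[:n0], 1)[0]
    # (2) secant slope through the point at half the ultimate moment
    m_half = M[-1] / 2.0
    th_half = np.interp(m_half, M, theta)
    k_half = m_half / th_half
    # (3) equal-area: line through the origin enclosing the same area
    area = np.sum(0.5 * (M[1:] + M[:-1]) * np.diff(theta))
    k_area = 2.0 * area / theta[-1] ** 2
    return k_init, k_half, k_area

# Hypothetical softening moment-rotation curve (rad, kN*m)
theta = np.linspace(0.0, 0.1, 200)
M = 60.0 * (1.0 - np.exp(-40.0 * theta))
k_i, k_h, k_a = connection_stiffness(theta, M)
```

    For a softening curve like this, the initial-stiffness estimate is the largest of the three, consistent with the paper's finding that it overestimates connection stiffness.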

  11. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    PubMed

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted with five tracer tests results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
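
    The flavor of the CSTR-series calculations discussed here can be sketched with first-order (Chick-Watson) inactivation and first-order disinfectant decay through N equal tanks in series. The rate constants, concentrations, and times below are illustrative assumptions, not values from the study:

```python
# Hedged sketch of an N-CSTR inactivation calculation in the spirit of the
# Extended CSTR / Pseg discussion above: Chick-Watson inactivation with
# first-order disinfectant decay through N equal tanks in series.
import math

def n_cstr_log_inactivation(n_tanks, tau_total, c_in, k_decay, k_inact):
    """Return log10 inactivation predicted by N equal CSTRs in series."""
    tau = tau_total / n_tanks
    survival, c = 1.0, c_in
    for _ in range(n_tanks):
        c = c / (1.0 + k_decay * tau)                # steady-state CSTR decay
        survival *= 1.0 / (1.0 + k_inact * c * tau)  # Chick-Watson in a CSTR
    return -math.log10(survival)

# More tanks -> hydraulics closer to plug flow -> more predicted inactivation
# for the same total residence time and dose (illustrative inputs).
li_3 = n_cstr_log_inactivation(3, 10.0, 1.5, 0.05, 1.0)
li_30 = n_cstr_log_inactivation(30, 10.0, 1.5, 0.05, 1.0)
```

    The gap between the 3-tank and 30-tank predictions illustrates why a method must reflect the contactor's actual hydraulic behavior rather than a single T10 value.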

  12. Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements

    NASA Astrophysics Data System (ADS)

    Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura

    2017-10-01

    This paper presents volumetric calculations for different mineral aggregates using different methods of analysis and compares the results. Two licensed software packages were chosen for this comparative study: TopoLT 11.2 and Surfer 13. TopoLT is dedicated to the development of topographic and cadastral plans, providing 3D terrain models, contour lines, calculation of cut and fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software since 1983, is used mainly in fields such as agriculture, construction, geophysics, geotechnical engineering, GIS and water resources. It can generate GRID terrain models, produce density maps using isolines, perform volumetric calculations and create 3D maps, and it reads various file types, including SHP, DXF and XLSX. This paper presents a comparison of volumetric calculations performed with TopoLT by two methods: one in which a single 3D model is used for both the bottom and the top surface, and one in which a separate 3D terrain model is built for the bottom surface and another for the top surface. The two variants are compared with volumetric calculations performed in Surfer 13 by generating a GRID terrain model. The topographic measurements were performed with Leica GPS 1200 Series equipment. Measurements were made using the Romanian position determination system ROMPOS, which ensures accurate positioning in the ETRS reference frame through the National Network of GNSS Permanent Stations. GPS data processing was performed with the Leica Geo Office program. For the volumetric calculations, the GPS points are in the Stereographic 1970 projection system, with altitudes referenced to the Black Sea 1975 datum.
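
    Both workflows ultimately integrate the thickness between two gridded surfaces over the cell area. A minimal sketch of that GRID-style volume computation, with invented grid values:

```python
# Hedged sketch of a cut/fill volume between a top and bottom surface sampled
# on a regular grid, as the TopoLT and Surfer GRID variants described above
# effectively compute. All grid values are made up for illustration.
import numpy as np

def grid_volume(z_top, z_bottom, dx, dy):
    """Volume between two surfaces on a regular grid (cell-area weighting)."""
    thickness = np.asarray(z_top, float) - np.asarray(z_bottom, float)
    return float(thickness.sum() * dx * dy)

# 3 x 3 grid, 2 m spacing; a uniform 1 m-thick slab gives 9 * 4 m2 * 1 m.
z_bot = np.zeros((3, 3))
z_top = np.ones((3, 3))
vol = grid_volume(z_top, z_bot, 2.0, 2.0)  # 36.0 m3
```

    Real packages refine this with triangulated (TIN) or interpolated surfaces, but the comparison between methods reduces to how each builds the two surfaces before this integration step.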

  13. SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, M; Tobias, R; Pankuch, M

    Purpose: The objective was to develop a method for dose distribution calculation of spatially-fractionated-GRID-radiotherapy (SFGRT) in the Eclipse treatment-planning-system (TPS). Methods: Patient treatment-plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the iso-center level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment-planning and dose-calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment-planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence to the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially-fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: A method to create a virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.

  14. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg cm-2 or at any depth of interest. The doses at 7 mg cm-2 in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its main advantage is that its performance is not sensitive to the specific method of calibration.

  15. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  16. Structure and vibrational spectra of melaminium bis(trifluoroacetate) trihydrate: FT-IR, FT-Raman and quantum chemical calculations.

    PubMed

    Sangeetha, V; Govindarajan, M; Kanagathara, N; Marchewka, M K; Gunasekaran, S; Anbalagan, G

    2014-05-05

    Melaminium bis(trifluoroacetate) trihydrate (MTFA), an organic material, has been synthesized, and single crystals of MTFA have been grown by the slow solvent evaporation method at room temperature. X-ray powder diffraction analysis confirms that the MTFA crystal belongs to the monoclinic system with space group P2/c. The molecular geometry, vibrational frequencies and intensities of the vibrational bands have been interpreted with the aid of structure optimization based on the density functional theory (DFT) B3LYP method with 6-311G(d,p) and 6-311++G(d,p) basis sets. The X-ray diffraction data have been compared with the data of the optimized molecular structure. The theoretical results show that the crystal structure can be reproduced by the optimized geometry, and the vibrational frequencies show good agreement with the experimental values. The nuclear magnetic resonance (NMR) chemical shifts of the molecule have been calculated by the gauge-independent atomic orbital (GIAO) method and compared with experimental results. HOMO-LUMO and other related molecular and electronic properties are calculated. The Mulliken and NBO charges have also been calculated and interpreted. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying the measurement uncertainty of results obtained by the k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result—the mass fraction of an element in the measured sample—taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd-subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable, with the possibility of incorporating it into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
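
    The core of such a budget is a GUM-style quadrature combination of component uncertainties. A minimal sketch for a multiplicative measurement model (unit sensitivity coefficients assumed; the component values are invented, not ERON's):

```python
# Hedged sketch of the kind of combined-uncertainty arithmetic a tool like
# ERON automates: for a product/quotient model, relative standard
# uncertainties combine in quadrature. Component values are illustrative.
import math

def combined_relative_uncertainty(rel_uncertainties):
    """Quadrature sum of relative standard uncertainties (sensitivities = 1)."""
    return math.sqrt(sum(u * u for u in rel_uncertainties))

# e.g. counting statistics, k0 data, flux parameters f and alpha, efficiency
components = [0.010, 0.012, 0.020, 0.008, 0.015]
u_c = combined_relative_uncertainty(components)
```

    The combined value always exceeds the largest single component, which is why tools like ERON also report the propagation factors: they show which inputs dominate the budget.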

  18. Comparison of different methods used in integral codes to model coagulation of aerosols

    NASA Astrophysics Data System (ADS)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
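
    The "coarse spectrum" error discussed above arises in the bin-partition step of sectional coagulation schemes: a coagulated particle whose volume falls between grid nodes must be split between the bracketing bins. A sketch of a Hounslow/Jacobson-style splitting that conserves both number and volume, on an invented volume-doubling grid (the ratio-two case the comparison flags as coarse):

```python
# Hedged sketch of number- and volume-conserving bin partitioning on a
# geometric volume grid, in the spirit of the Hounslow and Jacobson schemes
# named above. The grid and test volume are illustrative.
def partition(v, grid):
    """Return {bin_index: number_fraction} conserving number and volume."""
    if v <= grid[0]:
        return {0: 1.0}
    if v >= grid[-1]:
        return {len(grid) - 1: 1.0}
    k = max(i for i, g in enumerate(grid) if g <= v)
    f = (grid[k + 1] - v) / (grid[k + 1] - grid[k])  # fraction left in bin k
    return {k: f, k + 1: 1.0 - f}

# Volume-doubling grid (neighboring-bin volume ratio = 2).
grid = [2.0 ** i for i in range(8)]   # 1, 2, 4, ..., 128
parts = partition(3.0, grid)          # split between the v=2 and v=4 bins
```

    With a ratio-two grid, much of the mass lands between nodes and gets smeared by this split, which is the mechanism behind the distribution-function errors the comparison reports for micrometer particles.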

  19. Efficient and accurate treatment of electron correlations with correlation matrix renormalization theory

    DOE PAGES

    Yao, Y. X.; Liu, J.; Liu, C.; ...

    2015-08-28

    We present an efficient method for calculating the electronic structure and total energy of strongly correlated electron systems. The method extends the traditional Gutzwiller approximation for one-particle operators to the evaluation of the expectation values of two-particle operators in the many-electron Hamiltonian. The method is free of adjustable Coulomb parameters, has no double-counting issues in the calculation of the total energy, and has the correct atomic limit. We demonstrate that the method describes well the bonding and dissociation behaviors of hydrogen and nitrogen clusters, as well as ammonia composed of hydrogen and nitrogen atoms. We also show that the method can satisfactorily tackle challenging problems recently faced by density functional theory, as discussed in the literature. The computational workload of our method is similar to the Hartree-Fock approach, while the results are comparable to high-level quantum chemistry calculations.

  20. Residual stress in glass: indentation crack and fractography approaches

    PubMed Central

    Anunmana, Chuchai; Anusavice, Kenneth J.; Mecholsky, John J.

    2009-01-01

    Objective To test the hypothesis that the indentation crack technique can determine surface residual stresses that are not statistically significantly different from those determined from the analytical procedure using surface cracks, the four-point flexure test, and fracture surface analysis. Methods Soda-lime-silica glass bar specimens (4 mm × 2.3 mm × 28 mm) were prepared and annealed at 650 °C for 30 min before testing. The fracture toughness values of the glass bars were determined from 12 specimens based on induced surface cracks, four-point flexure, and fractographic analysis. To determine the residual stress from the indentation technique, 18 specimens were indented under 19.6 N load using a Vickers microhardness indenter. Crack lengths were measured within 1 min and 24 h after indentation, and the measured crack lengths were compared with the mean crack lengths of annealed specimens. Residual stress was calculated from an equation developed for the indentation technique. All specimens were fractured in a four-point flexure fixture and the residual stress was calculated from the strength and measured crack sizes on the fracture surfaces. Results The results show that there was no significant difference between the residual stresses calculated from the two techniques. However, the differences in mean residual stresses calculated within 1 min compared with those calculated after 24 h were statistically significant (p=0.003). Significance This study compared the indentation technique with the fractographic analysis method for determining the residual stress in the surface of soda-lime silica glass. The indentation method may be useful for estimating residual stress in glass. PMID:19671475

  1. Two innovative pore pressure calculation methods for shallow deep-water formations

    NASA Astrophysics Data System (ADS)

    Deng, Song; Fan, Honghai; Liu, Yuhan; He, Yanfeng; Zhang, Shifeng; Yang, Jing; Fu, Lipei

    2017-11-01

    There are many geological hazards in shallow formations associated with oil and gas exploration and development in deep-water settings. Abnormal pore pressure can lead to shallow water flow and accumulations of gas and gas hydrates, which may affect drilling safety. It is therefore of great importance to accurately predict pore pressure in shallow deep-water formations. Experience over previous decades has shown, however, that existing pressure calculation methods are not appropriate for these shallow formations. Pore pressure change is closely reflected in log data, particularly for mudstone formations. In this paper, pore pressure calculations for shallow formations are highlighted, and two concrete methods using log data are presented. The first method is modified from an E. Philips test in which a linear-exponential overburden pressure model is used. The second is a new pore pressure method based on P-wave velocity that accounts for the effects of shallow gas and shallow water flow. The two methods are then validated using case studies from two wells in the Yingqiong basin. Calculated results are compared with those obtained by the Eaton method, demonstrating that the multi-regression method is more suitable for quick prediction of geological hazards in shallow layers.
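
    The Eaton method used as the comparison baseline has a compact velocity form. A sketch with Eaton's classic exponent of 3; all numeric inputs are illustrative, not the wells' data:

```python
# Hedged sketch of the Eaton P-wave/sonic method the abstract compares
# against: pore pressure from overburden stress, normal (hydrostatic)
# pressure, and the ratio of observed to normal-trend velocity.
def eaton_pore_pressure(s_overburden, p_normal, v_observed, v_normal, n=3.0):
    """Eaton (velocity form): P = S - (S - Pn) * (Vobs / Vn)**n."""
    return s_overburden - (s_overburden - p_normal) * (v_observed / v_normal) ** n

# Undercompacted shale: observed velocity below the normal compaction trend
# implies pore pressure above hydrostatic (illustrative MPa values).
p = eaton_pore_pressure(s_overburden=40.0, p_normal=20.0,
                        v_observed=2200.0, v_normal=2500.0)
```

    On the normal compaction trend (Vobs = Vn) the formula returns hydrostatic pressure; the shallow-formation problem is precisely that such normal trends are poorly defined near the mudline, motivating the paper's alternative methods.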

  2. Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes

    NASA Astrophysics Data System (ADS)

    Goncharov, K. A.; Denisov, I. A.

    2018-03-01

    Design issues of bridge-type crane trolleys related to ensuring uniform load distribution between the running wheels are considered. The shortcomings of existing methods for calculating support pressures are described. The results of the analytical calculation of the support-wheel pressures are compared with the results of a numerical solution of this problem for various layouts of trolley supporting frames. Conclusions are drawn on the applicability of the various methods for calculating vertical pressures, depending on the type of metal structure used in the trolley.

  3. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  4. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
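
    The noise-filtering idea here, regression on orthogonal polynomials before differentiation, can be sketched with a Legendre-series fit. The second derivative matters because the electron energy distribution is proportional to d²I/dV² (the Druyvesteyn relation); the data, noise level, and polynomial degree below are illustrative assumptions:

```python
# Hedged sketch: regress a noisy probe characteristic on orthogonal
# (Legendre) polynomials, then differentiate the smooth fit twice, as an
# EEDF extraction requires. Synthetic data; degree chosen for illustration.
import numpy as np
from numpy.polynomial import Legendre

rng = np.random.default_rng(0)
v = np.linspace(-5.0, 0.0, 200)              # probe bias sweep (V)
i_true = 0.5 * v**2 + 2.0 * v + 1.0          # smooth "current" (arb. units)
i_noisy = i_true + rng.normal(0.0, 0.05, v.size)

fit = Legendre.fit(v, i_noisy, deg=6)        # orthogonal-polynomial regression
d2 = fit.deriv(2)                            # second derivative of the fit

# For the quadratic above, d2I/dV2 should be ~1.0 across the sweep.
mid = float(d2(-2.5))
```

    Differentiating the fitted series rather than the raw samples is what keeps the twice-differentiated result usable; naive finite differences would amplify the noise by orders of magnitude.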

  5. Flow processes in overexpanded chemical rocket nozzles. Part 2: Side loads due to asymmetric separation

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1984-01-01

    Methods for measuring the lateral forces, occurring as a result of asymmetric nozzle flow separation, are discussed. The effect of some parameters on the side load is explained. A new method was developed for calculation of the side load. The values calculated are compared with side load data of the J-2 engine. Results are used for predicting side loads of the space shuttle main engine.

  6. Estimations of global warming potentials from computational chemistry calculations for CH(2)F(2) and other fluorinated methyl species verified by comparison to experiment.

    PubMed

    Blowers, Paul; Hollingshead, Kyle

    2009-05-21

    In this work, the global warming potential (GWP) of methylene fluoride (CH(2)F(2)), or HFC-32, is estimated through computational chemistry methods. We find our computational chemistry approach reproduces well all phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for both partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to superior results to all other previous heat of reaction estimates and most barrier height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered rotor treatment where appropriate led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials compared to experiment. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH(4)); fluoromethane (CH(3)F), or HFC-41; and fluoroform (CHF(3)), or HFC-23], where estimations also compare favorably to experimental values.
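
    Once the radiative efficiency and atmospheric lifetime are in hand, the closing GWP arithmetic is a short integral. A sketch for a species removed with a single e-folding lifetime; all numbers are made-up placeholders, not values from the paper:

```python
# Hedged sketch of the final GWP step: for single-exponential removal with
# lifetime tau, the absolute GWP over horizon H is a * tau * (1 - exp(-H/tau)),
# normalized by the same integral for CO2 (supplied here as an input).
import math

def agwp(a, tau, horizon):
    """Absolute GWP (integrated radiative forcing per kg) over `horizon`."""
    return a * tau * (1.0 - math.exp(-horizon / tau))

def gwp(a, tau, horizon, agwp_co2):
    return agwp(a, tau, horizon) / agwp_co2

# A short atmospheric lifetime caps the integral near a*tau at long horizons.
short = agwp(a=1.0e-13, tau=5.0, horizon=100.0)
g = gwp(a=1.0e-13, tau=5.0, horizon=100.0, agwp_co2=9.0e-14)
```

    This is why the computed lifetime at 277 K matters so much in the paper: for short-lived species like the fluorinated methanes, the lifetime largely controls where the forcing integral saturates.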

  7. DFT analysis on the molecular structure, vibrational and electronic spectra of 2-(cyclohexylamino)ethanesulfonic acid

    NASA Astrophysics Data System (ADS)

    Renuga Devi, T. S.; Sharmi kumar, J.; Ramkumaar, G. R.

    2015-02-01

    The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm-1 and 4000-50 cm-1, respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation-consistent polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized, and the structural and vibrational parameters were determined on this basis. The complete assignments were performed based on the potential energy distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated using both the Hartree-Fock and density functional methods with the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. 1H and 13C NMR chemical shifts of the molecule were calculated using the gauge-including atomic orbital (GIAO) method and compared with experimental results. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The first-order hyperpolarizability (β) and molecular electrostatic potential (MEP) of the molecule were computed using DFT calculations. Electron-density-based local reactivity descriptors such as Fukui functions were calculated to explain the chemically reactive sites in the molecule.

  8. Molecular structure, electronic properties, NLO, NBO analysis and spectroscopic characterization of Gabapentin with experimental (FT-IR and FT-Raman) techniques and quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Sinha, Leena; Karabacak, Mehmet; Narayan, V.; Cinar, Mehmet; Prasad, Onkar

    2013-05-01

    Gabapentin (GP), structurally related to the neurotransmitter GABA (gamma-aminobutyric acid), mimics the activity of GABA and is also widely used in neurology for the treatment of peripheral neuropathic pain. It exists in zwitterionic form in solid state. The present communication deals with the quantum chemical calculations of energies, geometrical structure and vibrational wavenumbers of GP using density functional (DFT/B3LYP) method with 6-311++G(d,p) basis set. In view of the fact that amino acids exist as zwitterions as well as in the neutral form depending on the environment (solvent, pH, etc.), molecular properties of both the zwitterionic and neutral form of GP have been analyzed. The fundamental vibrational wavenumbers as well as their intensities were calculated and compared with experimental FT-IR and FT-Raman spectra. The fundamental assignments were done on the basis of the total energy distribution (TED) of the vibrational modes, calculated with scaled quantum mechanical (SQM) method. The electric dipole moment, polarizability and the first hyperpolarizability values of the GP have been calculated at the same level of theory and basis set. The nonlinear optical (NLO) behavior of zwitterionic and neutral form has been compared. Stability of the molecule arising from hyper-conjugative interactions and charge delocalization has been analyzed using natural bond orbital analysis. Ultraviolet-visible (UV-Vis) spectrum of the title molecule has also been calculated using TD-DFT method. The thermodynamic properties of both the zwitterionic and neutral form of GP at different temperatures have been calculated.

  9. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Nicholson, S.; Moore, J.

    1986-01-01

    A method was developed which calculates two-dimensional, transonic, viscous flow in ducts. The finite volume, time-marching formulation is used to obtain steady flow solutions of the Reynolds-averaged form of the Navier-Stokes equations. The entire calculation is performed in the physical domain. The method is currently limited to the calculation of attached flows. The features of the current method can be summarized as follows. Control volumes are chosen so that smoothing of flow properties, typically required for stability, is not needed. Different time steps are used in the different governing equations to improve the convergence speed of the viscous calculations. A new pressure interpolation scheme is introduced which improves the shock capturing ability of the method. A multi-volume method for pressure changes in the boundary layer allows calculations which use very long and thin control volumes. A special discretization technique is also used to stabilize these calculations. A special formulation of the energy equation is used to provide improved transient behavior of solutions which use the full energy equation. The method is then compared with a wide variety of test cases. The freestream Mach numbers range from 0.075 to 2.8 in the calculations. Transonic viscous flow in a converging-diverging nozzle is calculated with the method; the Mach number upstream of the shock is approximately 1.25. The agreement between the calculated and measured shock strength and total pressure losses is good. Essentially incompressible turbulent boundary layer flow in an adverse pressure gradient is calculated, and the computed distributions of mean velocity and shear stress are in good agreement with the measurements. At the other end of the Mach number range, a flat plate turbulent boundary layer with a freestream Mach number of 2.8 is calculated using the full energy equation; the computed total temperature distribution and recovery factor agree well with the measurements when a variable Prandtl number is used through the boundary layer.

  10. Calculation of light delay for coupled microrings by FDTD technique and Padé approximation.

    PubMed

    Huang, Yong-Zhen; Yang, Yue-De

    2009-11-01

    The Padé approximation with Baker's algorithm is compared with the least-squares Prony method and the generalized pencil-of-functions (GPOF) method for calculating mode frequencies and mode Q factors for coupled optical microdisks by the FDTD technique. Comparisons of intensity spectra and the corresponding mode frequencies and Q factors show that the Padé approximation can yield more stable results than the Prony and GPOF methods, especially for the intensity spectrum. The results of the Prony and GPOF methods are greatly influenced by the selected number of resonant modes, which needs to be optimized during data processing, in addition to the length of the time-response signal. Furthermore, the Padé approximation is applied to calculate light delay for embedded microring resonators from complex transmission spectra obtained by the Padé approximation from an FDTD output. The Prony and GPOF methods cannot be applied to calculate the transmission spectra, because the transmission signal obtained by the FDTD simulation cannot be expressed as a sum of damped complex exponentials.
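
    The Prony-style processing compared here fits the FDTD time signal as a sum of damped complex exponentials whose roots give mode frequency and Q. A linear-prediction sketch on a synthetic single resonance (signal, model order, and units are illustrative assumptions, not the paper's setup):

```python
# Hedged sketch of Prony-style mode extraction: fit x[n] as a sum of damped
# complex exponentials via linear prediction; the prediction-polynomial roots
# give each mode's frequency and Q factor. Synthetic test signal.
import numpy as np

def prony_modes(x, order, dt):
    """Fit x to `order` damped exponentials; return [(freq, Q), ...]."""
    n = len(x)
    # Linear prediction: x[m] = sum_k a[k] * x[m-1-k]; least-squares for a.
    A = np.column_stack([x[order - k - 1 : n - k - 1] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    roots = np.roots(np.concatenate(([1.0], -a)))
    modes = []
    for z in roots:
        omega = np.angle(z) / dt          # angular frequency
        decay = -np.log(np.abs(z)) / dt   # amplitude decay rate
        if omega > 0 and decay > 0:
            modes.append((omega / (2 * np.pi), omega / (2 * decay)))  # (f, Q)
    return modes

# Synthetic single resonance with known frequency and Q (arbitrary units).
dt, f0, q0 = 0.01, 5.0, 50.0
t = dt * np.arange(400)
omega0 = 2 * np.pi * f0
x = np.exp(-omega0 * t / (2 * q0)) * np.cos(omega0 * t)
(f_est, q_est), = prony_modes(x, order=2, dt=dt)
```

    The abstract's caveat shows up directly in this formulation: `order` (the assumed number of modes) must be chosen well, and a signal that is not a sum of damped exponentials, such as a transmission trace, breaks the model entirely.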

  11. Comparison of different phase retrieval algorithms

    NASA Astrophysics Data System (ADS)

    Kaufmann, Rolf; Plamondon, Mathieu; Hofmann, Jürgen; Neels, Antonia

    2017-09-01

    X-ray phase contrast imaging is attracting more and more interest. Since the phase cannot be measured directly, an indirect method using, e.g., a grating interferometer has to be applied. This contribution compares three different approaches to calculating the phase from Talbot-Lau interferometer measurements using a phase-stepping approach. In addition to the usually applied Fourier-coefficient method, a linear fitting technique and a Taylor-series expansion method are applied and compared.
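
    The baseline Fourier-coefficient method has a closed form: with N equidistant phase steps the stepping curve is I_k = a + b·cos(2πk/N + φ), and φ falls out of the first Fourier coefficient. A sketch with an invented stepping curve:

```python
# Hedged sketch of Fourier-coefficient phase retrieval from a phase-stepping
# curve, the baseline method named in the abstract. Step values are invented.
import math

def phase_from_steps(intensities):
    """Recover phi from I_k = a + b*cos(2*pi*k/N + phi), N equidistant steps."""
    n = len(intensities)
    s = sum(i_k * math.sin(2 * math.pi * k / n) for k, i_k in enumerate(intensities))
    c = sum(i_k * math.cos(2 * math.pi * k / n) for k, i_k in enumerate(intensities))
    return math.atan2(-s, c)

# Round-trip check: synthesize an 8-step curve with a known phase.
phi_true = 0.7
steps = [10.0 + 3.0 * math.cos(2 * math.pi * k / 8 + phi_true) for k in range(8)]
phi_est = phase_from_steps(steps)
```

    The linear-fitting and Taylor-expansion alternatives compared in the contribution differ mainly in how they handle non-ideal stepping (noise, drift, non-equidistant steps), not in this underlying model.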

  12. Retrospective Methods Analysis of Semiautomated Intracerebral Hemorrhage Volume Quantification From a Selection of the STICH II Cohort (Early Surgery Versus Initial Conservative Treatment in Patients With Spontaneous Supratentorial Lobar Intracerebral Haematomas).

    PubMed

    Haley, Mark D; Gregson, Barbara A; Mould, W Andrew; Hanley, Daniel F; Mendelow, Alexander David

    2018-02-01

    The ABC/2 method for calculating intracerebral hemorrhage (ICH) volume has been well validated. However, the formula, derived from the volume of an ellipse, assumes the shape of ICH is elliptical. We sought to compare the agreement of the ABC/2 formula with other methods through retrospective analysis of a selection of the STICH II cohort (Early Surgery Versus Initial Conservative Treatment in Patients With Spontaneous Supratentorial Lobar Intracerebral Haematomas). From 390 patients, 739 scans were selected from the STICH II image archive based on the availability of a CT scan compatible with OsiriX DICOM viewer. ICH volumes were calculated by the reference standard semiautomatic segmentation in OsiriX software and compared with calculated arithmetic methods (ABC/2, ABC/2.4, ABC/3, and 2/3SC) volumes. Volumes were compared by difference plots for specific groups: randomization ICH (n=374), 3- to 7-day postsurgical ICH (n=206), antithrombotic-associated ICH (n=79), irregular-shape ICH (n=703) and irregular-density ICH (n=650). Density and shape were measured by the Barras ordinal shape and density groups (1-5). The ABC/2.4 method had the closest agreement to the semiautomatic segmentation volume in all groups, except for the 3- to 7-day postsurgical ICH group where the ABC/3 method was superior. Although the ABC/2 formula for calculating elliptical ICH is well validated, it must be used with caution in ICH scans where the elliptical shape of ICH is a false assumption. We validated the adjustment of the ABC/2.4 method in randomization, antithrombotic-associated, heterogeneous-density, and irregular-shape ICH. URL: http://www.isrctn.com/ISRCTN22153967. Unique identifier: ISRCTN22153967. © 2018 American Heart Association, Inc.
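
    The arithmetic estimators compared above are one-line formulas. A small Python sketch follows (divisor values taken from the abstract; the 2/3SC form, largest slice area times craniocaudal extent, follows the commonly published definition and is an assumption here):

```python
def abc_volume(a, b, c, divisor=2.0):
    """Ellipsoid-derived ICH volume estimate (mL), with diameters A, B, C in cm.

    divisor = 2.0, 2.4, or 3.0 gives the ABC/2, ABC/2.4, and ABC/3 variants.
    """
    return a * b * c / divisor

def two_thirds_sc(s, c):
    """2/3SC estimate: S = largest cross-sectional area (cm^2), C = craniocaudal extent (cm)."""
    return 2.0 / 3.0 * s * c
```

    The shared ellipsoid assumption is why all ABC variants degrade on irregularly shaped hemorrhages, as the study's shape-stratified comparison shows.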

  13. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    Fuzzy connectedness (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes prohibitive. Therefore, Ying et al. proposed a parallel CUDA version of FC (CUDA-kFOE) to accelerate the original algorithm. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step for the edge points, greatly enhancing calculation accuracy. The improved method proceeds iteratively: in the first iteration, the affinity computation strategy is changed and a lookup table is employed to reduce memory use; in the second iteration, the voxels miscalculated because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds, and an NVIDIA Tesla C2075 was used to evaluate the improved method on these data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, demonstrating that the method corrects the edge-point calculation error of the original CUDA-kFOE at a comparable time cost. In the future, we will focus on automatic acquisition methods and automatic processing.

  14. Integrating concepts and skills: Slope and kinematics graphs

    NASA Astrophysics Data System (ADS)

    Tonelli, Edward P., Jr.

    The concept of force is a foundational idea in physics. To predict the results of applying forces to objects, a student must be able to interpret data representing changes in distance, time, speed, and acceleration. Comprehension of kinematics concepts requires students to interpret motion graphs, where rates of change are represented as slopes of line segments. Studies have shown that a majority of students who show proficiency with mathematical concepts fail to accurately interpret motion graphs. The primary aim of this study was to examine how students apply their knowledge of slope when interpreting kinematics graphs. To answer the research questions, a mixed-methods research design, which included a survey and interviews, was adopted. Ninety-eight (N=98) high school students completed surveys, which were quantitatively analyzed along with qualitative information collected from interviews of students (N=15) and teachers (N=2). The study showed that students who recalled methods for calculating slopes and speeds calculated slopes accurately, but calculated speeds inaccurately. When comparing slopes and speeds, most students resorted to calculating instead of visual inspection, recalling and applying memorized rules. Students who calculated slopes and speeds inaccurately failed to recall methods of calculating them, but when comparing speeds, these students connected the concepts of distance and time to the line segments and the rates of change they represented. These findings will likely help mathematics and science educators better assist their students in applying their knowledge of slope to kinematics concepts.

  15. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
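
    The influence-coefficient idea (percent change in computed net thrust per one-percent change in an input) and a root-sum-square accuracy estimate built from those coefficients can be sketched generically in Python; the thrust function and the uncertainty numbers in the test are hypothetical placeholders, not F404 data:

```python
import math

def influence_coefficients(thrust_fn, params, step=0.01):
    """Percent change in computed thrust per 1-percent change in each input."""
    base = thrust_fn(params)
    coeffs = {}
    for name in params:
        perturbed = dict(params)
        perturbed[name] *= 1.0 + step          # apply a 1-percent perturbation
        coeffs[name] = (thrust_fn(perturbed) - base) / base / step
    return coeffs

def rss_accuracy(coeffs, uncertainties):
    """Overall accuracy estimate: root-sum-square of coefficient * uncertainty."""
    return math.sqrt(sum((coeffs[k] * uncertainties[k]) ** 2 for k in coeffs))
```

    The same finite-difference loop works for either calculation method, which is how the two methods' sensitivities can be compared parameter by parameter.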

  16. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within linear response theory. This parallels the conventional approach of fitting atomic charges to electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations of the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which is itself a linear response model. The orientation of the uniform external electric fields is integrated over all directions. This integration over orientations, together with the QM linear response calculations, makes the fitting results independent of the orientations and magnitudes of the applied uniform external electric fields. Another advantage of our method is that the QM calculation is needed only once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show accuracy comparable to those from fitting directly to experimental or theoretical molecular polarizabilities. Since the ESP is fitted directly, atomic polarizabilities obtained from our method are expected to better reproduce electrostatic interactions. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.

  17. A simple method of calculating Stirling engines for engine design optimization

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1978-01-01

    A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen-type regenerator. The equations presented describe power generation and consumption as well as heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design, and it itemizes the power and heat losses for intelligent engine optimization. The results of an engine analysis of the GPU-3 Stirling engine are compared with more complicated engine analyses and with engine measurements.

  18. Electric dipole transitions for four-times ionized cerium (Ce V)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usta, Betül Karaçoban, E-mail: bkaracoban@sakarya.edu.tr; Akgün, Elif, E-mail: elif.akgun@ogr.sakarya.edu.tr; Alparslan, Büşra, E-mail: busra.alparslan1@ogr.sakarya.edu.tr

    2016-03-25

    We have calculated the transition parameters, such as wavelengths, oscillator strengths, and transition probabilities (or rates), for the electric dipole (E1) transitions in four-times ionized cerium (Ce V, Z = 58) by using the multiconfiguration Hartree-Fock method within the framework of Breit-Pauli relativistic corrections (MCHF+BP) and the relativistic Hartree-Fock (HFR) method. The obtained results have been compared with other works available in the literature. These calculations for Ce V are also discussed in view of the MCHF+BP and HFR methods.

  19. An Eulerian/Lagrangian method for computing blade/vortex impingement

    NASA Technical Reports Server (NTRS)

    Steinhoff, John; Senge, Heinrich; Yonghu, Wenren

    1991-01-01

    A combined Eulerian/Lagrangian approach to calculating helicopter rotor flows with concentrated vortices is described. The method computes a general evolving vorticity distribution without any significant numerical diffusion. Concentrated vortices can be accurately propagated over long distances on relatively coarse grids with cores only several grid cells wide. The method is demonstrated for a blade/vortex impingement case in 2D and 3D where a vortex is cut by a rotor blade, and the results are compared to previous 2D calculations involving a fifth-order Navier-Stokes solver on a finer grid.

  20. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If no reliable RI data are available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods, and the results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all three methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60 and very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation is the preferable way to calculate RIs, provided it satisfies a normality test. If not, bootstrapping is always available and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
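
    The three approaches compared above can be sketched in Python/NumPy. This is a generic illustration, not the authors' procedure: the 1.96 z-value assumes a central 95% interval, and the Box-Cox λ is fit by a simple grid-search maximum likelihood.

```python
import numpy as np

def ri_parametric(x):
    """Central 95% reference interval assuming normality."""
    m, s = x.mean(), x.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def ri_bootstrap(x, n_boot=2000, seed=0):
    """Nonparametric percentile RI, smoothed by bootstrap resampling."""
    rng = np.random.default_rng(seed)
    los, his = [], []
    for _ in range(n_boot):
        r = rng.choice(x, size=x.size, replace=True)
        lo, hi = np.percentile(r, [2.5, 97.5])
        los.append(lo)
        his.append(hi)
    return np.mean(los), np.mean(his)

def ri_boxcox_parametric(x, lams=np.linspace(-2, 2, 81)):
    """Parametric RI after a Box-Cox transform; lambda by grid-search MLE."""
    logx = np.log(x)
    best_ll, best_lam, best_y = -np.inf, None, None
    for lam in lams:
        y = logx if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam
        # Profile log-likelihood of the Box-Cox model
        ll = -0.5 * x.size * np.log(y.var()) + (lam - 1.0) * logx.sum()
        if ll > best_ll:
            best_ll, best_lam, best_y = ll, lam, y
    lo, hi = ri_parametric(best_y)
    inv = (lambda y: np.exp(y)) if abs(best_lam) < 1e-12 else \
          (lambda y: (best_lam * y + 1.0) ** (1.0 / best_lam))
    return inv(lo), inv(hi)
```

    On skewed data the raw parametric interval can even go negative at the lower end, which is the failure mode behind the recommendation against it.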

  1. Effects of Earth's curvature in full-wave modeling of VLF propagation

    NASA Astrophysics Data System (ADS)

    Qiu, L.; Lehtinen, N. G.; Inan, U. S.; Stanford VLF Group

    2011-12-01

    We show how to include curvature in the full-wave finite element approach for calculating ELF/VLF wave propagation in the horizontally stratified Earth-ionosphere waveguide. A general curvilinear stratified system is considered, and the numerical solutions of the full-wave method in the curvilinear system are compared with the analytic solutions for cylindrical and spherical waveguides filled with an isotropic medium. We calculate the attenuation and height gain for modes in the Earth-ionosphere waveguide, taking into account the anisotropy of the ionospheric plasma, for different assumptions about the Earth's curvature, and quantify the corrections due to the curvature. The results are compared with those of previous models, such as LWPC, as well as with ground and satellite observations, and show improved accuracy compared with the full-wave method without the curvature effect.

  2. Inclusive breakup calculations in angular momentum basis: Application to 7Li+58Ni

    NASA Astrophysics Data System (ADS)

    Lei, Jin

    2018-03-01

    The angular momentum basis method is introduced to solve the inclusive breakup problem within the model proposed by Ichimura, Austern, and Vincent [Phys. Rev. C 32, 431 (1985), 10.1103/PhysRevC.32.431]. This method is based on the geometric transformation between different Jacobi coordinates, in which the particle spins can be included in a natural and efficient way. To test the validity of this partial wave expansion method, a benchmark calculation is performed and compared with the one given by Lei and Moro [Phys. Rev. C 92, 044616 (2015), 10.1103/PhysRevC.92.044616]. In addition, using the distorted-wave Born approximation version of the IAV model, applications to 7Li+58Ni reactions at energies around the Coulomb barrier are presented and compared with available data.

  3. Excitons in Potassium Bromide: A Study using Embedded Time-dependent Density Functional Theory and Equation-of-Motion Coupled Cluster Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govind, Niranjan; Sushko, Petr V.; Hess, Wayne P.

    2009-03-05

    We present a study of the electronic excitations in insulating materials using an embedded-cluster method. The excited states of the embedded cluster are studied systematically using time-dependent density functional theory (TDDFT) and high-level equation-of-motion coupled cluster (EOMCC) methods. In particular, we have used EOMCC models with singles and doubles (EOMCCSD) and two approaches which account for the effect of triply excited configurations in non-iterative and iterative fashions. We present calculations of the lowest surface excitations of the well-studied potassium bromide (KBr) system and compare our results with experiment. The bulk-surface exciton shift is also calculated at the TDDFT level and compared with experiment.

  4. Tight-binding study of stacking fault energies and the Rice criterion of ductility in the fcc metals

    NASA Astrophysics Data System (ADS)

    Mehl, Michael J.; Papaconstantopoulos, Dimitrios A.; Kioussis, Nicholas; Herbranson, M.

    2000-02-01

    We have used the Naval Research Laboratory (NRL) tight-binding (TB) method to calculate the generalized stacking fault energy and the Rice ductility criterion in the fcc metals Al, Cu, Rh, Pd, Ag, Ir, Pt, Au, and Pb. The method works well for all classes of metals, i.e., simple metals, noble metals, and transition metals. We compared our results with full potential linear-muffin-tin orbital and embedded atom method (EAM) calculations, as well as experiment, and found good agreement. This is impressive, since the NRL-TB approach only fits to first-principles full-potential linearized augmented plane-wave equations of state and band structures for cubic systems. Comparable accuracy with EAM potentials can be achieved only by fitting to the stacking fault energy.

  5. Calculation of Organ Doses for a Large Number of Patients Undergoing CT Examinations.

    PubMed

    Bahadori, Amir; Miglioretti, Diana; Kruger, Randell; Flynn, Michael; Weinmann, Sheila; Smith-Bindman, Rebecca; Lee, Choonsik

    2015-10-01

    The objective of our study was to develop an automated calculation method to provide organ dose assessment for a large cohort of pediatric and adult patients undergoing CT examinations. We adopted two dose libraries that were previously published: the volume CT dose index-normalized organ dose library and the tube current-exposure time product (100 mAs)-normalized weighted CT dose index library. We developed an algorithm to calculate organ doses using the two dose libraries and the CT parameters available from DICOM data. We calculated organ doses for pediatric (n = 2499) and adult (n = 2043) CT examinations randomly selected from four health care systems in the United States and compared the adult organ doses with the values calculated from the ImPACT calculator. The median brain dose was 20 mGy (pediatric) and 24 mGy (adult), and the brain dose was greater than 40 mGy for 11% (pediatric) and 18% (adult) of the head CT studies. Both the National Cancer Institute (NCI) and ImPACT methods provided similar organ doses (median discrepancy < 20%) for all organs except the organs located close to the scanning boundaries. The visual comparisons of scanning coverage and phantom anatomies revealed that the NCI method, which is based on realistic computational phantoms, provides more accurate organ doses than the ImPACT method. The automated organ dose calculation method developed in this study reduces the time needed to calculate doses for a large number of patients. We have successfully used this method for a variety of CT-related studies including retrospective epidemiologic studies and CT dose trend analysis studies.
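
    Per scan, the two-library lookup described above reduces to a few multiplications. The sketch below illustrates the chain with hypothetical values: the coefficient names and numbers, and the simple CTDIw-to-CTDIvol pitch division, are assumptions for illustration, not entries from the NCI libraries.

```python
# Hypothetical library entries for one scanner model and one phantom size.
CTDIW_PER_100MAS = 8.5                       # weighted CTDI (mGy) per 100 mAs
ORGAN_COEFF = {"brain": 0.9, "lens": 1.1}    # organ dose per unit CTDIvol (mGy/mGy)

def organ_dose(organ, mAs, pitch):
    """Organ dose (mGy) from DICOM exposure parameters via the two libraries."""
    ctdi_w = CTDIW_PER_100MAS * mAs / 100.0   # scale library value by tube output
    ctdi_vol = ctdi_w / pitch                 # volume CTDI for a helical scan
    return ctdi_vol * ORGAN_COEFF[organ]
```

    Because each scan is a table lookup plus arithmetic, the method scales to thousands of examinations, which is the point of the automation described above.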

  6. Automated Speech Rate Measurement in Dysarthria

    ERIC Educational Resources Information Center

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  7. Calculation of unsteady airfoil loads with and without flap deflection at -90 degrees incidence

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.

    1991-01-01

    A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This unique method provides for the direct solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid method. The vorticity is determined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and for the conservation of mass at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the conservation of mass to machine zero at each time-step. The results of the present analysis and experimental results obtained for a XV-15 airfoil are compared. The comparisons indicate that the calculated drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results. Comparisons of the numerical results of the present method for several airfoils demonstrate the significant influence of airfoil curvature and flap deflection on the predicted download.

  8. Calculating mercury loading to the tidal Hudson River, New York, using rating curve and surrogate methodologies

    USGS Publications Warehouse

    Wall, G.R.; Ingleston, H.H.; Litten, S.

    2005-01-01

    Total mercury (THg) load in rivers is often calculated from a site-specific "rating-curve" based on the relation between THg concentration and river discharge along with a continuous record of river discharge. However, there is no physical explanation as to why river discharge should consistently predict THg or any other suspended analyte. THg loads calculated by the rating-curve method were compared with those calculated by a "continuous surrogate concentration" (CSC) method in which a relation between THg concentration and suspended-sediment concentration (SSC) is constructed; THg loads then can be calculated from the continuous record of SSC and river discharge. The rating-curve and CSC methods, respectively, indicated annual THg loads of 46.4 and 75.1 kg for the Mohawk River, and 52.9 and 33.1 kg for the upper Hudson River. Differences between the results of the two methods are attributed to the inability of the rating-curve method to adequately characterize atypical high flows such as an ice-dam release, or to account for hysteresis, which typically degrades the strength of the relation between stream discharge and concentration of material in suspension. © Springer 2005.
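
    Both load estimates reduce to "predict a concentration, multiply by discharge, integrate." A minimal Python/NumPy sketch of the rating-curve variant follows; the log-log power-law form and the unit conventions are common simplifying assumptions, not necessarily the authors' exact model.

```python
import numpy as np

def fit_rating_curve(q_samples, c_samples):
    """Fit C = a * Q**b to grab samples by log-log linear regression."""
    b, log_a = np.polyfit(np.log(q_samples), np.log(c_samples), 1)
    return lambda q: np.exp(log_a) * q ** b

def load_kg(conc_ng_per_L, q_m3_per_s, dt_s):
    """Integrate concentration (ng/L) times discharge (m^3/s) to a load in kg."""
    # ng/L * m^3/s = 1e3 ng/s; times dt gives ng; 1e-12 converts ng to kg
    return float(np.sum(conc_ng_per_L * q_m3_per_s * dt_s) * 1e3 * 1e-12)
```

    The CSC variant swaps discharge for SSC as the regression predictor but shares the same integration step, which is why hysteresis in the discharge-concentration relation hurts only the rating-curve estimate.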

  9. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

    Along with the rapid development of the power industry, grid structures become more sophisticated. The validity and rationality of protective relaying are vital to the security of power systems, so it is essential to verify relay setting values on-line. Traditional verification methods mainly include comparison of the protection range and comparison of the calculated setting value; for on-line verification, verifying speed is the key. Comparing the protection range gives accurate results, but the computational burden is heavy and verification is slow. Comparing the calculated setting value is much faster, but the result is conservative and inaccurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of the two traditional methods and proposes a hybrid on-line verification method that combines the advantages of both. This hybrid method can meet the requirements of accurate on-line verification.

  10. A new Schiff base compound N,N'-(2,2-dimetylpropane)-bis(dihydroxylacetophenone): synthesis, experimental and theoretical studies on its crystal structure, FTIR, UV-visible, 1H NMR and 13C NMR spectra.

    PubMed

    Saheb, Vahid; Sheikhshoaie, Iran

    2011-10-15

    The Schiff base compound N,N'-(2,2-dimethylpropane)-bis(dihydroxylacetophenone) (NDHA) is synthesized through the condensation of 2-hydroxylacetophenone and 2,2-dimethyl 1,3-amino propane in methanol at ambient temperature. The yellow crystalline precipitate is used for X-ray single-crystal determination and for measuring the Fourier transform infrared (FTIR), UV-visible, (1)H NMR and (13)C NMR spectra. Electronic structure calculations at the B3LYP, PBEPBE and PW91PW91 levels of theory are performed to optimize the molecular geometry and to calculate the FTIR, (1)H NMR and (13)C NMR spectra of the compound. The time-dependent density functional theory (TD-DFT) method is used to calculate the UV-visible spectrum of NDHA. Vibrational frequencies are determined experimentally and compared with those obtained theoretically, and vibrational assignments and analysis of the fundamental modes of the compound are performed. All theoretical methods reproduce the structure of the compound well. The (1)H NMR and (13)C NMR chemical shifts calculated by all DFT methods are consistent with the experimental data; however, the NMR shielding tensors computed at the B3LYP/6-31+G(d,p) level of theory are in better agreement with the experimental (1)H NMR and (13)C NMR spectra. The electronic absorption spectrum calculated at the B3LYP/6-31+G(d,p) level using the TD-DFT method is in accordance with the observed UV-visible spectrum of NDHA. In addition, some quantum descriptors of the molecule are calculated, conformational analysis is performed, and the results are compared with the crystallographic data. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Comparative study of ab initio nonradiative recombination rate calculations under different formalisms

    NASA Astrophysics Data System (ADS)

    Shi, Lin; Xu, Ke; Wang, Lin-Wang

    2015-05-01

    Nonradiative carrier recombination is of both great applied and fundamental importance, but the correct ab initio approaches to calculate it remain inconclusive. Here we used five different approximations to calculate the nonradiative carrier recombination of two complex defect structures, GaP:Zn_Ga-O_P and GaN:Zn_Ga-V_N, and compared the results with experiments. In order to apply different multiphonon-assisted electron transition formalisms, we have calculated the electron-phonon coupling constants by ab initio density functional theory for all phonon modes. Comparing the different methods, the capture coefficients calculated by the static coupling theory are 4.30×10^-8 and 1.46×10^-7 cm^3/s for GaP:Zn_Ga-O_P and GaN:Zn_Ga-V_N, in good agreement with the experimental results, (4 +2/-1)×10^-8 and 3.0×10^-7 cm^3/s, respectively. We also provided arguments for why the static coupling theory should be used to calculate the nonradiative decays of semiconductors.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.

  13. Two-Photon Transitions in Hydrogen-Like Atoms

    NASA Astrophysics Data System (ADS)

    Martinis, Mladen; Stojić, Marko

    Different methods for evaluating two-photon transition amplitudes in hydrogen-like atoms are compared with an improved method of direct summation. Three separate contributions to the two-photon transition probabilities are calculated. The first, coming from the summation over discrete intermediate states, is performed up to nc(max) = 35. The second contribution, from the integration over the continuum states, is evaluated numerically. The third contribution, the summation from nc(max) to infinity, is calculated approximately using the mean level energy for this region. It is found that the choice of nc(max) controls the numerical error in the calculations and can be used to increase the accuracy of the results much more efficiently than in other methods.

  14. Very large scale wavefunction orthogonalization in Density Functional Theory electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Bekas, C.; Curioni, A.

    2010-06-01

    Enforcing the orthogonality of approximate wavefunctions becomes one of the dominant computational kernels in planewave-based Density Functional Theory electronic structure calculations that involve thousands of atoms. In this context, algorithms that enjoy both excellent scalability and good single-processor performance are much needed. In this paper we present block versions of the Gram-Schmidt method and show that they are excellent candidates for this purpose. We compare the new approach with the state of the art in planewave-based calculations and find that it has much to offer, especially when applied on massively parallel supercomputers such as the IBM Blue Gene/P. The new method achieves excellent sustained performance, surpassing 73 TFLOPS (67% of peak) on 8 Blue Gene/P racks (32,768 compute cores), while enabling more than a twofold decrease in run time compared with the best competing methodology.
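
    A serial Python/NumPy sketch of the block Gram-Schmidt idea follows (single pass, no reorthogonalization; production planewave codes distribute these matrix products across nodes and typically add a second pass for numerical safety):

```python
import numpy as np

def block_gram_schmidt(V, block=64):
    """Orthonormalize the columns of V, processing `block` columns at a time.

    Projecting each block against all previously finished columns is a single
    matrix-matrix product (BLAS-3), which is what gives block variants their
    performance advantage over column-by-column Gram-Schmidt.
    """
    Q = V.astype(float).copy()
    for j in range(0, Q.shape[1], block):
        B = Q[:, j:j + block]
        if j > 0:
            done = Q[:, :j]
            B -= done @ (done.T @ B)            # project out finished blocks
        Q[:, j:j + block], _ = np.linalg.qr(B)  # orthonormalize within the block
    return Q
```

    The block size trades off memory traffic against the cost of the in-block QR, mirroring the tuning the paper performs for the Blue Gene/P.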

  15. Molecular dynamics simulations of bubble formation and cavitation in liquid metals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Insepov, Z.; Hassanein, A.; Bazhirov, T. T.

    2007-11-01

    Thermodynamics and kinetics of nanoscale bubble formation in liquid metals such as Li and Pb were studied by molecular dynamics (MD) simulations at pressures typical for magnetic and inertial fusion. Two different approaches to bubble formation were developed. In one method, radial densities, pressures, surface tensions, and work functions of the cavities in supercooled liquid lithium were calculated and compared with surface tension experimental data. The critical radius of a stable cavity in liquid lithium was found for the first time. In the second method, the cavities were created in the highly stretched region of the liquid phase diagram, and then the stability boundary and the cavitation rates were calculated in liquid lead. The pressure dependences of cavitation frequencies were obtained over the temperature range 700-2700 K in liquid Pb. The results of the MD calculations of the cavitation rate were compared with estimates from classical nucleation theory (CNT).

  16. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital

    PubMed Central

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2016-01-01

    Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals; activity-based costing (ABC) is a newer and more effective cost system. Objective: This study aimed to compare the ABC and TCS methods in calculating the unit cost of medical services and to assess the applicability of ABC in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed in 2013 on accounting data of Kashani Hospital, drawing on the 2012 accounting reports and other relevant sources available at the end of 2012. To apply the ABC method, the hospital was divided into several cost centers, and five cost categories were defined: wage, equipment, space, material, and overhead costs. Activity centers were then defined, and the ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities by using related cost factors. Then the costs of the activities were assigned to cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with that obtained from the TCS. Results: The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, i.e., 50.34 USD higher by the ABC method. The ABC method provided more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
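
    The two-stage assignment described in the Methods (cost centers to activities via cost factors, then activities to cost objects via cost drivers) is proportional allocation applied twice. A generic Python sketch with hypothetical numbers:

```python
def allocate(total_cost, weights):
    """Split a cost pool proportionally to the given factor/driver weights."""
    s = sum(weights.values())
    return {k: total_cost * w / s for k, w in weights.items()}

# Stage 1: a cost center's total is assigned to activities by cost factors
# (activity names and all numbers below are hypothetical).
activity_costs = allocate(120000.0, {"admission": 1.0, "nursing": 2.0})

# Stage 2: each activity's cost is assigned to cost objects by cost drivers,
# e.g. bed-days consumed per ward.
ward_a, ward_b = allocate(activity_costs["nursing"], {"A": 300, "B": 100}).values()
```

    TCS collapses stage 1 into a single overhead rate, which is the source of the cost distortions the study reports.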

  17. On Calculation Methods and Results for Straight Cylindrical Roller Bearing Deflection, Stiffness, and Stress

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2011-01-01

    The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.

  18. Calculation of recoil implantation profiles using known range statistics

    NASA Technical Reports Server (NTRS)

    Fung, C. D.; Avila, R. E.

    1985-01-01

    A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first-order recoils, considering projected range straggles and lateral straggles of recoils but neglecting lateral straggles of projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation, and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggles. From this discussion, a range of surface layer thicknesses for which the method applies is recommended.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.

    We consider the sign problem for classical spin models at complex β = 1/g₀² on L×L lattices. We show that the tensor renormalization group (TRG) method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β, we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the TRG method. We check the convergence of the TRG method for the O(2) model on L×L lattices as the number of states D_s increases. We show that the finite-size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volumes. The location of these zeros agrees with Monte Carlo reweighting calculations for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.

  20. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
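    The error metric the study relies on, mean absolute percent error, is simple to state in code; the glucose values below are invented for illustration.

```python
# Minimal sketch of the mean absolute percent error (MAPE) used in the abstract
# to compare observed and forecasted mean glucose values; data are made up.

def mape(observed, forecast):
    """Mean absolute percent error, in percent."""
    errs = [abs((o - f) / o) for o, f in zip(observed, forecast)]
    return 100.0 * sum(errs) / len(errs)

observed = [150.0, 160.0, 155.0, 148.0]   # hypothetical weekly mean glucose, mg/dL
forecast = [147.0, 164.0, 150.0, 151.0]   # hypothetical damped-trend projections
err = mape(observed, forecast)            # compare against the 10% industry threshold
```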

  1. Experiences with leak rate calculations methods for LBB application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebner, H.; Kastner, W.; Hoefler, A.

    1997-04-01

    In this paper, three leak rate computer programs for the application of leak before break analysis are described and compared. The programs are compared to each other and to results of an HDR Reactor experiment and two real crack cases. The programs analyzed are PIPELEAK, FLORA, and PICEP. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary to also use data from detailed crack investigations.

  2. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
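    The distinction can be made concrete with a small sketch: both intervals are centered on the mean, but one uses the SD (spread of individual biomarker measurements) and the other the SEM (uncertainty of the mean itself, SEM = SD/√n). The data and the 1.96 normal-approximation factor are illustrative assumptions.

```python
import math

# Contrast SD-based and SEM-based intervals for the same (invented) data set.

def mean_sd(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # sample variance
    return m, math.sqrt(var)

def intervals(xs, z=1.96):
    m, sd = mean_sd(xs)
    sem = sd / math.sqrt(len(xs))                         # SEM = SD / sqrt(n)
    return (m - z * sd, m + z * sd), (m - z * sem, m + z * sem)

data = [9.8, 10.1, 10.4, 9.9, 10.3, 10.0]                 # hypothetical biomarker values
sd_ci, sem_ci = intervals(data)
```

    The SEM interval is always the narrower of the two, which is why using it to ask whether an individual measurement is "in the normal range" understates the spread.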

  3. Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation

    NASA Technical Reports Server (NTRS)

    Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    The Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification calculations included manual digitization of synthetic images using Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
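    Once each particle's Voronoi polygon is known, the local void ratio reduces to an area calculation; a minimal sketch with invented areas:

```python
# Local void ratio per Voronoi polygon: the void area within a particle's polygon
# divided by the particle (solid) area. Areas below are illustrative placeholders.

def local_void_ratio(polygon_area, particle_area):
    """e = (polygon area - particle area) / particle area."""
    return (polygon_area - particle_area) / particle_area

# One (polygon_area, particle_area) pair per particle, in consistent units.
cells = [(2.0, 1.2), (1.5, 1.0), (3.0, 1.8)]
ratios = [local_void_ratio(p, s) for p, s in cells]
```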

  4. Sense and Avoid Safety Analysis for Remotely Operated Unmanned Aircraft in the National Airspace System. Version 5

    NASA Technical Reports Server (NTRS)

    Carreno, Victor

    2006-01-01

    This document describes a method to demonstrate that a UAS operating in the NAS can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS will achieve an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. Calculation of the probability of collision for UAS and manned aircraft is accomplished through event/fault trees.
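    The acceptance criterion above is trivially expressed in code; the probabilities used here are placeholders, not values from the safety analysis.

```python
# The abstract's criterion: a UAS meets the equivalent level of safety if
# Risk Ratio = P(collision_UAS) / P(collision_manned) <= 1.

def risk_ratio(p_collision_uas, p_collision_manned):
    return p_collision_uas / p_collision_manned

def meets_equivalent_safety(p_uas, p_manned):
    return risk_ratio(p_uas, p_manned) <= 1.0

# Illustrative placeholder probabilities (e.g. per flight hour, from event/fault trees).
ok = meets_equivalent_safety(1e-9, 2e-9)
```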

  5. Predicting solute partitioning in lipid bilayers: Free energies and partition coefficients from molecular dynamics simulations and COSMOmic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobtorweihen, S., E-mail: jakobtorweihen@tuhh.de; Ingram, T.; Gerlach, T.

    2014-07-28

    Quantitative predictions of biomembrane/water partition coefficients are important, as they are a key property in pharmaceutical applications and toxicological studies. Molecular dynamics (MD) simulations are used to calculate free energy profiles for different solutes in lipid bilayers. How to calculate partition coefficients from these profiles is discussed in detail and different definitions of partition coefficients are compared. Importantly, it is shown that the calculated coefficients are in quantitative agreement with experimental results. Furthermore, we compare free energy profiles from MD simulations to profiles obtained by the recent method COSMOmic, which is an extension of the conductor-like screening model for realistic solvation to micelles and biomembranes. The free energy profiles from these molecular methods are in good agreement. Additionally, solute orientations calculated with MD and COSMOmic are compared and again a good agreement is found. Four different solutes are investigated in detail: 4-ethylphenol, propanol, 5-phenylvaleric acid, and dibenz[a,h]anthracene, whereby the latter belongs to the class of polycyclic aromatic hydrocarbons. The convergence of the free energy profiles from biased MD simulations is discussed and the results are shown to be comparable to equilibrium MD simulations. For 5-phenylvaleric acid the influence of the carboxyl group dihedral angle on free energy profiles is analyzed with MD simulations.
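    As a rough sketch of one common definition discussed in such work, a partition coefficient can be obtained by Boltzmann-averaging the free energy profile over the membrane region; the profile values here are invented and the equal-slice average is a simplifying assumption.

```python
import math

# Sketch: partition coefficient K from a free-energy profile dG(z) across the
# bilayer, K = <exp(-dG/RT)> averaged over equally spaced z-slices.
# The profile below is a made-up illustration, not simulation output.

def partition_coefficient(dg_kj_mol, temperature_k=298.15):
    RT = 8.314e-3 * temperature_k                    # kJ/mol
    boltz = [math.exp(-dg / RT) for dg in dg_kj_mol]
    return sum(boltz) / len(boltz)                   # average over equal z-slices

profile = [0.0, -2.0, -5.0, -6.0, -5.0, -2.0, 0.0]   # hypothetical dG(z), kJ/mol
K = partition_coefficient(profile)                    # K > 1: partitioning into membrane
```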

  6. A new self-shielding method based on a detailed cross-section representation in the resolved energy domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saygin, H.; Hebert, A.

    The calculation of a dilution cross section σ̄_e is the most important step in the self-shielding formalism based on the equivalence principle. If a dilution cross section that accurately characterizes the physical situation can be calculated, it can then be used for calculating the effective resonance integrals and obtaining accurate self-shielded cross sections. A new technique for the calculation of equivalent cross sections based on the formalism of Riemann integration in the resolved energy domain is proposed. This new method is compared to the generalized Stamm'ler method, which is also based on an equivalence principle, for a two-region cylindrical cell and for a small pressurized water reactor assembly in two dimensions. The accuracy of each computing approach is obtained using reference results obtained from a fine-group slowing-down code named CESCOL. It is shown that the proposed method leads to slightly better performance than the generalized Stamm'ler approach.

  7. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, such as compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have been substantially improved in recent years, the need emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge), for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
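    The full method targets real-gas thermodynamics; as a much simpler hedged illustration, the ideal-gas polytropic efficiency uses the same end-point inputs (suction and discharge pressure and temperature):

```python
import math

# Ideal-gas simplification (not the paper's real-gas method) of the polytropic
# efficiency from compression end points: eta_p = ((k-1)/k) * ln(p2/p1) / ln(T2/T1).
# Gas properties and operating points below are assumptions.

def polytropic_efficiency_ideal(p1, t1, p2, t2, k=1.4):
    """Temperatures in kelvin, pressures in any consistent unit."""
    return ((k - 1.0) / k) * math.log(p2 / p1) / math.log(t2 / t1)

eta = polytropic_efficiency_ideal(1.0e5, 293.15, 4.0e5, 480.0, k=1.4)
```

    In the isentropic limit, where T2 = T1·(p2/p1)^((k−1)/k), the expression returns exactly 1, which is a convenient sanity check.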

  8. Search for neutrino transitions to sterile states using an intense beta source

    NASA Astrophysics Data System (ADS)

    Oralbaev, A. Yu.; Skorokhvatov, M. D.; Titov, O. A.

    2017-11-01

    The results of beta spectrum calculations for two 144Pr decay branches are presented, which are of interest for reconstructing the spectrum of antineutrinos from the 144Ce-144Pr source to be used in the SOX experiment on the search for sterile neutrinos. The main factors affecting the beta spectrum are analyzed, their calculation methods are given, and calculations are compared with experiment.

  9. Calculations, and comparison with an ideal minimum, of trimmed drag for conventional and canard configurations having various levels of static stability

    NASA Technical Reports Server (NTRS)

    Mclaughlin, M. D.

    1977-01-01

    Classical drag equations were used to calculate total and induced drag and ratios of stabilizer lift to wing lift for a variety of conventional and canard configurations. The flight efficiencies of such configurations, trimmed in pitch and having various values of static margin, are evaluated. Classical calculation methods are compared with more modern lifting-surface theory.

  10. Assessment of radiant temperature in a closed incubator.

    PubMed

    Décima, Pauline; Stéphan-Blanchard, Erwan; Pelletier, Amandine; Ghyselen, Laurent; Delanaud, Stéphane; Dégrugilliers, Loïc; Telliez, Frédéric; Bach, Véronique; Libert, Jean-Pierre

    2012-08-01

    In closed incubators, radiative heat loss (R), which is assessed from the mean radiant temperature (Tr), accounts for 40-60% of the neonate's total heat loss. In the absence of a benchmark method for calculating Tr (often assumed to equal the incubator air temperature), errors could have a considerable impact on the thermal management of neonates. We compared Tr using two conventional methods (measurement with a black-globe thermometer and a radiative "view factor" approach) and two methods based on nude thermal manikins (a simple, schematic design from Wheldon and a multisegment, anthropometric device developed in our laboratory). By taking the Tr estimations for each method, we calculated metabolic heat production values by partitional calorimetry and then compared them with the values calculated from V(O2) and V(CO2) measured in 13 preterm neonates. Comparisons between the calculated and measured metabolic heat production values showed that the two conventional methods and Wheldon's manikin underestimated R, whereas with the anthropomorphic thermal manikin the simulated versus clinical difference was not statistically significant. In conclusion, there is a need for a safety standard for measuring Tr in a closed incubator. This standard should also provide estimating equations for all avenues of the neonate's heat exchange, taking into account metabolic heat production and the thermal insulation provided by the diaper and the mattress. Although thermal manikins appear to be particularly appropriate for measuring Tr, the current lack of standardized procedures limits their widespread use.
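    For reference, the black-globe route mentioned above is conventionally evaluated with a forced-convection formula of the ISO 7726 type; the 2.5e8 coefficient is the standard-form value for a standard globe, and the incubator readings below are invented, not the paper's data.

```python
# Conventional forced-convection estimate of mean radiant temperature Tr from
# globe temperature Tg, air temperature Ta (deg C) and air speed v (m/s):
#   Tr = [(Tg + 273)^4 + 2.5e8 * v^0.6 * (Tg - Ta)]^(1/4) - 273
# Coefficient and exponent follow the standard globe-thermometer form (assumed here).

def mean_radiant_temp(tg_c, ta_c, v_ms):
    tr_k4 = (tg_c + 273.0) ** 4 + 2.5e8 * v_ms ** 0.6 * (tg_c - ta_c)
    return tr_k4 ** 0.25 - 273.0

tr = mean_radiant_temp(34.5, 34.0, 0.1)   # hypothetical incubator readings
```

    When the globe reads warmer than the air, the radiant temperature is higher still; when globe and air temperatures coincide, Tr equals the globe reading.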

  11. Comparative study on power generation of dual-cathode microbial fuel cell according to polarization methods.

    PubMed

    Lee, Kang-yu; Ryu, Wyan-seuk; Cho, Sung-il; Lim, Kyeong-ho

    2015-11-01

    Microbial fuel cells (MFCs) exist in various forms depending on the type of pollutant to be removed and the expected performance. Dual-cathode MFCs, with their simple structure, are capable of removing both organic matter and nitrogen. Moreover, various methods are available for the collection of polarization data, which can be used to calculate the maximum power density, an important factor of MFCs. Many researchers prefer the method of varying the external resistance in a single-cycle due to the short measurement time and high accuracy. This study compared power densities of dual-cathode MFCs in a single-cycle with values calculated over multi-cycles to determine the optimal polarization method. External resistance was varied from high to low and vice versa in the single-cycle, to calculate power density. External resistance was organized in descending order with initial start-up at open circuit voltage (OCV), and then it was organized in descending order again after the initial start-up at 1000 Ω. As a result, power density was underestimated at the anoxic cathode when the external resistance was varied from low to high, and overestimated at the aerobic cathode and anoxic cathode when external resistance at OCV was reduced following initial start-up. In calculating the power densities of dual-cathode MFCs, this paper recommends the method of gradually reducing the external resistance after initial start-up with high external resistance. Copyright © 2015 Elsevier Ltd. All rights reserved.
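    Each point on a polarization curve converts a measured cell voltage at a given external resistance into a power density, P = V²/(R·A); the sweep below uses invented voltages and electrode area, not the study's measurements.

```python
# Power density at a given external resistance, as used when sweeping resistance
# to build a polarization curve. All values are illustrative placeholders.

def power_density(voltage_v, resistance_ohm, area_m2):
    """P = V^2 / (R * A), in W per m^2 of electrode area."""
    return voltage_v ** 2 / (resistance_ohm * area_m2)

# Sweep external resistance from high to low (the direction recommended after
# start-up at high external resistance).
resistances = [1000.0, 500.0, 200.0, 100.0, 50.0]   # ohms
voltages = [0.60, 0.52, 0.40, 0.28, 0.17]           # hypothetical cell voltages, V
area = 2.5e-3                                       # hypothetical electrode area, m^2
densities = [power_density(v, r, area) for v, r in zip(voltages, resistances)]
max_density = max(densities)                        # the reported "maximum power density"
```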

  12. A STUDY OF PREDICTED BONE MARROW DISTRIBUTION ON CALCULATED MARROW DOSE FROM EXTERNAL RADIATION EXPOSURES USING TWO SETS OF IMAGE DATA FOR THE SAME INDIVIDUAL

    PubMed Central

    Caracappa, Peter F.; Chao, T. C. Ephraim; Xu, X. George

    2010-01-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies.
    The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body. PMID:19430219
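    The cellularity-factor method amounts to a per-segment scaling of total marrow mass; a minimal sketch with placeholder masses and factors (not ICRP reference values):

```python
# Sketch of the second distribution method from the abstract: red-marrow mass per
# bone segment = total marrow mass in that segment x its cellularity factor.
# Segment masses and factors below are illustrative placeholders.

def red_marrow_masses(total_marrow_g, cellularity):
    return {seg: total_marrow_g[seg] * cellularity[seg] for seg in total_marrow_g}

total_marrow_g = {"skull": 150.0, "ribs": 200.0, "femur_upper": 120.0}   # hypothetical
cellularity = {"skull": 0.38, "ribs": 0.70, "femur_upper": 0.25}         # hypothetical
red_g = red_marrow_masses(total_marrow_g, cellularity)
```

    Contrast with the first method, which would apply a single skeleton-wide red/yellow fraction to every segment.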

  13. A study of predicted bone marrow distribution on calculated marrow dose from external radiation exposures using two sets of image data for the same individual.

    PubMed

    Caracappa, Peter F; Chao, T C Ephraim; Xu, X George

    2009-06-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies.
    The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body.

  14. A practical method to avoid zero-point leak in molecular dynamics calculations: application to the water dimer.

    PubMed

    Czakó, Gábor; Kaledin, Alexey L; Bowman, Joel M

    2010-04-28

    We report the implementation of a previously suggested method to constrain a molecular system to have mode-specific vibrational energy greater than or equal to the zero-point energy in quasiclassical trajectory calculations [J. M. Bowman et al., J. Chem. Phys. 91, 2859 (1989); W. H. Miller et al., J. Chem. Phys. 91, 2863 (1989)]. The implementation is made practical by using a technique described recently [G. Czako and J. M. Bowman, J. Chem. Phys. 131, 244302 (2009)], where a normal-mode analysis is performed during the course of a trajectory and which gives only real-valued frequencies. The method is applied to the water dimer, where its effectiveness is shown by computing mode energies as a function of integration time. Radial distribution functions are also calculated using constrained quasiclassical and standard classical molecular dynamics at low temperature and at 300 K and compared to rigorous quantum path integral calculations.

  15. A study of the chiro-optical properties of Carvone

    NASA Astrophysics Data System (ADS)

    Lambert, Jason

    2011-10-01

    The intrinsic optical rotatory dispersion (IORD) and circular dichroism (CD) of the conformationally flexible carvone molecule have been investigated in 17 solvents and compared with results from calculations for the "free" (gas-phase) molecule. The G3 method was used to determine the relative energies of the six conformers. The ORD of (R)-(-)-carvone at 589 nm was calculated using coupled cluster and density-functional methods, including temperature-dependent vibrational corrections. Vibrational corrections are significant and are primarily associated with normal modes involving the stereogenic carbon atom and the carbonyl group, whose n→π* excitation plays a significant role in the chiroptical response of carvone. However, without the vibrational correction, the calculated ORD is of opposite sign to that of experiment for the CCSD and B3LYP methods. Calculations performed in solution using the PCM model were also opposite in sign to experiment when using the B3LYP density functional.

  16. Analytical Method Used to Calculate Pile Foundations with the Widening Up on a Horizontal Static Impact

    NASA Astrophysics Data System (ADS)

    Kupchikova, N. V.; Kurbatskiy, E. N.

    2017-11-01

    This paper presents a methodology for the analytical solution of the behavior of pile foundations with surface broadening and inclined side faces in the soil mass, based on the properties of the Fourier transform of finite functions. A comparative analysis of the calculation results using the suggested method is presented for prismatic piles, prismatic piles with surface broadening, and piles with precast wedges at the surface.

  17. Comparative Analysis of 2-D Versus 3-D Ultrasound Estimation of the Fetal Adrenal Gland Volume and Prediction of Preterm Birth

    PubMed Central

    Turan, Ozhan M.; Turan, Sifa; Buhimschi, Irina A.; Funai, Edmund F.; Campbell, Katherine H.; Bahtiyar, Ozan M.; Harman, Chris R.; Copel, Joshua A.; Baschat, Ahmet A; Buhimschi, Catalin S.

    2013-01-01

    Objective: We aimed to test the hypothesis that 2D fetal AGV measurements offer volume estimates similar to volume calculations based on the 3D technique. Methods: Fetal AGV was estimated by 3D ultrasound (VOCAL) in 93 women with signs/symptoms of preterm labor and 73 controls. Fetal AGV was also calculated using an ellipsoid formula derived from 2D measurements of the same volume blocks (0.523 × length × width × depth). Comparisons were performed by intra-class correlation coefficient (ICC), coefficient of repeatability, and the Bland-Altman method. The cAGV (AGV/fetal weight) was calculated for both methods and compared for prediction of PTB within 7 days. Results: Among 168 volumes, there was a significant correlation between the 3D and 2D methods (ICC=0.979 [95%CI: 0.971-0.984]). The coefficient of repeatability for the 3D method was superior to that of the 2D method (intra-observer 3D: 30.8, 2D: 57.6; inter-observer 3D: 12.2, 2D: 15.6). Based on 2D calculations, a cAGV ≥ 433 mm3/kg was best for prediction of PTB (sensitivity: 75% (95%CI=59-87); specificity: 89% (95%CI=82-94)). Sensitivity and specificity for the 3D cAGV (cut-off ≥ 420 mm3/kg) were 85% (95%CI=70-94) and 95% (95%CI=90-98), respectively. In receiver operating characteristic curve analysis, 3D cAGV was superior to 2D cAGV for prediction of PTB (z=1.99, p=0.047). Conclusion: 2D volume estimation of the fetal adrenal gland using the ellipsoid formula cannot replace 3D AGV calculations for prediction of PTB. PMID:22644825
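    The 2D estimate and the ratio used for prediction follow directly from the formulas in the abstract; the measurements below are invented, and only the 0.523 factor and the 433 mm3/kg cut-off come from the text.

```python
# 2-D adrenal gland volume (AGV) by the ellipsoid formula and the corrected
# cAGV = AGV / fetal weight, with the abstract's 2-D cut-off for PTB within 7 days.
# Gland measurements and fetal weight are hypothetical.

def agv_2d(length_mm, width_mm, depth_mm):
    return 0.523 * length_mm * width_mm * depth_mm   # mm^3

def corrected_agv(agv_mm3, fetal_weight_kg):
    return agv_mm3 / fetal_weight_kg                 # mm^3 per kg

agv = agv_2d(22.0, 10.0, 9.0)    # hypothetical measurements, mm
cagv = corrected_agv(agv, 2.2)   # hypothetical fetal weight, kg
high_risk = cagv >= 433.0        # 2-D cut-off reported in the abstract
```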

  18. Use of condensed videos in a flipped classroom for pharmaceutical calculations: Student perceptions and academic performance.

    PubMed

    Gloudeman, Mark W; Shah-Manek, Bijal; Wong, Terri H; Vo, Christina; Ip, Eric J

    2018-02-01

    The flipped teaching method was implemented through a series of multiple condensed videos for pharmaceutical calculations, with student perceptions and academic performance assessed post-intervention. Student perceptions in the intervention group were assessed via an online survey. Pharmaceutical calculations exam scores of the intervention group were compared to those of the control group. The intervention group spent a greater amount of class time on active learning. The majority of students (68.2%) thought that the flipped teaching method was more effective for learning pharmaceutical calculations than the traditional method. The mean exam scores of the intervention group were not significantly different from those of the control group (80.5 ± 15.8% vs 77.8 ± 16.8%; p = 0.253). Previous studies of the flipped teaching method have shown mixed results with regard to student perceptions and exam scores, where either student satisfaction increased or exam scores improved, but rarely both. The flipped teaching method was rated favorably by a majority of students. It resulted in similar pharmaceutical calculations exam scores, and it appears to be an acceptable and effective option for delivering pharmaceutical calculations in a Doctor of Pharmacy program. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Effectiveness of the Stewart Method in the Evaluation of Blood Gas Parameters.

    PubMed

    Gezer, Mustafa; Bulucu, Fatih; Ozturk, Kadir; Kilic, Selim; Kaldirim, Umit; Eyi, Yusuf Emrah

    2015-03-01

    In 1981, Peter A. Stewart published a paper describing his concept of employing the strong ion difference. In this study we compared the HCO3 levels and anion gap (AG) calculated using the classic method and the Stewart method. Four hundred nine (409) arterial blood gases from 90 patients were collected retrospectively. Some were obtained from the same patients at different times and under different conditions. All blood samples were evaluated using the same device (ABL 800 Blood Gas Analyzer). HCO3 levels, AG, and strong ion difference (SID) were calculated using the Stewart method via the website AcidBase.org, incorporating parameters including age, serum lactate, glucose, sodium, and pH. According to the classic method, the levels of HCO3 and AG were 22.4±7.2 mEq/L and 20.1±4.1 mEq/L, respectively. According to the Stewart method, the levels of HCO3 and AG were 22.6±7.4 and 19.9±4.5 mEq/L, respectively. There was a strong correlation between the classic method and the Stewart method for calculating HCO3 and AG. The Stewart method may be more effective in the evaluation of complex metabolic acidosis.
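    As a minimal sketch of the classic-method quantities being compared (with, for illustration only, a simplified apparent strong ion difference; the full Stewart calculation involves additional variables such as albumin, phosphate, and lactate):

```python
# Classic anion gap AG = Na - (Cl + HCO3), and a simplified apparent strong ion
# difference SID = (Na + K) - Cl. Electrolyte values are illustrative, in mEq/L;
# the simplified SID is an assumption, not the AcidBase.org calculation.

def anion_gap(na, cl, hco3):
    return na - (cl + hco3)

def sid_apparent(na, k, cl):
    return (na + k) - cl

ag = anion_gap(na=140.0, cl=104.0, hco3=24.0)
sid = sid_apparent(na=140.0, k=4.0, cl=104.0)
```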

  20. Role of regression analysis and variation of rheological data in calculation of pressure drop for sludge pipelines.

    PubMed

    Farno, E; Coventry, K; Slatter, P; Eshtiaghi, N

    2018-06-15

    Sludge pumps in wastewater treatment plants are often oversized due to uncertainty in the calculation of pressure drop. This issue costs industry millions of dollars to purchase and operate the oversized pumps. Besides cost, the higher electricity consumption is associated with extra CO2 emissions, which creates a substantial environmental impact. Calculation of pressure drop via current pipe flow theory requires model estimation of flow curve data, which depends on regression analysis and also varies with the natural variation of rheological data. This study investigates the impact of variation of rheological data and regression analysis on the variation of pressure drop calculated via current pipe flow theories. Results compare the variation of calculated pressure drop between different models and regression methods and indicate the suitability of each method. Copyright © 2018 Elsevier Ltd. All rights reserved.
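    As a hedged example of the kind of model-dependent calculation at issue, the laminar pressure drop for a power-law fluid follows directly from the fitted rheological parameters K and n; the values below are invented, not fitted sludge data.

```python
# Laminar pipe flow of a power-law (Ostwald) fluid:
#   tau_w = K * (8V/D * (3n+1)/(4n))^n,   dP = 4 * L * tau_w / D
# so uncertainty in the regressed K and n propagates directly into dP.
# K, n and the operating point below are illustrative assumptions.

def pressure_drop_power_law(K, n, velocity, diameter, length):
    """Laminar power-law pressure drop in Pa (SI inputs)."""
    shear_term = (8.0 * velocity / diameter) * (3.0 * n + 1.0) / (4.0 * n)
    tau_wall = K * shear_term ** n
    return 4.0 * length * tau_wall / diameter

dp = pressure_drop_power_law(K=2.5, n=0.4, velocity=1.0, diameter=0.1, length=100.0)
```

    For n = 1 and K = μ the expression collapses to the Hagen-Poiseuille result dP = 32 μ L V / D², a useful consistency check.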

  1. A Comparison of Vertical Stiffness Values Calculated from Different Measures of Center of Mass Displacement in Single-Leg Hopping.

    PubMed

    Mudie, Kurt L; Gupta, Amitabh; Green, Simon; Hobara, Hiroaki; Clothier, Peter J

    2017-02-01

    This study assessed the agreement between Kvert calculated from 4 different methods of estimating vertical displacement of the center of mass (COM) during single-leg hopping. Healthy participants (N = 38) completed a 10-s single-leg hopping effort on a force plate, with 3D motion of the lower limb, pelvis, and trunk captured. Derived variables were calculated for a total of 753 hop cycles using 4 methods: double integration of the vertical ground reaction force, the law of falling bodies, a marker cluster on the sacrum, and a segmental analysis method. Bland-Altman plots demonstrated that Kvert calculated using the segmental analysis and double integration methods has a relatively small bias (0.93 kN·m⁻¹) and 95% limits of agreement (−1.89 to 3.75 kN·m⁻¹). In contrast, a greater bias was revealed between the sacral marker cluster and segmental analysis (−2.32 kN·m⁻¹), the sacral marker cluster and double integration (−3.25 kN·m⁻¹), and the law of falling bodies compared with all methods (17.26-20.52 kN·m⁻¹). These findings suggest the segmental analysis and double integration methods can be used interchangeably for the calculation of Kvert during single-leg hopping. The authors propose the segmental analysis method be considered the gold standard for the calculation of Kvert during single-leg, on-the-spot hopping.
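
The double-integration method referred to above can be sketched on an idealized hop cycle (half-sine contact force, ballistic flight); the mass, timings, and the definition of Kvert as peak force over peak COM displacement are simplifying assumptions, not the study's protocol:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Idealized hop cycle: half-sine ground reaction force during contact,
# zero force in flight; peak force chosen so the cycle is periodic.
m, g = 70.0, 9.81                 # body mass (kg), gravity (m/s^2)
tc, tf = 0.3, 0.2                 # contact time, flight time (s)
fmax = m * g * (tc + tf) / (2.0 / np.pi * tc)   # zero net impulse over a cycle

t = np.linspace(0.0, tc + tf, 5001)
F = np.where(t < tc, fmax * np.sin(np.pi * t / tc), 0.0)

# Double integration of net force: acceleration -> COM velocity -> position.
a = (F - m * g) / m
v = -g * tf / 2.0 + cumulative_trapezoid(a, t, initial=0.0)  # touchdown velocity
y = cumulative_trapezoid(v, t, initial=0.0)

k_vert = fmax / -y.min()          # peak force / peak downward COM displacement
print(f"K_vert ~ {k_vert / 1000.0:.1f} kN/m")
```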

  2. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
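
The Bayesian posterior used in this approach, proportional to exp(-(S+B))(S+B)^N normalized over S ≥ 0, can be integrated numerically. The sketch below computes a one-sided upper limit on a grid; note the paper tabulates minimal-length two-sided intervals, so this illustrates the machinery rather than reproducing its tables:

```python
import numpy as np
from math import exp, factorial

def kbn_posterior(N, B, s):
    """Posterior density for source strength S given N observed counts and
    known mean background B (Kraft, Burrows & Nousek 1991):
    f(S) = C exp(-(S+B)) (S+B)^N / N!,  1/C = sum_{n=0}^{N} exp(-B) B^n / n!."""
    C = 1.0 / sum(exp(-B) * B**n / factorial(n) for n in range(N + 1))
    return C * np.exp(-(s + B)) * (s + B) ** N / factorial(N)

def upper_limit(N, B, cl=0.90, s_max=50.0, n_pts=200001):
    """One-sided limit S_u with P(S <= S_u) = cl, by grid integration.
    (The paper instead tabulates minimal-length two-sided intervals.)"""
    s = np.linspace(0.0, s_max, n_pts)
    cdf = np.cumsum(kbn_posterior(N, B, s)) * (s[1] - s[0])
    return s[np.searchsorted(cdf, cl)]

# With no background this reduces to the simple e^{-S} S^N / N! posterior.
print(upper_limit(0, 0.0))   # ln(10) ~ 2.30
print(upper_limit(3, 1.0))   # a known background lowers the limit vs B = 0
```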

  3. Adaptive intensity modulated radiotherapy for advanced prostate cancer

    NASA Astrophysics Data System (ADS)

    Ludlum, Erica Marie

    The purpose of this research is to develop and evaluate improvements in intensity modulated radiotherapy (IMRT) for concurrent treatment of the prostate and pelvic lymph nodes. The first objective is to decrease delivery time while maintaining treatment quality, and to evaluate the effectiveness and efficiency of a novel one-step optimization compared with conventional two-step optimization. Both planning methods are examined at multiple levels of complexity by comparing the number of beam apertures, or segments, the amount of radiation delivered as measured by monitor units (MUs), and delivery time. One-step optimization is demonstrated to simplify IMRT planning and reduce segments (from 160 to 40), MUs (from 911 to 746), and delivery time (from 22 to 7 min) with comparable plan quality. The second objective is to examine the capability of three commercial dose calculation engines, employing different levels of accuracy and efficiency, to handle high-Z materials, such as metallic hip prostheses, included in the treatment field. Pencil beam, convolution superposition, and Monte Carlo dose calculation engines are compared by examining the dose differences for patient plans with unilateral and bilateral hip prostheses, and for phantom plans with a metal insert for comparison with film measurements. The convolution superposition and Monte Carlo methods calculate doses that are 1.3% and 34.5% less than the pencil beam method, respectively. Film results demonstrate that Monte Carlo most closely represents actual radiation delivery, but none of the three engines accurately predicts the dose distribution when high-Z heterogeneities exist in the treatment fields. The final objective is to improve the accuracy of IMRT delivery by accounting for independent organ motion during concurrent treatment of the prostate and pelvic lymph nodes. A leaf-shifting algorithm is developed to track daily prostate position without requiring online dose calculation. 
Compared to conventional methods of adjusting patient position, adjusting the multileaf collimator (MLC) leaves associated with the prostate in each segment significantly improves lymph node dose coverage (maintains 45 Gy compared to 42.7, 38.3, and 34.0 Gy for iso-shifts of 0.5, 1 and 1.5 cm). Altering the MLC portal shape is demonstrated as a new and effective solution to independent prostate movement during concurrent treatment.

  4. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    PubMed

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    A method based on the rotation of the angular spectrum in the frequency domain is generally used for diffraction simulation between tilted planes. Because of the rotation of the angular spectrum, the interval between sampling points in the Fourier domain is not uniform. Conventional fast Fourier transform (FFT)-based methods therefore need a spectrum interpolation to approximate the values on equidistant sampling points. However, due to the numerical error introduced by this interpolation, the calculation accuracy degrades rapidly as the rotation angle increases. Here, the diffraction propagation between tilted planes is recast as a discrete Fourier transform on unevenly spaced sampling points, which can be evaluated efficiently and precisely with the nonuniform fast Fourier transform (NUFFT) method. The most important advantage of this method is that the conventional spectrum interpolation is avoided, so high calculation accuracy is guaranteed for different rotation angles, even when the rotation angle is close to π/2. Its calculation efficiency is also comparable with that of conventional FFT-based methods. Numerical examples, together with a discussion of calculation accuracy and the sampling method, are presented.
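
The quantity an NUFFT evaluates here is simply a discrete Fourier sum on non-equispaced points. A direct O(MN) evaluation makes this concrete (a real implementation would use an NUFFT library such as FINUFFT for near-FFT cost); the rotated frequency grid below is illustrative:

```python
import numpy as np

def nudft2(f, x, freqs):
    """Direct type-2 nonuniform DFT: F(k) = sum_n f[n] exp(-i k x[n]) at
    arbitrary frequencies k. O(M*N); an NUFFT computes the same sums in
    O(N log N) to a prescribed tolerance."""
    return np.exp(-1j * np.outer(freqs, x)) @ f

N = 64
x = 2 * np.pi * np.arange(N) / N          # uniform samples, for validation
f = np.random.default_rng(1).standard_normal(N)

# On integer frequencies this reduces to the ordinary DFT...
k_uniform = np.arange(N)
F_direct = nudft2(f, x, k_uniform)
F_fft = np.fft.fft(f)
print(np.max(np.abs(F_direct - F_fft)))   # numerically zero

# ...but it can equally be evaluated on the uneven frequency grid produced
# by rotating the angular spectrum, with no interpolation step.
k_rotated = k_uniform * np.cos(np.deg2rad(60.0))   # illustrative uneven grid
F_rot = nudft2(f, x, k_rotated)
```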

  5. [A computer tomography assisted method for the automatic detection of region of interest in dynamic kidney images].

    PubMed

    Jing, Xueping; Zheng, Xiujuan; Song, Shaoli; Liu, Kai

    2017-12-01

    Glomerular filtration rate (GFR), which can be estimated by the Gates method with dynamic kidney single photon emission computed tomography (SPECT) imaging, is a key indicator of renal function. In this paper, an automatic computed tomography (CT)-assisted detection method for the kidney region of interest (ROI) is proposed to achieve objective and accurate GFR calculation. In this method, the CT coronal projection image and the enhanced SPECT synthetic image are first generated and registered together. Then, the kidney ROIs are delineated using a modified level set algorithm, and the background ROIs are obtained from the kidney ROIs. Finally, the GFR value is calculated via the Gates method. The GFR values estimated by the proposed method were consistent with the clinical reports. This automatic method can improve the accuracy and stability of kidney ROI detection for GFR calculation, especially when kidney function is severely damaged.

  6. EVALUATION OF THE CARBON FOOTPRINT OF AN INNOVATIVE SEWER REHABILITATION METHOD - abstract

    EPA Science Inventory

    A benefit of trenchless methods touted by many practitioners when compared to open cut construction is lower carbon dioxide emissions. In an attempt to verify these claims, tools have been developed that calculate the environmental impact of traditional open cut methods and commo...

  7. EVALUATION OF THE CARBON FOOTPRINT OF AN INNOVATIVE SEWER REHABILITATION METHOD

    EPA Science Inventory

    A benefit of trenchless methods touted by many practitioners when compared to open cut construction is lower carbon dioxide emissions. In an attempt to verify these claims, tools have been developed that calculate the environmental impact of traditional open cut methods and commo...

  8. Method for computing energy release rate using the elastic work factor approach

    NASA Astrophysics Data System (ADS)

    Rhee, K. Y.; Ernst, H. A.

    1992-01-01

    The elastic work factor eta(el) concept was applied to composite structures for the calculation of total energy release rate using a single specimen. Cracked lap shear specimens with four different unidirectional fiber orientations were used to examine the dependence of eta(el) on material properties, and three different thickness ratios (lap/strap) were used to determine how geometric conditions affect eta(el). The eta(el) values were calculated in two ways: the compliance method and the crack closure method. The results show that the two methods produce comparable eta(el) values and that, while eta(el) is affected significantly by geometric conditions, it is reasonably independent of material properties for a given geometry. The results also show that the elastic work factor can be used to calculate the total energy release rate from a single specimen.
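
For reference, the compliance route from measured data to energy release rate follows Irwin's relation G = P²/(2B) · dC/da. The sketch below uses made-up compliance data and a finite-difference derivative; it illustrates the generic compliance method, not the paper's eta(el) evaluation:

```python
import numpy as np

# Compliance method for energy release rate: G = P^2/(2*B) * dC/da, where
# C(a) is specimen compliance vs crack length a and B is specimen width.
# The compliance curve below is synthetic (illustrative only).
a = np.linspace(0.02, 0.05, 7)            # crack lengths, m
C = 1.0e-6 + 5.0e-3 * a**3                # compliance, m/N (made-up C(a))

P = 1000.0                                # applied load, N
B = 0.025                                 # specimen width, m

dCda = np.gradient(C, a)                  # finite-difference dC/da
G = P**2 / (2.0 * B) * dCda               # energy release rate, J/m^2
print(G)
```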

  9. Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu

    2013-12-15

    The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.

  10. Progesterone and testosterone studies by neutron scattering and nuclear magnetic resonance methods and quantum chemistry calculations

    NASA Astrophysics Data System (ADS)

    Szyczewski, A.; Hołderna-Natkaniec, K.; Natkaniec, I.

    2004-05-01

    Inelastic incoherent neutron scattering spectra of progesterone and testosterone measured at 20 and 290 K were compared with the IR spectra measured at 290 K. The phonon density of states spectra display well-resolved peaks of low-frequency internal vibration modes up to 1200 cm⁻¹. Quantum chemistry calculations were performed by the semiempirical PM3 method and by the density functional theory method with different basis sets for the isolated molecule, as well as for the dimer system of testosterone. The proposed assignment of the internal vibration normal modes allows conclusions about the sequence of onset of the torsional motions of the CH3 groups. These conclusions were correlated with the results of proton molecular dynamics studies performed by the NMR method. The GAUSSIAN program was used for the calculations.

  11. An opportunity cost approach to sample size calculation in cost-effectiveness analysis.

    PubMed

    Gafni, A; Walter, S D; Birch, S; Sendi, P

    2008-01-01

    The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample sizes to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, the use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost, so the validity of ICER-based approaches to sample size calculation can be challenged. Alternative methods for determining improvements in efficiency that do not depend on ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe these additional data requirements are a small price to pay for an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers. Copyright (c) 2007 John Wiley & Sons, Ltd.
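
The ICER decision rule mentioned above, and its equivalent net-benefit reformulation, can be stated in a few lines; the numbers are hypothetical:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return delta_cost / delta_effect

def net_monetary_benefit(delta_cost, delta_effect, lam):
    """NMB = lambda * dE - dC; the intervention is deemed efficient when
    NMB > 0, which is equivalent to ICER < lambda whenever dE > 0."""
    return lam * delta_effect - delta_cost

dC, dE, lam = 12000.0, 0.3, 50000.0   # cost ($), effect (QALYs), threshold ($/QALY)
print(icer(dC, dE))                            # 40000 $/QALY
print(net_monetary_benefit(dC, dE, lam) > 0)   # True: ICER below lambda
```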

  12. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Capacity value calculations for these resources are thus often reduced to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.

  13. How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.

    Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared, along with reference air kerma, to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of the uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. 
    Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.
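
A "standard calculation" of PSD from indirect metrics typically scales the displayed reference air kerma by an inverse-square correction plus table, backscatter, and tissue-conversion factors. The sketch below is a generic version of that idea with assumed default values; it is not the correction chain used in this study:

```python
def psd_estimate(k_ref, d_ref=0.60, d_skin=0.70,
                 f_table=0.80, f_bs=1.35, f_tissue=1.06):
    """Rough peak-skin-dose estimate (Gy) from cumulative reference air
    kerma k_ref (Gy): inverse-square correction from the reference point
    (d_ref from the focal spot) to the skin plane (d_skin), times assumed
    table transmission, backscatter, and air-to-tissue conversion factors.
    All default values are illustrative assumptions, not this study's."""
    return k_ref * (d_ref / d_skin) ** 2 * f_table * f_bs * f_tissue

print(psd_estimate(2.0))   # e.g. for 2 Gy displayed reference air kerma
```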

  14. Measurement and simulation of thermal neutron flux distribution in the RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.

    2018-01-01

    The in-core thermal neutron flux distribution was determined by measurement and simulation for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a self-powered neutron detector (SPND) was performed to verify and validate the computational methods used for neutron flux calculation in RTP. The experimental results served to validate calculations performed with the Monte Carlo code MCNP, and detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux map obtained reveals the heterogeneous configuration of the core. Based on both measurement and simulation, the thermal flux profile peaks at the centre of the core and gradually decreases towards its outer side. The results show reasonably good agreement between calculation and measurement, with both giving the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurement. Since the model predicts the in-core neutron flux distribution well, it can be used for characterization of the full core: neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.

  15. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H; Barbee, D; Wang, W

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of a planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified "tissue" types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability of belonging to each tissue class, then assigning a CT HU as a probability-weighted summation of the classes' CT centroids. Two synthetic CTs were generated from a CBCT: s-CT, using the centroids from classification of the individual patient's CBCT/CT data; and s2-CT, using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from the planning CT within 3%, while doses calculated with heterogeneity correction off or on raw CBCT show DVH differences of up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations comparable to the CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
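
The classification and HU-mapping steps can be sketched with a minimal fuzzy c-means on toy (CBCT, CT) HU pairs; using two clusters instead of five, and all the HU values below, are illustrative assumptions:

```python
import numpy as np

def fcm(X, m=2.0, n_iter=50):
    """Minimal two-cluster fuzzy c-means on feature rows X (N x 2): alternate
    membership and centroid updates. A generic sketch, not the authors' code."""
    # Deterministic, spread-out initialization along the first feature.
    centroids = X[np.argsort(X[:, 0])[[0, -1]]].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        w = U ** m
        centroids = (w.T @ X) / w.sum(axis=0)[:, None]
    return centroids, U

# Toy paired (CBCT HU, CT HU) samples for two "tissues" (illustrative values;
# the study used five classes on real planning-CT / day-1-CBCT pairs).
rng = np.random.default_rng(1)
soft = np.stack([rng.normal(60, 15, 500), rng.normal(40, 10, 500)], axis=1)
bone = np.stack([rng.normal(700, 40, 500), rng.normal(900, 50, 500)], axis=1)
centroids, _ = fcm(np.vstack([soft, bone]))
cbct_c, ct_c = centroids[:, 0], centroids[:, 1]

def synthesize_ct(cbct_hu, m=2.0):
    """Synthetic CT HU for CBCT voxels: membership-weighted sum of the
    clusters' CT centroids, as in the probability-weighted step above."""
    d = np.abs(np.asarray(cbct_hu, dtype=float)[..., None] - cbct_c) + 1e-12
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum(axis=-1, keepdims=True)
    return (u * ct_c).sum(axis=-1)

syn = synthesize_ct([60.0, 700.0])
print(np.round(syn))   # near [40, 900] for this toy data
```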

  16. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. At little extra computational cost, Kirchhoff-type migration can produce multiple outputs that have the same phase but different amplitudes, compared with other migration methods; the ratio of these amplitudes is helpful in computing quantities such as reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally varying velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared with ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued output quantities (first arrivals). In my finite-difference algorithm, an upwind scheme is used to calculate travel times and the Crank-Nicolson scheme is used to calculate amplitudes; interpolation is applied to reduce computation cost. The modeling and migration algorithms require a smooth velocity function, so I also develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
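
The upwind travel-time computation mentioned above can be sketched with a first-order fast-sweeping eikonal solver, a common upwind finite-difference formulation used here as a stand-in for the author's scheme:

```python
import numpy as np

def fast_sweep_eikonal(slowness, h, T0, n_sweeps=3):
    """First-order upwind (fast-sweeping) solver for |grad T| = s on a 2D
    grid: Gauss-Seidel passes in alternating orderings with the standard
    two-neighbor upwind update. A generic sketch, not the author's code."""
    ny, nx = slowness.shape
    T = T0.copy()
    for _ in range(n_sweeps):
        for yr in (range(ny), range(ny - 1, -1, -1)):
            for xr in (range(nx), range(nx - 1, -1, -1)):
                for i in yr:
                    for j in xr:
                        a = min(T[i - 1, j] if i > 0 else np.inf,
                                T[i + 1, j] if i < ny - 1 else np.inf)
                        b = min(T[i, j - 1] if j > 0 else np.inf,
                                T[i, j + 1] if j < nx - 1 else np.inf)
                        if np.isinf(a) and np.isinf(b):
                            continue                  # no upwind data yet
                        hs = h * slowness[i, j]
                        if abs(a - b) >= hs:          # one-sided update
                            t_new = min(a, b) + hs
                        else:                         # two-sided update
                            t_new = 0.5 * (a + b + np.sqrt(2.0 * hs * hs
                                                           - (a - b) ** 2))
                        T[i, j] = min(T[i, j], t_new)
    return T

# Planar source along the left edge in a constant-slowness medium:
# the upwind scheme reproduces T = x * s exactly.
T0 = np.full((12, 16), np.inf)
T0[:, 0] = 0.0
T = fast_sweep_eikonal(np.ones((12, 16)), h=1.0, T0=T0)
print(T[5, 15])   # 15.0
```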

  17. Eccentricity and misalignment effects on the performance of high-pressure annular seals

    NASA Technical Reports Server (NTRS)

    Chen, W. C.; Jackson, E. D.

    1985-01-01

    Annular pressure seals act as powerful hydrostatic bearings and influence the dynamic characteristics of rotating machinery. This work, using existing concentric seal theories, provides a simple approximate method for calculating both seal leakage and the dynamic coefficients of short seals with large eccentricity and/or misalignment of the shaft. Rotation and surface roughness effects are included in the leakage and dynamic force calculations. The leakage calculations for both laminar and turbulent flow are compared with experimental results, and the dynamic coefficients are compared with analytical results. Excellent agreement between the present work and published results has been observed up to an eccentricity ratio of 0.8.

  18. Electron- and positron-impact atomic scattering calculations using propagating exterior complex scaling

    NASA Astrophysics Data System (ADS)

    Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.

    2007-11-01

    Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = -[8 + (r1 - r2)^2]^(-1/2). This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.

  19. Correlation between hippocampal volumes and medial temporal lobe atrophy in patients with Alzheimer's disease

    PubMed Central

    Dhikav, Vikas; Duraiswamy, Sharmila; Anand, Kuljeet Singh

    2017-01-01

    Introduction: The hippocampus undergoes atrophy in patients with Alzheimer's disease (AD). Hippocampal volumes can be calculated by a variety of methods using T1-weighted magnetic resonance imaging (MRI) of the brain, and medial temporal lobe (MTL) atrophy can be rated visually on T1-weighted MRI images. The present study was done to see whether any correlation existed between hippocampal volumes and visual rating scores of the MTL using the Scheltens visual rating method. Materials and Methods: We screened 84 subjects who presented to the Department of Neurology of a tertiary care hospital and enrolled forty subjects meeting the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association criteria. Selected patients underwent brain MRI, and T1-weighted images in a plane perpendicular to the long axis of the hippocampus were obtained. Hippocampal volumes were calculated manually using a standard protocol and were correlated with the Scheltens visual rating scores for the MTL. A total of 32 cognitively normal age-matched subjects were selected to examine the same correlation in healthy subjects. The sensitivity and specificity of both methods were calculated and compared. Results: There was an insignificant correlation between hippocampal volumes and MTL rating scores in the cognitively normal elderly (n = 32; Pearson correlation coefficient = 0.16, P > 0.05). In the AD group, there was a moderately strong correlation between measured hippocampal volumes and MTL rating (Pearson's correlation coefficient = -0.54; P < 0.05), and a moderately strong correlation between hippocampal volume and Mini-Mental Status Examination score. Manual delineation was superior to the visual method (P < 0.05). Conclusions: Good correlation was present between manual hippocampal volume measurements and MTL scores. 
    The sensitivity and specificity of manual measurement of the hippocampus were higher than those of visual rating scores for the MTL in patients with AD. PMID:28298839

  20. Quantitative Assessment of Knee Progression Angle During Gait in Children With Cerebral Palsy.

    PubMed

    Davids, Jon R; Cung, Nina Q; Pomeroy, Robin; Schultz, Brooke; Torburn, Leslie; Kulkarni, Vedant A; Brown, Sean; Bagley, Anita M

    2018-04-01

    Abnormal hip rotation is a common gait deviation in children with cerebral palsy (CP). Clinicians typically assess hip rotation during gait by observing the direction the patella points relative to the path of walking, referred to as the knee progression angle (KPA). Two kinematic methods for calculating the KPA are compared with each other, and video-based qualitative assessment of the KPA is compared with the quantitative methods to determine reliability and validity. The KPA was calculated by both direct and indirect methods for 32 typically developing (TD) children and a convenience cohort of 43 children with hemiplegic-type CP. An additional convenience cohort of 26 children with hemiplegic-type CP was selected for qualitative assessment of the KPA, performed by 3 experienced clinicians, using 3 categories (internal, greater than 10 degrees; neutral, -10 to 10 degrees; external, less than -10 degrees). Root mean square (RMS) analysis comparing the direct and indirect KPAs gave 1.14±0.43 degrees for TD children and 1.75±1.54 degrees for the affected side of children with CP. The difference in RMS between the 2 groups was statistically, but not clinically, significant (P=0.019). The intraclass correlation coefficient revealed excellent agreement between the direct and indirect methods of KPA for TD and CP children (0.996 and 0.992, respectively; P<0.001). For the qualitative assessment of KPA there was complete agreement among all examiners for 17 of 26 cases (65%); direct KPA matched for 49 of 78 observations (63%) and indirect KPA matched for 52 of 78 observations (67%). The difference between the direct and indirect methods for KPA was statistically but not clinically significant, which supports the use of either method based on availability. Video-based qualitative assessment of KPA showed moderate reliability and validity. 
The differences between observed and calculated KPA indicate the need for caution when relying on visual assessments for clinical interpretation, and demonstrate the value of adding KPA calculation to standard kinematic analysis. Level II-diagnostic test.

  1. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
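
The harmonic structure the piecewise method evaluates analytically can be checked numerically on an idealized six-pulse bridge current, for which the line-current harmonics occur at orders h = 6k ± 1 with magnitudes falling as 1/h. The waveform below is a textbook idealization (120° conduction blocks), not the ITER converter model:

```python
import numpy as np

# Idealized six-pulse bridge line current: +1 for 120 degrees, -1 for 120
# degrees per cycle. The FFT recovers the characteristic 6k +/- 1 pattern.
N = 12000                                 # samples per fundamental period
theta = 2 * np.pi * np.arange(N) / N
i_line = np.where((theta > np.pi / 6) & (theta < 5 * np.pi / 6), 1.0,
                  np.where((theta > 7 * np.pi / 6) & (theta < 11 * np.pi / 6),
                           -1.0, 0.0))

c = np.abs(np.fft.rfft(i_line)) / N       # harmonic magnitudes (arbitrary units)
for h in (1, 3, 5, 7, 11, 13):
    print(h, c[h] / c[1])                 # triplens vanish; c[h]/c[1] ~ 1/h
```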

  2. Aerodynamics Characteristics of Multi-Element Airfoils at -90 Degrees Incidence

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.; Schmitz, Fredric H. (Technical Monitor)

    1994-01-01

    A previously developed method has been applied to accurately calculate the viscous flow about airfoils normal to the free-stream flow. This method has special application to the analysis of tilt rotor aircraft in the evaluation of download. In particular, the flow about an XV-15 airfoil with and without deflected leading and trailing edge flaps at -90 degrees incidence is evaluated. The multi-element aspect of the method provides for the evaluation of slotted flap configurations, which may lead to decreased drag. The method solves for turbulent flow at flight Reynolds numbers. The flow about the XV-15 airfoil with and without flap deflections has been calculated and compared with experimental data at a Reynolds number of one million. The comparisons between calculated and measured pressure distributions are very good, thereby verifying the method. The aerodynamic evaluation of multi-element airfoils will be conducted to determine airfoil/flap configurations for reduced airfoil drag. Comparisons between the calculated lift, drag and pitching moment on the airfoil and the airfoil surface pressure will also be presented.

  3. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

The star centroid plays a vital role in celestial navigation. Star images captured during the daytime have a low SNR because of the strong sky background; the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR; the Gaussian method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in the star image, a location method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area and, within that narrowed area, applies a fixed number of interpolations to segment each pixel into sub-pixels. It then exploits the symmetry of the stellar energy distribution to locate the centroid tentatively: assuming the current pixel is the star centroid, it computes the difference between the energy sums on either side of that pixel along a symmetric direction (this paper uses the transverse and longitudinal directions) over an equal step length (chosen to suit the conditions; this paper uses a step length of 9), and takes the position where the minimum difference appears as the centroid in that direction; the other directions are treated in the same way. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired in the near-infrared band at a fixed observation site during the daytime; comparison of the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
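A much-simplified one-dimensional version of the minimum-energy-difference idea, scanning sub-pixel candidate centres and keeping the one whose left and right energy sums over an equal step length differ least, can be sketched as follows. The synthetic Gaussian star and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def synthetic_star(shape, cx, cy, sigma=1.5, amp=100.0):
    # Noise-free Gaussian "star" at a sub-pixel position (cx, cy).
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def weighted_centroid(img):
    # Coarse estimate: intensity-weighted centroid.
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

def min_energy_diff(profile, coarse, half_width=4, n_cand=81, n_sub=8):
    # Scan sub-pixel candidate centres around the coarse estimate and keep
    # the one whose left/right energy sums (over an equal step length on
    # each side) differ least -- the symmetry argument of the abstract.
    grid = np.arange(len(profile))
    offsets = np.linspace(-half_width, half_width, 2 * half_width * n_sub + 1)
    best_c, best_d = coarse, np.inf
    for c in np.linspace(coarse - 0.5, coarse + 0.5, n_cand):
        vals = np.interp(c + offsets, grid, profile)
        diff = abs(vals[offsets < 0].sum() - vals[offsets > 0].sum())
        if diff < best_d:
            best_c, best_d = c, diff
    return best_c

img = synthetic_star((21, 21), 10.3, 9.7)
cx0, cy0 = weighted_centroid(img)
cx = min_energy_diff(img.sum(axis=0), cx0)   # transverse direction
cy = min_energy_diff(img.sum(axis=1), cy0)   # longitudinal direction
```

On this noise-free example both directions recover the true sub-pixel centre to well within the candidate-grid resolution; the paper's multi-step refinement and noise handling are omitted.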

  4. Pair production in low-energy collisions of uranium nuclei beyond the monopole approximation

    NASA Astrophysics Data System (ADS)

    Maltsev, I. A.; Shabaev, V. M.; Tupitsyn, I. I.; Kozhedub, Y. S.; Plunien, G.; Stöhlker, Th.

    2017-10-01

A method for the calculation of electron-positron pair production in low-energy heavy-ion collisions beyond the monopole approximation is presented. The method is based on the numerical solution of the time-dependent Dirac equation with the full two-center potential. The one-electron wave functions are expanded in a finite basis set constructed on a two-dimensional spatial grid. Employing the developed approach, the probabilities of bound-free pair production are calculated for collisions of bare uranium nuclei at energies near the Coulomb barrier. The obtained results are compared with the corresponding values calculated in the monopole approximation.

  5. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Fei; Zhen, Zhao; Liu, Chun

Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. Linear extrapolation-based ultra-short-term solar PV power forecasting approaches therefore depend on obtaining the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) from historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated by a Fourier phase correlation theory (FPCT) method using the frequency-domain information of the sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) under common cloud speeds. Sometimes there will be more than one pulse with similar values when the deformation of the clouds between two consecutive images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDV were still obtained only from the single coordinate of the peak-value pulse. However, estimating the deformation of clouds between two images and its influence on FPCT-based CMDV calculation is extremely complex and difficult, because cloud motion is hard to describe and model. Therefore, to improve accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute-scale solar power forecasting. First, multiple CMDVs are calculated with the FPCT method from consecutive image pairs rotated synchronously through different angles relative to the original images. Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on their density and distance distribution. Third, the influence of the rotation-angle resolution on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick- and thin-cloud conditions, indicate that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.
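The FPCT step described above, in which the displacement appears as the coordinate of the peak of the inverse FFT of the normalised cross-power spectrum, can be sketched in a few lines of numpy. This is a generic phase-correlation sketch under stated assumptions (pure translation, periodic wrap-around), not the authors' IPSI pipeline.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    # Normalised cross-power spectrum; its inverse FFT is (ideally) a single
    # delta pulse whose coordinate is the displacement of img_a w.r.t. img_b.
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12               # keep phase information only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
sky = rng.random((64, 64))                    # stand-in for a sky image
moved = np.roll(sky, (5, -3), axis=(0, 1))    # "cloud" displaced by (5, -3)
dy, dx = phase_correlation_shift(moved, sky)
```

When the second image is a deformed rather than purely shifted copy, the pulse broadens and splits, which is exactly the ambiguity the IPSI rotation-and-vote strategy is designed to resolve.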

  6. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    DOE PAGES

    Wang, Fei; Zhen, Zhao; Liu, Chun; ...

    2017-12-18

Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. Linear extrapolation-based ultra-short-term solar PV power forecasting approaches therefore depend on obtaining the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) from historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated by a Fourier phase correlation theory (FPCT) method using the frequency-domain information of the sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) under common cloud speeds. Sometimes there will be more than one pulse with similar values when the deformation of the clouds between two consecutive images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDV were still obtained only from the single coordinate of the peak-value pulse. However, estimating the deformation of clouds between two images and its influence on FPCT-based CMDV calculation is extremely complex and difficult, because cloud motion is hard to describe and model. Therefore, to improve accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute-scale solar power forecasting. First, multiple CMDVs are calculated with the FPCT method from consecutive image pairs rotated synchronously through different angles relative to the original images. Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on their density and distance distribution. Third, the influence of the rotation-angle resolution on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick- and thin-cloud conditions, indicate that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.

  7. Numerical modeling method on the movement of water flow and suspended solids in two-dimensional sedimentation tanks in the wastewater treatment plant.

    PubMed

    Zeng, Guang-Ming; Jiang, Yi-Min; Qin, Xiao-Sheng; Huang, Guo-He; Li, Jian-Bing

    2003-01-01

Taking the calculation of the velocity and concentration distributions as an example, the paper established a series of governing equations by the vorticity-stream function method and discretized the equations with the finite difference method. After obtaining the velocity field, the paper also calculated the concentration distribution in the sedimentation tank using the two-dimensional concentration transport equation. The validity and feasibility of the numerical method were verified through comparison with experimental data. Furthermore, the paper carried out a tentative exploration into the application of numerical simulation to sedimentation tanks.
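In the vorticity-stream function formulation, the finite-difference discretisation reduces the stream-function update to a discrete Poisson equation, laplacian(psi) = -omega, solved here by Jacobi iteration. This is a minimal sketch on a toy grid with zero boundary values and uniform vorticity, not the paper's sedimentation-tank geometry.

```python
import numpy as np

def solve_stream_function(omega, h=1.0, n_iter=20000):
    # Jacobi iteration for  laplacian(psi) = -omega  with psi = 0 on the
    # boundary; the 5-point stencil is the finite-difference discretisation.
    psi = np.zeros_like(omega)
    for _ in range(n_iter):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  h * h * omega[1:-1, 1:-1])
    return psi

omega = np.ones((20, 20))          # uniform vorticity as a toy example
psi = solve_stream_function(omega)

# Residual of the discrete Poisson equation in the interior (h = 1).
lap = (psi[2:, 1:-1] + psi[:-2, 1:-1] + psi[1:-1, 2:] + psi[1:-1, :-2]
       - 4.0 * psi[1:-1, 1:-1])
residual = np.max(np.abs(lap + omega[1:-1, 1:-1]))
```

The velocity field then follows from the stream function as u = d(psi)/dy and v = -d(psi)/dx, after which a concentration transport equation can be advanced on the same grid.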

  8. [Development of an automated processing method to detect coronary motion for coronary magnetic resonance angiography].

    PubMed

    Asou, Hiroya; Imada, N; Sato, T

    2010-06-20

On coronary MR angiography (CMRA), cardiac motion worsens the image quality. To improve the image quality, detection of cardiac motion, and especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images/R-R interval). The chronological change of the signal was evaluated by Fourier transformation of each pixel of the images. Noise reduction by a subtraction process and an extraction process were then performed. To extract rapidly moving structures such as the coronary arteries, morphological filtering and labeling processes were added. Using these image-processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times were calculated with our method. The calculated coronary static time was shorter than that of the ordinary manual method, and the scan time became about 10% longer than that of the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time based on chronological Fourier transformation has the potential to improve the image quality of CMRA with easy processing.
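The core of the described processing, a temporal Fourier transform of each pixel in which high temporal frequencies mark moving structures, can be sketched as follows. The frame stack and band limits are illustrative assumptions, and the subtraction, morphological filtering, and labeling steps of the paper are omitted.

```python
import numpy as np

def motion_map(cine, low_cut=4):
    # cine: (T, H, W) stack of frames over one cardiac cycle.  The temporal
    # FFT of each pixel concentrates moving structures in the higher
    # temporal-frequency bins; summing those bins gives a motion image.
    spectrum = np.abs(np.fft.rfft(cine, axis=0))
    return spectrum[low_cut:].sum(axis=0)

# Toy cine: a static background plus one small patch oscillating over
# 80 frames (mimicking 80 images per R-R interval).
t = np.arange(80)
cine = np.ones((80, 16, 16))
cine[:, 4:6, 4:6] += np.sin(2 * np.pi * 8 * t / 80)[:, None, None]
mmap = motion_map(cine)    # large only where the patch oscillates
```

Thresholding such a map and labeling the connected components is one plausible route to isolating the coronary region whose frame-to-frame stability defines the static time.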

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytical Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy (SBRT) plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using 10-MV photon beams with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and then were re-calculated using AXB with the same MUs and MLC files. The four types of plans for the different algorithms and beam energies were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans yielded comparable dosimetric results with different dose calculation algorithms as well as delivery beam modes. Even though the NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP than the AAA plans.
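For context, TCP values like those compared above are commonly obtained from a Poisson model with linear-quadratic (LQ) cell survival. The sketch below uses that generic formula with made-up parameter values; alpha, beta, and the clonogen number are assumptions, not the study's model.

```python
import numpy as np

def tcp_poisson_lq(n_fractions, dose_per_fraction,
                   alpha=0.15, beta=0.05, clonogens=1e7):
    # Poisson TCP with LQ cell survival: SF = exp(-n*(alpha*d + beta*d^2)),
    # TCP = exp(-N0 * SF).  Parameter values here are illustrative only.
    lq_exponent = n_fractions * (alpha * dose_per_fraction
                                 + beta * dose_per_fraction ** 2)
    surviving = clonogens * np.exp(-lq_exponent)
    return np.exp(-surviving)

# The study's prescription: 42.7 Gy in 7 fractions (6.1 Gy/fraction).
tcp = tcp_poisson_lq(7, 42.7 / 7)
```

Because TCP depends exponentially on the delivered dose, even the small systematic dose differences between AXB and AAA can translate into percent-level TCP differences, which is consistent with the abstract's finding.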

  10. Comparative Study on Prediction Effects of Short Fatigue Crack Propagation Rate by Two Different Calculation Methods

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao

    2017-05-01

To describe the complicated nonlinear process of fatigue short-crack evolution, and in particular the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated from a replica fatigue short-crack test on nine smooth funnel-shaped specimens and observation of the replica films, following the effective short fatigue crack principle. Owing to the fast decay and nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic searching and global optimization of genetic algorithms, a genetic wavelet neural network can capture the implicit, complex nonlinear relationship when multiple influencing factors are considered together. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared using the genetic wavelet neural network. The simulation results show that the genetic wavelet neural network is a rational and usable method for studying the evolution of the fatigue short-crack propagation rate. Meanwhile, a traditional data-fitting method for a short-crack growth model is also used to fit the test data; it too is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the predictions of the two methods is interpreted.

  11. Breast volume assessment: comparing five different techniques.

    PubMed

    Bulstrode, N; Bellamy, E; Shrotria, S

    2001-04-01

Breast volume assessment is not routinely performed pre-operatively because as yet there is no accepted technique. A variety of methods have been published, but this is the first study to compare them. We compared volume measurements obtained from mammograms (previously validated against mastectomy specimens) with estimates of volume obtained from four other techniques: thermoplastic moulding, magnetic resonance imaging, Archimedes' principle and anatomical measurements. We also assessed the acceptability of each method to the patient. Measurements were performed on 10 women, which produced results for 20 breasts. We were able to calculate regression lines relating the volume measurements obtained from mammography to the other four methods: (1) magnetic resonance imaging (MRI), 379+(0.75 MRI) [r=0.48]; (2) thermoplastic moulding, 132+(1.46 Thermoplastic moulding) [r=0.82]; (3) anatomical measurements, 168+(1.55 Anatomical measurements) [r=0.83]; (4) Archimedes' principle, 359+(0.6 Archimedes principle) [r=0.61]; all units in cc. The regression curves for the different techniques are variable, and it is difficult to compare results reliably. A standard method of volume measurement should be used when comparing volumes before and after intervention or between individual patients; it is unreliable to compare volume measurements made with different methods. Calculating breast volume from mammography has previously been compared with mastectomy samples and shown to be reasonably accurate. However, we feel thermoplastic moulding shows promise and should be investigated further, as it gives not only a volume assessment but also a three-dimensional impression of the breast shape, which may be valuable in assessing cosmesis following breast-conserving surgery.
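The published regression lines can be applied directly to convert an estimate from any of the four techniques into a mammography-equivalent volume. The dictionary and function names below are illustrative; the coefficients are those quoted in the abstract.

```python
# Regression lines from the abstract mapping each technique's estimate to a
# mammography-equivalent breast volume: (intercept, slope), all units in cc.
REGRESSIONS = {
    "mri":        (379.0, 0.75),   # r = 0.48
    "moulding":   (132.0, 1.46),   # r = 0.82
    "anatomical": (168.0, 1.55),   # r = 0.83
    "archimedes": (359.0, 0.60),   # r = 0.61
}

def mammography_equivalent(method, volume_cc):
    # Convert one technique's volume estimate to the mammographic scale.
    intercept, slope = REGRESSIONS[method]
    return intercept + slope * volume_cc
```

The spread of intercepts and slopes is itself the abstract's point: a 400 cc MRI estimate and a 400 cc Archimedes estimate map to different mammography-equivalent volumes, so raw values from different methods should not be compared.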

  12. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans(sub ct) configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both the standard and improved empirical Amber-like force fields and available low temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) The excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of a very high quality; (2) For most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods, when compared with experimental data. However, the improved empirical force field yields better agreement with the experimental frequencies than a standard AMBER-type force field; (3) The empirical force field in particular overestimates anharmonic couplings for the amide-2 mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) Disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, and thus the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) Both the empirical and ab initio CC-VSCF calculations predict a negligible anharmonic coupling between the amide-1 and other internal modes. The implication of this is that the intramolecular energy flow between the amide-1 and the other internal modes may be smaller than anticipated. 
These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  13. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.

  14. Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics.

    PubMed

    Boedo, J A; Rudakov, D L

    2017-03-01

We present a method to calculate the ion saturation current, I_sat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique and we compare that to a direct measurement of I_sat. It is noted that the I_sat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating T_e. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
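The harmonic-ratio idea can be illustrated on a synthetic exponential probe characteristic: for a sinusoidal sweep about the floating potential, the n-th current harmonic has amplitude 2*I_sat*I_n(V0/T_e), where I_n is a modified Bessel function, so the ratio of two harmonics fixes V0/T_e and the first harmonic then fixes I_sat. This is a numpy-only sketch with made-up plasma values; the real probe current and hardware details are more involved than this idealised exponential.

```python
import numpy as np

def bessel_i(n, x, m=20001):
    # Modified Bessel function I_n(x) via its integral representation
    # I_n(x) = (1/pi) * integral_0^pi exp(x cos t) cos(n t) dt
    # (a numpy-only stand-in for scipy.special.iv).
    th = (np.arange(m) + 0.5) * np.pi / m
    return np.mean(np.exp(x * np.cos(th)) * np.cos(n * th))

# Synthetic probe current: I(t) = Isat * exp(V0 cos(wt) / Te), using the
# identity exp(x cos t) = I0(x) + 2*sum_n I_n(x) cos(n t).
Isat_true, Te, V0 = 0.8, 20.0, 10.0        # A, eV, V (made-up values)
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
current = Isat_true * np.exp((V0 / Te) * np.cos(t))

c = np.fft.rfft(current) / t.size
A1, A2 = 2.0 * np.abs(c[1]), 2.0 * np.abs(c[2])   # harmonic amplitudes

# Bisection for x = V0/Te from the harmonic ratio I_2(x)/I_1(x) = A2/A1
# (the ratio is monotonically increasing in x).
lo, hi = 1e-6, 50.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if bessel_i(2, mid) / bessel_i(1, mid) < A2 / A1:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)
Isat_est = A1 / (2.0 * bessel_i(1, x))
```

Note that I_sat drops out of the harmonic ratio entirely, which is the abstract's point: the electron temperature never needs to be computed explicitly, and the tip stays near the floating potential where it draws little power.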

  15. Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boedo, J. A.; Rudakov, D. L.

Here we present a method to calculate the ion saturation current, I_sat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique, and we compare that to a direct measurement of I_sat. It is noted that the I_sat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating T_e. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.

  16. Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics

    DOE PAGES

    Boedo, J. A.; Rudakov, D. L.

    2017-03-20

Here we present a method to calculate the ion saturation current, I_sat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique, and we compare that to a direct measurement of I_sat. It is noted that the I_sat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating T_e. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.

  17. Neutral and charged excitations in carbon fullerenes from first-principles many-body theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiago, Murilo L; Kent, Paul R; Hood, Randolph Q.

    2008-01-01

We use first-principles many-body theories to investigate the low energy excitations of the carbon fullerenes C_20, C_24, C_50, C_60, C_70, and C_80. Properties are calculated via the GW-Bethe-Salpeter Equation (GW-BSE) and diffusion Quantum Monte Carlo (QMC) methods. At a lower level of theoretical complexity, we also calculate these properties using static and time-dependent density-functional theory. We critically compare these theories and assess their accuracy against available experimental data. The first ionization potentials are consistently well reproduced and are similar for all the fullerenes and methods studied. The electron affinities and first triplet excitation energies show substantial method and geometry dependence. Compared to available experiment, GW-BSE underestimates excitation energies by approximately 0.3 eV while QMC overestimates them by approximately 0.5 eV. We show the GW-BSE errors result primarily from a systematic overestimation of the electron affinities, while the QMC errors likely result from nodal error in both ground and excited state calculations.

  18. Characterization of cardiac quiescence from retrospective cardiac computed tomography using a correlation-based phase-to-phase deviation measure

    PubMed Central

    Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.; Auffermann, William F.; Henry, Travis S.; Khosa, Faisal; Coy, Adam M.; Tridandapani, Srini

    2015-01-01

Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (P_AGG) and IVS (P_IVS) deviation signal using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (P_CT). 
The one exception was the RCA, which improved for P_AGG for 18 of the 20 subjects when compared to P_CT (P_CT = 2.48; P_AGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:25652511
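One simple realisation of a correlation-based phase-to-phase deviation, certainly cruder than the paper's measure, is 1 minus the normalised cross-correlation of consecutive phase images, with quiescence where the deviation is minimal. The toy "cardiac cycle" below and all names are illustrative assumptions.

```python
import numpy as np

def phase_to_phase_deviation(phases):
    # 1 - normalised cross-correlation between consecutive phase images;
    # small values mean little motion between neighbouring phases.
    devs = []
    for a, b in zip(phases[:-1], phases[1:]):
        a0, b0 = a - a.mean(), b - b.mean()
        r = (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())
        devs.append(1.0 - r)
    return np.array(devs)

# Toy cycle: a bright blob that moves, pauses (quiescence), then moves again.
base = np.zeros((32, 32))
base[8:12, 8:12] = 1.0
positions = [0, 2, 4, 6, 6, 6, 6, 8, 10]        # pause at offset 6
phases = [np.roll(base, p, axis=1) for p in positions]
devs = phase_to_phase_deviation(phases)
quiescent = int(np.argmin(devs))   # index of the quietest consecutive pair
```

Applied to a segmented vessel region rather than the whole frame, the same signal can be computed per structure, which is how the IVS motion can serve as a proxy for coronary quiescence.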

  19. Characterization of cardiac quiescence from retrospective cardiac computed tomography using a correlation-based phase-to-phase deviation measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.

    2015-02-15

Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (P_AGG) and IVS (P_IVS) deviation signal using the proposed methods was comparable to the quiescent phases calculated by the CT scanner (P_CT). 
The one exception was the RCA, which improved for P_AGG for 18 of the 20 subjects when compared to P_CT (P_CT = 2.48; P_AGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality.

  20. Quantitative assessment of landslide risk in design practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, A.M.; Darevskii, V.E.

    1995-03-01

    Developments at the State Institute for River Transport Protection aimed at the practical implementation of an engineering method, recommended by regulatory documents for the calculation of landslide phenomena, are described, and the capabilities of the accompanying computer software are demonstrated. Calculation results are compared with test data and with problems solved in the new developments.

  1. The Hildebrand solubility parameters of ionic liquids-part 2.

    PubMed

    Marciniak, Andrzej

    2011-01-01

    The Hildebrand solubility parameters have been calculated for eight ionic liquids. Retention data from the inverse gas chromatography measurements of the activity coefficients at infinite dilution were used for the calculation. From the solubility parameters, the enthalpies of vaporization of ionic liquids were estimated. Results are compared with solubility parameters estimated by different methods.

  2. Mean-field approximation for spacing distribution functions in classical systems

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
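    For the simplest nearest-neighbor case, the β = 1 Wigner surmise referred to above has a closed form; a minimal numerical check (an illustration of this benchmark distribution only, not the paper's generalized p(n)(s) or its mean-field Langevin equations) verifies that it is normalized with unit mean spacing:

```python
import numpy as np

def wigner_surmise(s):
    # Nearest-neighbor spacing distribution for beta = 1 level repulsion:
    # p(s) = (pi/2) s exp(-pi s^2 / 4), normalized with unit mean spacing.
    return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

# Verify normalization and mean spacing by simple numerical quadrature.
s = np.linspace(0.0, 10.0, 200001)
ds = s[1] - s[0]
p = wigner_surmise(s)
norm = float(np.sum(p) * ds)
mean = float(np.sum(s * p) * ds)
print(round(norm, 6), round(mean, 6))  # → 1.0 1.0
```

    The same quadrature check is a useful sanity test for any approximate p(n)(s) produced by an interval-type approximation.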

  3. DFT analysis on the molecular structure, vibrational and electronic spectra of 2-(cyclohexylamino)ethanesulfonic acid.

    PubMed

    Renuga Devi, T S; Sharmi kumar, J; Ramkumaar, G R

    2015-02-25

    The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm(-1) and 4000-50 cm(-1), respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation consistent-polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized and the structural and vibrational parameters were determined on this basis. The complete assignments were performed based on the potential energy distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated using both the Hartree-Fock and density functional methods with the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. (1)H and (13)C NMR chemical shifts of the molecule were calculated using the Gauge Including Atomic Orbital (GIAO) method and compared with experimental results. The stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using Natural Bond Orbital (NBO) analysis. The first-order hyperpolarizability (β) and Molecular Electrostatic Potential (MEP) of the molecule were computed using DFT calculations. Electron-density-based local reactivity descriptors such as the Fukui functions were calculated to identify the chemically reactive sites in the molecule. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Choosing the best method to estimate the energy density of a population using food purchase data.

    PubMed

    Wrieden, W L; Armstrong, J; Anderson, A S; Sherriff, A; Barton, K L

    2015-04-01

    Energy density (ED) is a measure of the energy content of a food component or diet relative to a standard unit of weight. Widespread variation in ED assessment methodologies exists. The present study aimed to explore the feasibility of calculating the ED of the Scottish diet using UK food purchase survey data and to identify the most appropriate method for calculating ED for use in the development of a Scottish Dietary Goal that captures any socioeconomic differences. Energy density was calculated using five different methods [food; food and milk; food, milk and energy-containing (non-alcoholic) beverages; food, milk and all non-alcoholic beverages; and all food and beverages]. ED of the Scottish diet was estimated for each of the ED methods and data were examined by deprivation category. Mean ED varied from 409 to 847 kJ 100 g(-1) depending on the method used. ED values calculated from food (847 kJ 100 g(-1)) and food and milk (718 kJ 100 g(-1)) were most comparable to other published data, with the latter being a more accurate reflection of all food consumed. For these two methods, there was a significant gradient between the most and least deprived quintiles (892-807 and 737-696 kJ 100 g(-1) for food and food and milk, respectively). Because the World Cancer Research Fund recommendations are based on ED from food and milk, it was considered prudent to use this method for policy purposes and for future monitoring of the Scottish diet to ensure consistency of reporting and comparability with other published studies. © 2014 The British Dietetic Association Ltd.
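    The five ED methods differ only in which purchase categories enter the numerator and denominator; a short sketch (with hypothetical purchase records, not the survey data) makes the dependence on the inclusion rule explicit:

```python
# Hypothetical purchase records: (category, energy in kJ, weight in g).
purchases = [
    ("food", 6000, 700),
    ("milk", 800, 300),
    ("energy_beverage", 400, 500),
    ("other_beverage", 0, 400),
]

def energy_density(records, include):
    """ED in kJ per 100 g over the included purchase categories."""
    energy = sum(e for cat, e, w in records if cat in include)
    weight = sum(w for cat, e, w in records if cat in include)
    return 100.0 * energy / weight

food_only = energy_density(purchases, {"food"})
food_milk = energy_density(purchases, {"food", "milk"})
print(round(food_only, 1), round(food_milk, 1))  # → 857.1 680.0
```

    Including low-energy, high-weight items such as beverages dilutes the denominator, which is why the choice of inclusion rule changes the reported ED so strongly.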

  5. Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.

    PubMed

    Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B

    2015-01-02

    Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner. Determination of peak strains caused by loading is central to interpreting results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface of a small number of sacrificed animals. Strain is measured as the limb is loaded to a range of physiological loads known to be anabolic to bone. The load-strain relationship determined by this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain gauging very small bones which is subject to variability in placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at ulnar midshaft using three different methods to estimate strain: (A) Nominal strain values based solely on load-strain calibration; (B) Strains calculated from load-strain calibration, but scaled for differences in mid-shaft cross-sectional geometry among animals; and (C) An alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
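    Method (C) rests on classical beam theory; a toy sketch of the idea (all symbols and numerical values below are illustrative assumptions, not the study's data) combines the axial and bending contributions to the peak periosteal strain:

```python
def peak_strain(F, A, E, I, c, e):
    """Peak compressive strain at the periosteal surface from beam theory:
    axial term F/(A*E) plus bending term (F*e)*c/(E*I), where e is the
    eccentricity of the load line relative to the section centroid."""
    axial = F / (A * E)
    bending = (F * e) * c / (E * I)
    return axial + bending

# Illustrative (hypothetical) mid-shaft values for a mouse ulna:
F = 2.0        # applied axial load, N
A = 0.35e-6    # cross-sectional area, m^2 (0.35 mm^2)
E = 20e9       # elastic modulus, Pa
I = 0.015e-12  # second moment of area, m^4 (0.015 mm^4)
c = 0.6e-3     # centroid-to-periosteal-surface distance, m
e = 0.5e-3     # load-line eccentricity, m

strain = peak_strain(F, A, E, I, c, e)
print(round(strain * 1e6))  # microstrain → 2286
```

    The animal-specific inputs A, I, and c are exactly what the image-based approach extracts from each mouse's cross-sectional geometry, removing the need for a gauge-based calibration.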

  6. Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital.

    PubMed

    Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud

    2015-05-17

    Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. Activity-based costing (ABC) is a newer and more effective cost system. This study aimed to compare the ABC and TCS methods in calculating the unit cost of medical services and to assess the applicability of ABC in Kashani Hospital, Shahrekord City, Iran. This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data from the accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of the cost centers were assigned to activities using related cost factors. Then the costs of activities were assigned to cost objects using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with that obtained from the TCS. The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. The unit cost of medical services, the cost price of an occupied bed per day, and the cost per outpatient service were calculated. The total unit costs by ABC and TCS were 187.95 and 137.70 USD, respectively, i.e. 50.34 USD more by the ABC method. The ABC method provided more accurate information on the major cost components. By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department.
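    The two-phase ABC allocation described above can be sketched in a few lines (the cost centers, shares, and driver counts below are hypothetical, not the hospital's data):

```python
# Hypothetical two-stage ABC allocation: resource costs -> activities -> services.
resource_costs = {"wage": 50000.0, "equipment": 20000.0, "overhead": 10000.0}

# Stage 1: fraction of each resource consumed by each activity (cost factors).
activity_shares = {
    "admission":  {"wage": 0.30, "equipment": 0.10, "overhead": 0.40},
    "laboratory": {"wage": 0.70, "equipment": 0.90, "overhead": 0.60},
}

# Stage 2: cost drivers, e.g. the number of times each activity is performed.
driver_counts = {"admission": 1000, "laboratory": 500}

activity_cost = {
    a: sum(resource_costs[r] * share for r, share in shares.items())
    for a, shares in activity_shares.items()
}
unit_cost = {a: activity_cost[a] / driver_counts[a] for a in activity_cost}
print({a: round(c, 2) for a, c in unit_cost.items()})
# → {'admission': 21.0, 'laboratory': 118.0}
```

    A TCS, by contrast, would spread the same resource pool over services using a single volume measure, which is the source of the distortion the study quantifies.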

  7. Hip joint center localisation: A biomechanical application to hip arthroplasty population

    PubMed Central

    Bouffard, Vicky; Begon, Mickael; Champagne, Annick; Farhadnia, Payam; Vendittoli, Pascal-André; Lavigne, Martin; Prince, François

    2012-01-01

    AIM: To determine hip joint center (HJC) location in a hip arthroplasty population, comparing predictive and functional approaches with radiographic measurements. METHODS: The distance between the HJC and the mid-pelvis was calculated and compared between the three approaches. The localisation error of the predictive and functional approaches was compared using the radiographic measurements as the reference. The operated leg was compared to the non-operated leg. RESULTS: A significant difference was found in the distance between the HJC and the mid-pelvis when comparing the predictive and functional methods. The functional method leads to smaller errors. A statistically significant difference was found in the localisation error between the predictive and functional methods; the functional method is twice as precise. CONCLUSION: Being more individualized, the functional method improves HJC localisation and should be used in three-dimensional gait analysis. PMID:22919569

  8. Effect Size Calculations and Single Subject Designs

    ERIC Educational Resources Information Center

    Olive, Melissa L.; Smith, Benjamin W.

    2005-01-01

    This study compared visual analyses with five alternative methods for assessing the magnitude of effect in single subject designs. Each method was successful in detecting an intervention effect. When rank ordered, each method was consistent in identifying the participants with the largest effect. We recommend the use of the standard mean difference…
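    One common effect-size variant for single subject designs, the standard mean difference, scales the phase-mean change by the baseline-phase standard deviation; a minimal sketch with hypothetical phase data:

```python
from statistics import mean, stdev

def smd(baseline, intervention):
    """Standard mean difference: change in phase means divided by the
    baseline-phase (sample) standard deviation."""
    return (mean(intervention) - mean(baseline)) / stdev(baseline)

baseline = [2, 3, 2, 4, 3]      # hypothetical baseline-phase observations
intervention = [7, 8, 6, 9, 8]  # hypothetical intervention-phase observations
print(round(smd(baseline, intervention), 2))  # → 5.74
```

    Large values are typical here because single-subject baselines often have very low variability, one reason such effect sizes are interpreted differently from group-design d statistics.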

  9. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    PubMed

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
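    The contrast between the empirical (indicator) estimator and the Rao-Blackwell estimator can be illustrated on a toy two-state system with harmonic end states (a sketch of the general idea only, not the GSLD implementation; all parameters are hypothetical):

```python
import math, random

random.seed(1)
beta = 1.0
k = [1.0, 4.0]    # force constants of the two harmonic end states
mu = [0.0, 0.5]   # their equilibrium positions

def cond_prob(x):
    # p(lambda | x): conditional probability of each end state at fixed x.
    w = [math.exp(-0.5 * beta * k[l] * (x - mu[l]) ** 2) for l in (0, 1)]
    return [w[0] / (w[0] + w[1]), w[1] / (w[0] + w[1])]

n, lam = 100_000, 0
counts = [0, 0]    # indicator (empirical) estimator of p(lambda)
rb = [0.0, 0.0]    # Rao-Blackwell estimator: running sum of p(lambda | x)
for _ in range(n):
    # Gibbs sweep: x | lambda is Gaussian, then lambda | x is a weighted coin.
    x = random.gauss(mu[lam], 1.0 / math.sqrt(beta * k[lam]))
    p = cond_prob(x)
    lam = 0 if random.random() < p[0] else 1
    counts[lam] += 1
    rb[0] += p[0]
    rb[1] += p[1]

dF_naive = -math.log(counts[1] / counts[0]) / beta
dF_rb = -math.log(rb[1] / rb[0]) / beta
dF_exact = 0.5 * math.log(k[1] / k[0]) / beta  # from the harmonic partition functions
print(abs(dF_rb - dF_exact) < 0.05, abs(dF_naive - dF_exact) < 0.05)
```

    Averaging the conditional probabilities instead of the visit indicators is precisely the Rao-Blackwellization step; its variance is never larger than that of the indicator average over the same samples.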

  10. Structure-activity relationship of the ionic cocrystal: 5-amino-2-naphthalene sulfonate·ammonium ions for pharmaceutical applications

    NASA Astrophysics Data System (ADS)

    Sangeetha, M.; Mathammal, R.

    2018-02-01

    The ionic cocrystals of 5-amino-2-naphthalene sulfonate · ammonium ions (ANSA·NH4+) were grown by the slow evaporation method and examined in detail for pharmaceutical applications. The crystal structure and intermolecular interactions were studied from single-crystal X-ray diffraction analysis and the Hirshfeld surfaces. The 2D fingerprint plots displayed the inter-contacts possible in the ionic crystal. A computational DFT method was employed to determine the structural, physical and chemical properties. The molecular geometries obtained from the X-ray studies were compared with the optimized geometrical parameters calculated using the DFT/6-31+G(d,p) method. The band gap energy calculated from the UV-Visible spectral analysis and the HOMO-LUMO energy gap were compared. The theoretical UV-Visible calculations helped in determining the type of electronic transition taking place in the title molecule. The maximum absorption bands and the transitions involved in the molecule indicated the possible drug reactions. Non-linear optical properties were characterized experimentally from SHG efficiency measurements, and the NLO parameters were also calculated from the optimized structure. The reactive sites within the molecule are detailed from the MEP surface maps. The molecular docking studies evidence the structure-activity of the ionic cocrystal as an anti-cancer drug candidate.

  11. A Calculation Method of Electric Distance and Subarea Division Application Based on Transmission Impedance

    NASA Astrophysics Data System (ADS)

    Fang, G. J.; Bao, H.

    2017-12-01

    The widely used method of calculating electric distances is the sensitivity method. The sensitivity matrix is the result of linearization and rests on the hypothesis that active and reactive power are decoupled, so it is inaccurate. In addition, it takes the ratio of two partial derivatives as the relationship between two dependent variables, so it lacks physical meaning. This paper presents a new method for calculating electrical distance, namely the transmission impedance method. It forms power supply paths based on power flow tracing, then establishes generalized branches to calculate transmission impedances. In this paper, the target of power flow tracing is the complex power S instead of the reactive power Q: Q itself has no direction, and the grid delivers complex power, so S contains more electrical information than Q. By describing the power transmission relationship of each branch and drawing block diagrams in both the forward and reverse directions, it can be found that the numerators of the feedback parts of the two block diagrams are all transmission impedances. To ensure the distance is a scalar, the absolute value of the transmission impedance is defined as the electrical distance. Dividing the network according to these electric distances and comparing with the results of the sensitivity method shows that the transmission impedance method adapts better to dynamic changes of the system and reaches a reasonable subarea division scheme.

  12. Evaluation of methods for calculating maximum allowable standing height in amputees competing in Paralympic athletics.

    PubMed

    Connick, M J; Beckman, E; Ibusuki, T; Malone, L; Tweedy, S M

    2016-11-01

    The International Paralympic Committee has a maximum allowable standing height (MASH) rule that limits stature to a pre-trauma estimation. The MASH rule reduces the probability that bilateral lower limb amputees use disproportionately long prostheses in competition. Although there are several methods for estimating stature, the validity of these methods has not been compared. To identify the most appropriate method for the MASH rule, this study compared the criterion validity of estimations resulting from the current method, the Contini method, and four Canda methods (Canda-1, Canda-2, Canda-3, and Canda-4). Stature, ulna length, demispan, sitting height, thigh length, upper arm length, and forearm length measurements in 31 males and 30 females were used to calculate the respective estimation for each method. Results showed that Canda-1 (based on four anthropometric variables) produced the smallest error and best fitted the data in males and females. The current method was associated with the largest error of the methods tested because it increasingly overestimated height in people of smaller stature. The results suggest that the set of Canda equations provides a more valid MASH estimation in people with a range of upper limb and bilateral lower limb amputations compared with the current method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    PubMed

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling of biomedical models of human bone from computerized tomography (CT) images: one is based on a triangular mesh, and the other, more popular in practice, is based on a parametric surface model. The outlines and modeling procedures of the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation was conducted. Numerical results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated in relation to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is presented. The parametric surface based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular mesh based method is more robust and efficient.

  14. A new method to measure electron density and effective atomic number using dual-energy CT images

    NASA Astrophysics Data System (ADS)

    Ramos Garcia, Luis Isaac; Pérez Azorin, José Fernando; Almansa, Julio F.

    2016-01-01

    The purpose of this work is to present a new method to extract the electron density (ρ_e) and the effective atomic number (Z_eff) from dual-energy CT images, based on a Karhunen-Loeve expansion (KLE) of the atomic cross section per electron. This method was used to calibrate a Siemens Definition CT using the CIRS phantom. The electron density and effective atomic number predicted using 80 kVp and 140 kVp images were compared with a calibration phantom and an independent set of samples. The mean absolute deviations between the theoretical and calculated values for all the samples were 1.7% ± 0.1% for ρ_e and 4.1% ± 0.3% for Z_eff. Finally, these results were compared with another stoichiometric method. The application of the KLE to represent the atomic cross section per electron is a promising method for calculating ρ_e and Z_eff using dual-energy CT images.

  15. Equation of motion coupled cluster methods for electron attachment and ionization potential in polyacenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Kowalski, Karol; Jarrell, Mark

    2015-11-05

    Polyacenes, polycyclic aromatic hydrocarbons composed of fused benzene rings, have attracted considerable attention due to their use in organic optoelectronic materials. Key to the understanding and design of new functional materials is an understanding of their excited state properties, starting with their electron affinity (EA) and ionization potential (IP). We have developed a highly accurate and computationally efficient EA/IP equation of motion coupled cluster singles and doubles (EA/IP-EOMCCSD) method that is capable of treating large systems and large basis sets. In this study we employ the EA/IP-EOMCCSD method to calculate the electron affinity and ionization potential of naphthalene, anthracene, tetracene, pentacene, hexacene and heptacene. We have compared our results with previous theoretical studies and experimental data. Our EA/IP results are in very good agreement with experiment and, compared with the other theoretical investigations, represent the most accurate calculations to date.

  16. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source.

    PubMed

    White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte

    2014-06-01

    The recently updated guidelines for dosimetry in brachytherapy in TG-186 recommend the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings of the water-based approach in TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast tissue is heterogeneous in density and composition. Dosimetric calculations for seven APBI patients treated with the Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models, with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be represented in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries; these techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43-based dose distributions and evaluated using dose ratio maps and DVH metrics. Changes in skin and PTV dose were highlighted. All simulated heterogeneous models showed a reduced dose in the DVH metrics that depends on the method of dose reporting and the patient geometry. Based on a prescription dose of 34 Gy, the average D90 to the PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result.
    Peak skin dose was also reduced by 10%-15% due to the absence of backscatter, which is not accounted for in TG-43. The balloon applicator also contributed to the reduced dose. Other ROIs showed differences depending on the method of dose reporting. TG-186-based calculations produce results that are different from TG-43 for the Axxent source, and the differences depend strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. The effects of tissue heterogeneities, the applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low energy brachytherapy sources.

  17. [Evaluation of a simplified index (spectral entropy) about sleep state of electrocardiogram recorded by a simplified polygraph, MemCalc-Makin2].

    PubMed

    Ohisa, Noriko; Ogawa, Hiromasa; Murayama, Nobuki; Yoshida, Katsumi

    2010-02-01

    Polysomnography (PSG) is the gold standard for the diagnosis of sleep apnea hypopnea syndrome (SAHS), but analyzing PSG takes time and PSG cannot be performed repeatedly because of the effort and cost involved. Therefore, simplified sleep respiratory disorder indices that reflect the PSG results are needed. The MemCalc method, a combination of the maximum entropy method for spectral analysis and the non-linear least squares method for fitting analysis (Makin2, Suwa Trust, Tokyo, Japan), has recently been developed. Spectral entropy derived by the MemCalc method may be useful for expressing the trend of time-series behavior. Spectral entropy of the ECG calculated with the MemCalc method was evaluated by comparison with PSG results. For obstructive SAHS (OSAHS) patients (n = 79) and control volunteers (n = 7), the ECG was recorded using MemCalc-Makin2 (GMS) together with PSG recording using Alice IV (Respironics) from 20:00 to 6:00. Spectral entropy of the ECG, calculated every 2 seconds using the MemCalc method, was compared to sleep stages analyzed manually from the PSG recordings. Spectral entropy values were significantly increased in the OSAHS group compared to the controls (-0.473 vs. -0.418, p < 0.05). For an entropy cutoff level of -0.423, sensitivity and specificity for OSAHS were 86.1% and 71.4%, respectively, resulting in a receiver operating characteristic curve with an area under the curve of 0.837. The absolute value of entropy had an inverse correlation with stage 3 sleep. Spectral entropy calculated with the MemCalc method may be a useful index for evaluating the quality of sleep.
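    Spectral entropy itself is straightforward to compute from any power spectrum; a generic sketch (using an FFT periodogram rather than the MemCalc maximum entropy spectrum, with synthetic signals, and a normalized definition rather than the paper's) shows the regular-versus-irregular contrast the index exploits:

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]
    (0 = single dominant frequency, 1 = flat, noise-like spectrum). This is
    a generic variant for illustration, not the exact MemCalc definition."""
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(power.size))

fs = 4.0                                 # Hz, e.g. a resampled RR-interval series
t = np.arange(0, 60, 1 / fs)
regular = np.sin(2 * np.pi * 0.25 * t)   # regular, rhythmic signal
irregular = np.random.default_rng(0).standard_normal(t.size)
print(spectral_entropy(regular) < spectral_entropy(irregular))  # → True
```

    Disordered breathing fragments the cardiac rhythm, flattening the spectrum and pushing the entropy toward its noisy extreme, which is what the cutoff-based classification relies on.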

  18. Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head

    NASA Astrophysics Data System (ADS)

    Xiong, Zhenyu; Vijayan, Sarath; Rana, Vijay; Jain, Amit; Rudin, Stephen; Bednarek, Daniel R.

    2016-03-01

    The aim of this work is to develop a method to calculate lens dose for fluoroscopically-guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the dose to the lens of the eye for the projection geometry and exposure parameters used in these procedures. This information is provided by a digital CAN bus on the Toshiba Infinix C-Arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra on this machine were simulated using BEAMnrc. These spectra were compared to those determined by SpekCalc and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom and compared the surface dose distribution with that measured with Gafchromic film, and also for an SK150 head phantom and compared the lens dose with that measured with an ionization chamber. Both methods demonstrated good agreement. Organ dose calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with that calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp values. The total lens dose is then calculated by summing the values over all beam projections and can be included in the DTS report at the end of the procedure.
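    The final summation step can be sketched simply (the lookup values and exposure list below are hypothetical placeholders, not the actual normalized-dose library):

```python
# Hypothetical lookup of normalized lens dose (mGy per mAs) indexed by
# (gantry angle in degrees, kVp); a real library would be far finer-grained.
norm_dose = {
    (0, 80): 0.012, (0, 100): 0.020,
    (30, 80): 0.009, (30, 100): 0.015,
}

def total_lens_dose(exposures):
    """Sum the per-projection contributions: normalized dose x mAs."""
    return sum(norm_dose[(angle, kvp)] * mas for angle, kvp, mas in exposures)

# One hypothetical procedure log: (angle, kVp, mAs) per acquisition run.
procedure = [(0, 80, 50.0), (30, 100, 120.0), (0, 100, 10.0)]
print(round(total_lens_dose(procedure), 2))  # → 2.6 (mGy)
```

    Because each projection's contribution is precomputed, the running total can be accumulated in real time from the logged exposure parameters, which is what allows it to appear on the end-of-procedure report.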

  19. Probing Actinide Electronic Structure through Pu Cluster Calculations

    DOE PAGES

    Ryzhkov, Mickhail V.; Mirmelstein, Alexei; Yu, Sung-Woo; ...

    2013-02-26

    Electronic-structure calculations for clusters of plutonium have been performed within the framework of the relativistic discrete-variational method. These theoretical results, together with those calculated earlier for related systems, have been compared to spectroscopic data from experimental investigations of bulk systems, including photoelectron spectroscopy. Observation of the changes in the Pu electronic structure as a function of cluster size provides powerful insight into aspects of the bulk Pu electronic structure.

  20. Problems With Risk Reclassification Methods for Evaluating Prediction Models

    PubMed Central

    Pepe, Margaret S.

    2011-01-01

    For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
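    The Net Reclassification Index components the author recommends reporting can be computed directly from the paired risks; a minimal sketch with hypothetical risks and outcomes:

```python
def nri_components(old_risk, new_risk, outcome):
    """Event and non-event components of the Net Reclassification Index.
    A subject moves 'up' if the new model raises the estimated risk."""
    up_e = down_e = up_ne = down_ne = 0
    n_event = sum(outcome)
    n_nonevent = len(outcome) - n_event
    for old, new, y in zip(old_risk, new_risk, outcome):
        if new > old:
            up_e += y; up_ne += 1 - y
        elif new < old:
            down_e += y; down_ne += 1 - y
    nri_event = (up_e - down_e) / n_event        # net upward movement in events
    nri_nonevent = (down_ne - up_ne) / n_nonevent  # net downward movement in non-events
    return nri_event, nri_nonevent

old = [0.10, 0.20, 0.40, 0.60, 0.30, 0.05]  # hypothetical baseline-model risks
new = [0.30, 0.10, 0.55, 0.50, 0.35, 0.02]  # hypothetical expanded-model risks
y   = [1, 0, 1, 1, 0, 0]                    # 1 = event occurred
e, ne = nri_components(old, new, y)
print(round(e, 2), round(ne, 2))  # → 0.33 0.33
```

    Reporting the two components separately, as argued above, shows whether an improvement comes from better ranking of events, of non-events, or both, which the single summed NRI hides.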

  1. Validation of a new reference depletion calculation for thermal reactors (Validation d'un nouveau calcul de référence en évolution pour les réacteurs thermiques)

    NASA Astrophysics Data System (ADS)

    Canbakan, Axel

    Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Even if their aim is to correct the cross sections deviation, they introduce a non negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors are based on effective reaction rates obtained using an equivalence in dilution technique. With the increase of computing capacities, this method starts to show its limits in precision and can be replaced by a subgroup method. Originally used for fast neutron reactor calculations, the subgroup method has many advantages such as using an exact slowing down equation. The aim of this thesis is to suggest a validation as precise as possible without burnup, and then with an isotopic depletion study for the subgroup method. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modelization choices. Moreover, other parameters are validated to suggest a new reference scheme for fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method Of Characteristics flux calculation and a SHEM-like 281-energy group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energetic discretization for a subgroup method. Finally, other techniques such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation are studied. A DRAGON5 scheme is also validated as it shows interesting elements: the DRAGON5 subgroup method is run with a 295-eenergy group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first involves offering a new reference lattice scheme for Pressurized Water Reactors to DRAGON5 users. 
The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and using a flux calculation based on MOC in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in a geometrical region gives better results than MOC with a constant-source model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2-D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.

  2. Trial densities for the extended Thomas-Fermi model

    NASA Astrophysics Data System (ADS)

    Yu, An; Jimin, Hu

    1996-02-01

    A new and simplified form of nuclear densities is proposed for the extended Thomas-Fermi (ETF) method and applied to calculate the ground-state properties of several spherical nuclei, with results comparable to or even better than those of other conventional density profiles. Using the expectation value method (EVM) for microscopic corrections, we checked the new densities for spherical nuclei; the ground-state binding energies reproduce the Hartree-Fock (HF) calculations almost exactly. Further applications to nuclei far from the β-stability line are discussed.

  3. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.

  4. Prediction of solubilities for ginger bioactive compounds in hot water by the COSMO-RS method

    NASA Astrophysics Data System (ADS)

    Zaimah Syed Jaapar, Syaripah; Azian Morad, Noor; Iwai, Yoshio

    2013-04-01

    The solubilities in water of four main ginger bioactives, 6-gingerol, 6-shogaol, 8-gingerol and 10-gingerol, were predicted using conductor-like screening model for real solvents (COSMO-RS) calculations. This study was conducted because no experimental data are available for ginger bioactive solubilities in hot water. The σ-profiles of the selected molecules were calculated using Gaussian software and the solubilities were calculated using the COSMO-RS method at 50 to 200 °C. To validate the accuracy of the COSMO-RS method, the solubilities of five hydrocarbon molecules, 3-pentanone, 1-hexanol, benzene, 3-methylphenol and 2-hydroxy-5-methylbenzaldehyde, were calculated and compared with experimental data from the literature. The calculated results for the hydrocarbon molecules are in good agreement with the literature data, confirming that the solubilities of ginger bioactives can be predicted using the COSMO-RS method. The solubilities of the ginger bioactives are below 0.0001 mole fraction at temperatures lower than 130 °C. From 130 to 200 °C, the solubilities increase dramatically; at 200 °C the highest is 6-shogaol at 0.00037 mole fraction and the lowest is 10-gingerol at 0.000039 mole fraction.

  5. Spectral analysis of the UFBG-based acousto-optical modulator in V-I transmission matrix formalism

    NASA Astrophysics Data System (ADS)

    Wu, Liang-Ying; Pei, Li; Liu, Chao; Wang, Yi-Qun; Weng, Si-Jun; Wang, Jian-Shuai

    2014-11-01

    In this study, the V-I transmission matrix formalism (V-I method) is proposed to analyze the spectral characteristics of uniform fiber Bragg grating (FBG)-based acousto-optic modulators (UFBG-AOM). The simulation results demonstrate that both the amplitude of the acoustically induced strain and the frequency of the acoustic wave (AW) affect the spectrum. Additionally, the wavelength spacing between the primary and secondary reflectivity peaks is proportional to the acoustic frequency, with a ratio of 0.1425 nm/MHz. We also compare the computational cost. For an FBG with M periods, the V-I method requires 4 × (2M − 1) additions/subtractions, 8 × (2M − 1) multiplications/divisions and 2M exponential evaluations, almost a quarter of the cost of the multi-film and transfer matrix (TM) methods. The detailed analysis indicates that, compared with the conventional multi-film and TM methods, the V-I method is faster and less complex.

  6. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using Microsoft Access

    NASA Astrophysics Data System (ADS)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 to help INAA users choose among the comparator method, the k0-method and the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations from experiments with those from the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal-to-epithermal neutron flux ratio (f). The calculations of concentration involved the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal-to-epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α) and the detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample; other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing sample concentrations between the code-system and experiment. The experimental concentration values agreed with those of the ECC-UKM database code-system with good accuracy.

  7. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous-energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools, simple_ace.pl and simple_ace_mg.pl, for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems.

  8. Towards nonaxisymmetry: initial results using the Flux Coordinate Independent method in BOUT++

    NASA Astrophysics Data System (ADS)

    Shanahan, B. W.; Hill, P.; Dudson, B. D.

    2016-11-01

    Fluid simulation of stellarator edge transport is difficult due to the complexities of mesh generation; the stochastic edge and strong nonaxisymmetry inhibit the use of field aligned coordinate systems. The recent implementation of the Flux Coordinate Independent method for calculating parallel derivatives in BOUT++ has allowed for more complex geometries. Here we present initial results of nonaxisymmetric diffusion modelling as a step towards stellarator turbulence modelling. We then present initial (non-turbulent) transport modelling using the FCI method and compare the results with analytical calculations. The prospects for future stellarator transport and turbulence modelling are discussed.

  9. An analytical method to predict efficiency of aircraft gearboxes

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.; Black, J. D.

    1984-01-01

    A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.

  10. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  11. Renewable energy delivery systems and methods

    DOEpatents

    Walker, Howard Andrew

    2013-12-10

    A system, method and/or apparatus for the delivery of energy at a site, at least a portion of the energy being delivered by at least one or more of a plurality of renewable energy technologies, the system and method including calculating the load required by the site for the period; calculating the amount of renewable energy for the period, including obtaining a capacity and a percentage of the period for the renewable energy to be delivered; comparing the total load to the renewable energy available; and, implementing one or both of additional and alternative renewable energy sources for delivery of energy to the site.

  12. Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J. Kenneth

    2000-10-15

    A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.

  13. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  14. Comparison of the calculated QRS angle for bundle branch block detection

    NASA Astrophysics Data System (ADS)

    Goeirmanto, L.; Mengko, R.; Rajab, T. L.

    2016-04-01

    The QRS angle represents the condition of blood circulation in the heart. Normally the QRS angle lies between -30 and 90 degrees. Left Axis Deviation (LAD) and Right Axis Deviation (RAD) are abnormal conditions that can lead to bundle branch block. The QRS angle was calculated using the common method employed by physicians and compared with mathematical methods based on amplitude differences and area differences. We analyzed standard 12-lead electrocardiogram data from the MIT-BIH PhysioBank database. All methods using lead I and lead aVF produce similar QRS angles and the correct QRS axis quadrant. The QRS angle from the mathematical method using area differences is closest to the common physicians' method, so this mathematical method can be used as a trigger for detecting heart condition.
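    The axis computation from leads I and aVF is compact. The sketch below is an illustration of the standard hexaxial convention (lead I at 0°, lead aVF at +90°) with the normal/LAD/RAD bands from the abstract, not the paper's exact implementation; the function names are invented here:

    ```python
    import math

    def qrs_axis(net_lead_i, net_lead_avf):
        """Estimate the mean QRS axis (degrees) from net QRS amplitudes in
        lead I and lead aVF; lead I points at 0 deg, lead aVF at +90 deg."""
        return math.degrees(math.atan2(net_lead_avf, net_lead_i))

    def axis_category(angle):
        """Classify the axis: normal (-30..90 deg), LAD (< -30), RAD (> 90)."""
        if -30 <= angle <= 90:
            return "normal"
        return "LAD" if angle < -30 else "RAD"
    ```

    For example, equal net amplitudes in both leads give a 45° axis (normal), while a positive lead I with a negative aVF gives a negative axis, flagging LAD.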

  15. New method: calculation of magnification factor from an intracardiac marker.

    PubMed

    Cha, S D; Incarvito, J; Maranhao, V

    1983-01-01

    In order to calculate a magnification factor (MF), an intracardiac marker (a pigtail catheter with markers) was evaluated using a new formula and correlated with the conventional grid method. Applying the Pythagorean theorem and trigonometry, a new formula was developed (formula; see text). In an experimental study, the MF from the intracardiac markers was 0.71 +/- 0.15 (mean +/- SD) and that from the grid method was 0.72 +/- 0.15, with a correlation coefficient of 0.96. In the patient study, the MF from the intracardiac markers was 0.77 +/- 0.06 and that from the grid method was 0.77 +/- 0.05. We conclude that this new method is simple and that its results are comparable to the conventional grid method at mid-chest level.

  16. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR), first reported by Tossici-Bolt et al., is a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration in the striatum to the non-specific binding concentration in the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer brain contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we could not always define the ROI for the non-specific binding concentration (the reference region) and calculate the SBR appropriately. We therefore sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by two approaches: fixing the threshold that defines the reference region at specific values (the fixing method), and having an examiner visually optimize the reference region at every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated by each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as the baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
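    The defining ratio and the thresholding step can be sketched directly. This is a simplified illustration of the SBR definition (mean counts standing in for concentrations) and of a thresholded outline mask; the function names are illustrative, not the authors' code:

    ```python
    def specific_binding_ratio(striatal_counts, reference_counts):
        """SBR = (striatal concentration - reference concentration) / reference
        concentration, i.e. specific striatal binding relative to the
        non-specific (reference) level. Mean counts stand in for concentrations."""
        c_str = sum(striatal_counts) / len(striatal_counts)
        c_ref = sum(reference_counts) / len(reference_counts)
        return (c_str - c_ref) / c_ref

    def outline_mask(voxel_values, threshold_fraction):
        """Thresholding step: voxels at or above threshold_fraction x max
        define the outer contour (e.g. 0.5 for a 50% threshold, 0.3 for 30%)."""
        cutoff = threshold_fraction * max(voxel_values)
        return [v >= cutoff for v in voxel_values]
    ```

    Lowering the threshold fraction from 0.5 to 0.3 admits more low-count voxels into the outline, which is why the choice of threshold changes whether a usable reference region can be defined.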

  17. Directly calculated electrical conductivity of hot dense hydrogen from molecular dynamics simulation beyond Kubo-Greenwood formula

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Kang, Dongdong; Zhao, Zengxiu; Dai, Jiayu

    2018-01-01

    The electrical conductivity of hot dense hydrogen is directly calculated by molecular dynamics simulation with a reduced electron force field method, in which the electrons are represented as Gaussian wave packets with fixed sizes. Here, the temperature is higher than the electron Fermi temperature (T > 300 eV, ρ = 40 g/cc). The present method avoids the Coulomb catastrophe and gives the limit of electrical conductivity based on the Coulomb interaction. We investigate the effect of coupled ion-electron motion, which is lost in static methods such as the density functional theory based Kubo-Greenwood framework. It is found that the ionic dynamics, which contributes to the dynamical electrical microfield and to electron-ion collisions, significantly reduces the conductivity compared with fixed-ion-configuration calculations.

  18. Calculation of far-field scattering from nonspherical particles using a geometrical optics approach

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1991-01-01

    A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.

  19. Development of a neural network technique for KSTAR Thomson scattering diagnostics.

    PubMed

    Lee, Seung Hun; Lee, J H; Yamada, I; Park, Jae Sun

    2016-11-01

    Neural networks provide a powerful approach to dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Measuring plasma parameters in situ is essential for controlling tokamak plasmas in real time. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained with 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
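    As a toy sketch of the regression idea (the synthetic data, learning rate and cycle count are assumptions for illustration, not the KSTAR setup), a single-hidden-layer network with eight nodes, as in the abstract, can be trained in plain NumPy to map signal vectors to a scalar temperature-like target:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 5-channel "scattered signal" -> scalar target.
    X = rng.uniform(-1, 1, size=(200, 5))
    y = np.tanh(X @ np.array([0.5, -0.3, 0.8, 0.1, -0.6]))[:, None]

    # One hidden layer with eight nodes.
    W1 = rng.normal(0, 0.5, (5, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

    def forward(X):
        h = np.tanh(X @ W1 + b1)      # hidden activations
        return h, h @ W2 + b2          # hidden layer, linear output

    def mse(pred):
        return float(np.mean((pred - y) ** 2))

    loss_before = mse(forward(X)[1])
    lr = 0.1
    for _ in range(500):                       # gradient-descent training cycles
        h, pred = forward(X)
        g = 2 * (pred - y) / len(X)            # dLoss/dPred
        gW2, gb2 = h.T @ g, g.sum(0)
        gh = g @ W2.T * (1 - h ** 2)           # backprop through tanh
        gW1, gb1 = X.T @ gh, gh.sum(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    loss_after = mse(forward(X)[1])
    ```

    Once trained, evaluating the network is a handful of matrix products, which is what makes such a fit so much cheaper than an iterative χ² minimization at inference time.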

  20. Correlation matrix renormalization theory for correlated-electron materials with application to the crystalline phases of atomic hydrogen

    DOE PAGES

    Zhao, Xin; Liu, Jun; Yao, Yong-Xin; ...

    2018-01-23

    Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double-counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond-breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solid systems. Using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equations of state of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data than those from density functional theory and Hartree-Fock calculations.

  2. Innovative methods for calculation of freeway travel time using limited data : final report.

    DOT National Transportation Integrated Search

    2008-01-01

    Description: Travel time estimates created by processing simulated freeway loop detector data with the proposed method were compared with travel times reported by a VISSIM model. An improved methodology was proposed to estimate freeway corrido...

  3. [Interactions of DNA bases with individual water molecules. Molecular mechanics and quantum mechanics computation results vs. experimental data].

    PubMed

    Gonzalez, E; Lino, J; Deriabina, A; Herrera, J N F; Poltev, V I

    2013-01-01

    To elucidate details of DNA-water interactions, we performed calculations and a systematic search for minima of the interaction energy of systems consisting of one DNA base and one or two water molecules. The results of calculations using two molecular mechanics (MM) force fields and the correlated ab initio quantum mechanics (QM) method MP2/6-31G(d,p) were compared with one another and with experimental data. The calculations demonstrated qualitative agreement between the geometric characteristics of most of the local energy minima obtained via the different methods. The deepest minima revealed by the MM and QM methods correspond to a water molecule positioned between two neighboring hydrophilic centers of the base, forming hydrogen bonds with both. Nevertheless, the relative depths of some minima and the details of the mutual water-base positions in these minima depend on the method used. The analysis revealed that some differences between the results of the methods are insignificant, while others are important for the description of DNA hydration. The MM calculations enabled us to reproduce quantitatively all the experimental data on the enthalpies of complex formation of a single water molecule with a set of mono-, di-, and trimethylated bases, as well as on water molecule locations near base hydrophilic atoms in crystals of DNA duplex fragments, while some of these data cannot be rationalized by the QM calculations.

  4. Volume calculation of CT lung lesions based on Halton low-discrepancy sequences

    NASA Astrophysics Data System (ADS)

    Li, Shusheng; Wang, Liansheng; Li, Shuo

    2017-03-01

    Volume calculated from Computed Tomography (CT) lung lesion data is a significant parameter for clinical diagnosis. It is widely used to assess the severity of lung nodules and track their progression; however, previous approaches have not achieved the accuracy and efficiency required for clinical use. Volume calculation remains a challenging task owing to the lesions' tight attachment to the lung wall, inhomogeneous background noise and large variations in size and shape. In this paper, we employ Halton low-discrepancy sequences to calculate the volume of lung lesions. The proposed method computes the volume directly, without three-dimensional (3D) model reconstruction or surface triangulation, which significantly improves efficiency and reduces complexity. The main steps of the proposed method are: (1) generate a number of quasi-random points in each slice using Halton low-discrepancy sequences and calculate the lesion area of each slice from the proportion of points falling inside the lesion; (2) obtain the volume by integrating the areas along the sagittal direction. To evaluate the proposed method, experiments were conducted on data sets with lung lesions of different sizes. With the uniform distribution of the sample points, the proposed method achieves more accurate results than other methods, demonstrating its robustness and accuracy for the volume calculation of CT lung lesions. In addition, the method is easy to follow and can be applied to other problems, e.g., volume calculation of liver tumors, atrial wall aneurysms, etc.
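    The two steps can be sketched as follows, assuming a per-slice membership test standing in for the lesion mask (the function names and the choice of bases 2 and 3 are illustrative, not the authors' code):

    ```python
    def halton(index, base):
        """Radical-inverse (van der Corput) value of `index` in `base`;
        pairing bases 2 and 3 gives a 2-D Halton low-discrepancy sequence."""
        f, result = 1.0, 0.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    def slice_area(inside, bbox, n=10000):
        """Step 1: area of a 2-D region within bbox = (xmin, xmax, ymin, ymax)
        as (fraction of Halton points inside) x (bounding-box area)."""
        xmin, xmax, ymin, ymax = bbox
        hits = 0
        for i in range(1, n + 1):
            x = xmin + (xmax - xmin) * halton(i, 2)
            y = ymin + (ymax - ymin) * halton(i, 3)
            if inside(x, y):
                hits += 1
        return hits / n * (xmax - xmin) * (ymax - ymin)

    def lesion_volume(slice_masks, bbox, thickness, n=10000):
        """Step 2: integrate the per-slice areas along the stacking direction."""
        return thickness * sum(slice_area(m, bbox, n) for m in slice_masks)
    ```

    Because Halton points fill the box more evenly than pseudo-random points, the area estimate converges faster than plain Monte Carlo sampling at the same point count.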

  5. Conformational analysis, spectroscopic study (FT-IR, FT-Raman, UV, 1H and 13C NMR), molecular orbital energy and NLO properties of 5-iodosalicylic acid

    NASA Astrophysics Data System (ADS)

    Karaca, Caglar; Atac, Ahmet; Karabacak, Mehmet

    2015-02-01

    In this study, 5-iodosalicylic acid (5-ISA, C7H5IO3) is structurally characterized by FT-IR, FT-Raman, NMR and UV spectroscopies. The molecule has eight conformers, Cn, n = 1-8; the ground-state molecular geometries of all eight are therefore calculated using the ab initio density functional theory (DFT) B3LYP approach with the aug-cc-pVDZ-PP basis set for iodine and the aug-cc-pVDZ basis set for the other elements. The computational results identify the C1 form as the most stable conformer of 5-ISA. The vibrational spectra are calculated with the same DFT method and basis sets, and the fundamental vibrations are assigned on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method using the PQS program. Total density of states (TDOS), partial density of states (PDOS) and overlap population density of states (COOP or OPDOS) diagrams for the C1 conformer were calculated using the same method. The energies and oscillator strengths calculated by time-dependent density functional theory (TD-DFT) complement the experimental findings. In addition, the charge transfer occurring in the molecule between the HOMO and LUMO, the frontier energy gap, and the molecular electrostatic potential (MEP) are calculated and presented. The NMR chemical shift (1H and 13C) spectra are recorded and calculated using the gauge-independent atomic orbital (GIAO) method. Mulliken atomic charges of the title molecule are also calculated, interpreted and compared with those of salicylic acid. The optimized bond lengths, bond angles and calculated NMR, UV and vibrational wavenumbers show good agreement with the experimental results.

  6. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas of the brain. Phosphene threshold (PT) estimation is a preliminary step in many TMS experiments on visual cognition, used to set the appropriate TMS dose; however, the lack of a direct comparison of the available methods for PT estimation leaves their reliability unresolved. The present work aims to fill this gap. We compared the most common methods for PT calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds equally reliably with MOCS or REPT, depending on their specific investigation goals. We suggest several important factors to consider when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Calculation of astrophysical S-factor and reaction rate in 12C(p, γ)13N reaction

    NASA Astrophysics Data System (ADS)

    Moghadasi, A.; Sadeghi, H.; Pourimani, R.

    2018-02-01

    The 12C(p, γ)13N reaction is the first process in the CNO cycle and is a source of low-energy solar neutrinos in various neutrino experiments. It is therefore of high interest to obtain astrophysical S-factor data at low energies. Applying Faddeev's method, we calculated the wave functions for the bound state of 13N. The resonant and non-resonant cross sections were then calculated using the Breit-Wigner and direct-capture cross-section formulae, respectively. We calculated the total S-factor and compared it with previous experimental data, finding good agreement. Extrapolating the S-factor to zero energy gave 1.32 ± 0.19 keV·b. Finally, we calculated the reaction rate and compared it with NACRE data.

  8. Brain Volume Estimation Enhancement by Morphological Image Processing Tools.

    PubMed

    Zeinali, R; Keshtkar, A; Zamani, A; Gharehaghaji, N

    2017-12-01

    Volume estimation of the brain is important for many neurological applications, such as measuring brain growth and changes in the brains of normal or abnormal patients. Accurate brain volume measurement is therefore very important. Magnetic resonance imaging (MRI) is the method of choice for volume quantification owing to its excellent image resolution and between-tissue contrast. Stereology is a good method for estimating volume, but it requires segmenting enough MRI slices at sufficient resolution. This study aims to enhance the stereology method so that brain volume can be estimated from fewer MRI slices at lower resolution. A program for calculating volume using the stereology method was developed; applying morphological dilation enhanced the stereological estimate. For evaluation, we used T1-weighted MR images from the digital phantom in BrainWeb, which provides ground truth. The volumes of 20 normal brains extracted from BrainWeb were calculated, and the volumes of white matter, gray matter and cerebrospinal fluid of given dimensions were estimated correctly. Volume calculation by the stereology method was made in three cases, for which the Root Mean Square Error (RMSE) was measured: Case I with T=5, d=5; Case II with T=10, d=10; and Case III with T=20, d=20 (T = slice thickness, d = resolution, as stereology parameters). Comparing the results of the two methods shows that the RMSE values of the proposed method are smaller than those of the plain stereology method. Morphological dilation thus enhances stereological volume estimation; with fewer MRI slices and fewer test points, the proposed method works much better than the standard stereology method.
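    The stereological (Cavalieri-type) estimate underlying this record multiplies the number of test points hitting the structure by the area each point represents and by the slice thickness. A minimal sketch, with hypothetical point counts; the dilation step of the paper's enhancement is not reproduced here.

```python
def cavalieri_volume(point_counts, slice_thickness, grid_spacing):
    """Cavalieri/stereology volume estimate: each counted grid point
    represents an area of grid_spacing**2 on its slice, and each slice
    represents a slab of thickness slice_thickness."""
    return slice_thickness * grid_spacing ** 2 * sum(point_counts)

def rmse(estimates, truths):
    """Root Mean Square Error between estimated and ground-truth volumes."""
    n = len(estimates)
    return (sum((e - t) ** 2 for e, t in zip(estimates, truths)) / n) ** 0.5

# Hypothetical counts of grid points hitting brain tissue on 5 MRI slices,
# with T = 5 mm slice thickness and d = 5 mm point spacing:
counts = [120, 150, 160, 140, 110]
volume_mm3 = cavalieri_volume(counts, 5.0, 5.0)
```

    Against a phantom with known volumes (as in BrainWeb), `rmse` compares estimates from different (T, d) cases exactly as the three cases in the abstract do.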

  9. A Strategy for Assessing Costs of Implementing New Practices in the Child Welfare System: Adapting the English Cost Calculator in the United States

    PubMed Central

    Snowden, Lonnie R.; Padgett, Courtenay; Saldana, Lisa; Roles, Jennifer; Holmes, Lisa; Ward, Harriet; Soper, Jean; Reid, John; Landsverk, John

    2015-01-01

    In decisions to adopt and implement new practices or innovations in child welfare, costs are often a bottom-line consideration. The cost calculator, a method developed in England that can be used to calculate unit costs of core case work activities and associated administrative costs, is described as a potentially helpful tool for assisting child welfare administrators to evaluate the costs of current practices relative to their outcomes and could impact decisions about whether to implement new practices. The process by which the cost calculator is being adapted for use in US child welfare systems in two states is described and an illustration of using the method to compare two intervention approaches is provided. PMID:20976620

  10. A strategy for assessing costs of implementing new practices in the child welfare system: adapting the English cost calculator in the United States.

    PubMed

    Chamberlain, Patricia; Snowden, Lonnie R; Padgett, Courtenay; Saldana, Lisa; Roles, Jennifer; Holmes, Lisa; Ward, Harriet; Soper, Jean; Reid, John; Landsverk, John

    2011-01-01

    In decisions to adopt and implement new practices or innovations in child welfare, costs are often a bottom-line consideration. The cost calculator, a method developed in England that can be used to calculate unit costs of core case work activities and associated administrative costs, is described as a potentially helpful tool for assisting child welfare administrators to evaluate the costs of current practices relative to their outcomes and could impact decisions about whether to implement new practices. The process by which the cost calculator is being adapted for use in US child welfare systems in two states is described and an illustration of using the method to compare two intervention approaches is provided.

  11. Molecular structure and vibrational spectra of Irinotecan: a density functional theoretical study.

    PubMed

    Chinna Babu, P; Sundaraganesan, N; Sudha, S; Aroulmoji, V; Murano, E

    2012-12-01

    The solid-phase FTIR and FT-Raman spectra of Irinotecan have been recorded in the regions 400-4000 and 50-4000 cm(-1), respectively. The spectra were interpreted in terms of fundamental modes, combination bands and overtone bands. The structure of the molecule was optimized and its structural characteristics determined by density functional theory (DFT) using the B3LYP method with the 6-31G(d) basis set. The vibrational frequencies calculated for Irinotecan by the DFT method were compared with the experimental frequencies, yielding good agreement between observed and calculated values. The infrared spectrum was also simulated from the calculated intensities. In addition, the molecular electrostatic potential (MEP) and frontier molecular orbital (FMO) analyses were investigated using theoretical calculations. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Relativistic many-body calculation of energies, multipole transition rates, and lifetimes in tungsten ions

    NASA Astrophysics Data System (ADS)

    Safronova, U. I.; Safronova, M. S.; Nakamura, N.

    2017-04-01

    Atomic properties of Cd-like W26+, In-like W25+, and Sn-like W24+ ions are evaluated by using a relativistic CI+all-order approach that combines configuration-interaction and coupled-cluster methods. The energies, transition rates, and lifetimes of low-lying levels are calculated and compared with available theoretical and experimental values. The magnetic-dipole transition rates are calculated to determine the branching ratios and lifetimes for the 4f^3 states in W25+ and for the 4f^4 states in W24+ ions. Excellent agreement of the CI+all-order values provided a benchmark test of this method for the 4f^n configurations, validating the recommended values of tungsten ion properties calculated in this work.

  13. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method, chosen because it had the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) among the methods considered (multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing). The forecasting results were then converted into raw material needs, from which the inventory cost was calculated using the SMH method. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami, which affected the total inventory cost. The results suggest that the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
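    The Silver-Meal heuristic named above chooses each order to cover as many future periods as possible while the average cost per period (setup plus holding) keeps falling, then starts a new order. A minimal stdlib sketch; the demand figures and costs are hypothetical, not the company's data.

```python
def silver_meal(demands, setup_cost, holding_cost):
    """Silver-Meal lot-sizing heuristic: extend each order to cover more
    periods while the average cost per period keeps decreasing; stop at
    the first increase. Returns order quantities (0 = no order placed)."""
    n = len(demands)
    orders = [0] * n
    t = 0
    while t < n:
        best_k, best_avg = 1, setup_cost  # covering 1 period: no holding cost
        hold = 0.0
        for k in range(2, n - t + 1):
            # demand for period t+k-1 is held for k-1 periods
            hold += (k - 1) * holding_cost * demands[t + k - 1]
            avg = (setup_cost + hold) / k
            if avg >= best_avg:
                break
            best_k, best_avg = k, avg
        orders[t] = sum(demands[t:t + best_k])
        t += best_k
    return orders

# Hypothetical weekly raw-material demands, setup cost 100, holding cost 1:
plan = silver_meal([10, 60, 15, 150, 110], 100.0, 1.0)
```

    The total inventory cost of the plan is then the number of orders times the setup cost plus the accumulated holding cost, which is what the abstract compares against the company's current policy.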

  14. The time-resolved photoelectron spectrum of toluene using a perturbation theory approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richings, Gareth W.; Worth, Graham A., E-mail: g.a.worth@bham.ac.uk

    A theoretical study of the intra-molecular vibrational-energy redistribution of toluene using time-resolved photo-electron spectra calculated using nuclear quantum dynamics and a simple, two-mode model is presented. Calculations have been carried out using the multi-configuration time-dependent Hartree method, using three levels of approximation for the calculation of the spectra. The first is a full quantum dynamics simulation with a discretisation of the continuum wavefunction of the ejected electron, whilst the second uses first-order perturbation theory to calculate the wavefunction of the ion. Both methods rely on the explicit inclusion of both the pump and probe laser pulses. The third method includes only the pump pulse and generates the photo-electron spectrum by projection of the pumped wavepacket onto the ion potential energy surface, followed by evaluation of the Fourier transform of the autocorrelation function of the subsequently propagated wavepacket. The calculations performed have been used to study the periodic population flow between the 6a and 10b16b modes in the S1 excited state, and compared to recent experimental data. We obtain results in excellent agreement with the experiment and note the efficiency of the perturbation method.

  15. Practical Calculation of Second-order Supersonic Flow past Nonlifting Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Van Dyke, Milton D

    1952-01-01

    Calculation of second-order supersonic flow past bodies of revolution at zero angle of attack is described in detail, and reduced to routine computation. Use of an approximate tangency condition is shown to increase the accuracy for bodies with corners. Tables of basic functions and standard computing forms are presented. The procedure is summarized so that one can apply it without necessarily understanding the details of the theory. A sample calculation is given, and several examples are compared with solutions calculated by the method of characteristics.

  16. Relativistic scattered wave calculations on UF6

    NASA Technical Reports Server (NTRS)

    Case, D. A.; Yang, C. Y.

    1980-01-01

    Self-consistent Dirac-Slater multiple scattering calculations are presented for UF6. The results are compared critically to other relativistic calculations, showing that the results of all molecular orbital calculations are in qualitative agreement, as measured by energy levels, population analyses, and spin-orbit splittings. A detailed comparison is made to the relativistic X alpha(RX alpha) method of Wood and Boring, which also uses multiple scattering theory, but incorporates relativistic effects in a more approximate fashion. For the most part, the RX alpha results are in agreement with the present results.

  17. Comparing photonic band structure calculation methods for diamond and pyrochlore crystals.

    PubMed

    Vermolen, E C M; Thijssen, J H J; Moroz, A; Megens, M; van Blaaderen, A

    2009-04-27

    The photonic band diagrams of close-packed colloidal diamond and pyrochlore structures have been studied using Korringa-Kohn-Rostoker (KKR) and plane-wave calculations. In addition, the occurrence of a band gap has been investigated for the binary Laves structures and their constituent large- and small-sphere substructures. It was recently shown that these Laves structures make it possible to fabricate the diamond and pyrochlore structures by self-organization. Comparing the two calculation methods makes it possible to assess the validity and convergence of the results, which have been an issue for diamond-related structures in the past. The KKR calculations systematically give a lower value for the gap width than the plane-wave calculations. This difference can partly be ascribed to a convergence issue in the plane-wave code when a contact point of two spheres coincides with the grid.

  18. Calculation of astrophysical S-factor in reaction ^{13}C(p,γ )^{14}N for first resonance levels

    NASA Astrophysics Data System (ADS)

    Moghadasi, A.; Sadeghi, H.; Pourimani, R.

    2018-01-01

    The ^{13}C(p,γ )^{14}N reaction is one of the important reactions in the CNO cycle, a key process in nucleosynthesis. We first calculated wave functions for the bound state of ^{14}N with Faddeev's method, treating the reaction components as ^{12}C+n+p. The non-resonant and resonant cross sections were then calculated using the direct-capture cross section and Breit-Wigner formulae, respectively. Next, we calculated the total S-factor and compared it with experimental data, finding good agreement. We then extrapolated the S-factor for the transition to the ground state to zero energy, obtaining S(0)=5.8 ± 0.7 (keV b), and calculated the reaction rate. These results agree with previously reported values.
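    The two ingredients this record combines, a single-level Breit-Wigner resonant cross section and the astrophysical S-factor, can be sketched numerically. The prefactor `lam2_omega` (bundling the pi*lambda^2 and spin-statistics factors) and all inputs below are illustrative assumptions; the constant 31.29 is the standard numerical form of the Gamow/Sommerfeld exponent 2*pi*eta for E in keV and reduced mass in amu.

```python
import math

def breit_wigner_sigma(e, e_r, gamma_in, gamma_out, gamma_tot, lam2_omega):
    """Single-level Breit-Wigner cross section around a resonance at e_r;
    lam2_omega bundles the pi*lambda^2*omega prefactor (assumed given)."""
    return lam2_omega * gamma_in * gamma_out / ((e - e_r) ** 2 + gamma_tot ** 2 / 4.0)

def sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev):
    """2*pi*eta for a charged-particle reaction; 31.29 holds for
    E in keV and reduced mass in amu (standard numerical form)."""
    return 31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev)

def s_factor(sigma_barn, z1, z2, mu_amu, e_kev):
    """Astrophysical S-factor S(E) = sigma(E) * E * exp(2*pi*eta), in keV b:
    the Coulomb-barrier penetration is factored out so S varies slowly
    and can be extrapolated toward E = 0."""
    return sigma_barn * e_kev * math.exp(sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev))
```

    For 13C(p,γ)14N one would use z1=1, z2=6 and the p+13C reduced mass; the exponential grows steeply at low energy, which is why cross sections are converted to S-factors before extrapolating to zero energy.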

  19. Calculation of flow about two-dimensional bodies by means of the velocity-vorticity formulation on a staggered grid

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.

    1991-01-01

    A method is presented for calculating the incompressible viscous flow about two-dimensional bodies using the velocity-vorticity form of the Navier-Stokes equations on a staggered grid. The solution is obtained by employing an alternating-direction implicit method for the block tridiagonal matrix resulting from the finite-difference representation of the governing equations. The boundary vorticity and the conservation of mass are calculated implicitly as part of the solution; mass conservation is satisfied to machine zero for the duration of the computation. Calculations for the flow about a circular cylinder, a 2-percent-thick flat plate at 90-deg incidence, an elliptic cylinder at 45-deg incidence, and a NACA 0012, with and without a deflected flap, at -90-deg incidence are performed and compared with the results of other numerical investigations.

  20. Numerical calculation of protein-ligand binding rates through solution of the Smoluchowski equation using smoothed particle hydrodynamics

    DOE PAGES

    Pan, Wenxiao; Daily, Michael; Baker, Nathan A.

    2015-05-07

    Background: The calculation of diffusion-controlled ligand binding rates is important for understanding enzyme mechanisms as well as designing enzyme inhibitors. Methods: We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) BC, is considered on the reactive boundaries. This new BC treatment allows for the analysis of enzymes with "imperfect" reaction rates. Results: The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to a mouse acetylcholinesterase (mAChE) monomer. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Conclusions: Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.

  1. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  2. Analysis of focusing error signals by differential astigmatic method under off-center tracking in the land-groove-type optical disk

    NASA Astrophysics Data System (ADS)

    Shinoda, Masahisa; Nakatani, Hidehiko

    2015-04-01

    We theoretically calculate the behavior of the focusing error signal in a land-groove-type optical disk when the objective lens tracks off-center with respect to the disk radius. The differential astigmatic method is employed instead of the conventional astigmatic method for generating the focusing error signals. The signal behaviors are compared and analyzed in terms of the difference in slope sensitivity (gain) of the focusing error signals from the land and the groove. In our calculation, the digital versatile disc-random access memory (DVD-RAM) format is adopted as the land-groove-type optical disk model, and conditions favorable for suppressing the gain difference are investigated. The calculation method and results described in this paper will inform the next generation of land-groove-type optical disks.

  3. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved Ransac Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
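    Step 2 of the pipeline above computes discontinuity intersections, and step 3 extracts block corners where three discontinuity planes meet. A corner is the solution of three plane equations n·x = d; a minimal stdlib sketch using Cramer's rule (the plane data are illustrative, not from the paper's point clouds):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as three row triples."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def three_plane_corner(planes):
    """Corner point where three discontinuity planes n . x = d intersect.
    `planes` is a list of (normal, offset) pairs; returns None when the
    normals are (near-)coplanar and no unique corner exists."""
    normals = [list(n) for n, _ in planes]
    rhs = [d for _, d in planes]
    det = det3(normals)
    if abs(det) < 1e-12:
        return None
    coords = []
    for i in range(3):
        m = [row[:] for row in normals]
        for row, d in zip(m, rhs):
            row[i] = d  # replace column i with the right-hand side
        coords.append(det3(m) / det)
    return tuple(coords)

# Three mutually orthogonal planes x=1, y=2, z=3 meet at a single corner:
corner = three_plane_corner([((1.0, 0.0, 0.0), 1.0),
                             ((0.0, 1.0, 0.0), 2.0),
                             ((0.0, 0.0, 1.0), 3.0)])
```

    In the full method, candidate corners found this way are then validated as belonging to a "true" block (the improved Floodfill step).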

  4. Exact stochastic unraveling of an optical coherence dynamics by cumulant expansion

    NASA Astrophysics Data System (ADS)

    Olšina, Jan; Kramer, Tobias; Kreisbeck, Christoph; Mančal, Tomáš

    2014-10-01

    A numerically exact Monte Carlo scheme for calculation of open quantum system dynamics is proposed and implemented. The method consists of a Monte Carlo summation of a perturbation expansion in terms of trajectories in Liouville phase-space with respect to the coupling between the excited states of the molecule. The trajectories are weighted by a complex decoherence factor based on the second-order cumulant expansion of the environmental evolution. The method can be used with an arbitrary environment characterized by a general correlation function and arbitrary coupling strength. It is formally exact for harmonic environments, and it can be used with arbitrary temperature. Time evolution of an optically excited Frenkel exciton dimer representing a molecular exciton interacting with a charge transfer state is calculated by the proposed method. We calculate the evolution of the optical coherence elements of the density matrix and linear absorption spectrum, and compare them with the predictions of standard simulation methods.

  5. Comparative evaluation of hemodynamic and respiratory parameters during mechanical ventilation with two tidal volumes calculated by demi-span based height and measured height in normal lungs

    PubMed Central

    Seresht, L. Mousavi; Golparvar, Mohammad; Yaraghi, Ahmad

    2014-01-01

    Background: Appropriate determination of tidal volume (VT) is important for preventing ventilation-induced lung injury. We compared hemodynamic and respiratory parameters under two VTs calculated from body weight (BW) estimated either from measured height (HBW) or from demi-span-based height (DBW). Materials and Methods: This controlled trial was conducted in St. Alzahra Hospital in 2009 on American Society of Anesthesiologists (ASA) I and II patients aged 18-65 years. Standing height and weight were measured, and height was also calculated using the demi-span method. BW and VT were calculated with the acute respiratory distress syndrome-net (ARDSnet) formula. Patients were randomized and then crossed over to receive ventilation with both calculated VTs for 20 min. Hemodynamic and respiratory parameters were analyzed with SPSS version 20.0 using univariate and multivariate analyses. Results: Forty-nine patients were studied. Demi-span-based body weight and thus VT (DTV) were lower than height-based body weight and VT (HTV) (P = 0.028), in male patients (P = 0.005). Differences were observed in peak airway pressure (PAP) and airway resistance (AR) changes, with higher PAP and AR at 20 min after receiving HTV compared with DTV. Conclusions: VT estimated from measured height is higher than that based on demi-span, this difference exists only in females, and the higher VT results in higher airway pressures during mechanical ventilation. PMID:24627845
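    The calculation chain in this record (demi-span → height → predicted body weight → VT) can be sketched as below. The regression coefficients for demi-span and the ARDSnet-style predicted-body-weight constants are commonly cited values used here purely for illustration; they are assumptions, and the study's exact formulas may differ.

```python
def height_from_demispan(demispan_cm, sex):
    """Estimate standing height (cm) from demi-span (cm).
    Coefficients are one published regression, used here as an
    illustrative assumption."""
    if sex == "male":
        return 1.40 * demispan_cm + 57.8
    return 1.35 * demispan_cm + 60.1

def predicted_body_weight(height_cm, sex):
    """ARDSnet-style predicted body weight (kg) from height."""
    base = 50.0 if sex == "male" else 45.5
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_ml(height_cm, sex, ml_per_kg=8.0):
    """Tidal volume as ml_per_kg times predicted body weight."""
    return ml_per_kg * predicted_body_weight(height_cm, sex)

# Comparing VT from a measured height versus a demi-span-derived height:
vt_measured = tidal_volume_ml(170.0, "male", 6.0)
vt_demispan = tidal_volume_ml(height_from_demispan(78.0, "male"), "male", 6.0)
```

    Because VT scales linearly with estimated height, even a few centimetres of difference between the two height estimates propagates directly into the delivered tidal volume, which is the effect the study measures.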

  6. An accurate cost effective DFT approach to study the sensing behaviour of polypyrrole towards nitrate ions in gas and aqueous phases.

    PubMed

    Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid

    2016-07-28

    Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in the gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold-standard CCSD(T) method. Then, a number of low-cost DFT methods are evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low-cost methods evaluated here include dispersion-corrected potentials (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and the Minnesota functional M05-2X. The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with those from the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low-cost methods in order to quantify the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated and then extrapolated to an infinite polymer through a second-degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density-of-states studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density-of-states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium compared to the gas phase.
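    The counterpoise correction mentioned above removes basis-set superposition error (BSSE) by evaluating each monomer in the full dimer basis. The bookkeeping is simple arithmetic on computed total energies; the energy values below are hypothetical placeholders, not results from the paper.

```python
def counterpoise_interaction(e_complex, e_a_in_ab, e_b_in_ab):
    """Counterpoise-corrected interaction energy: both monomer energies
    are evaluated in the full dimer (AB) basis so that BSSE cancels."""
    return e_complex - e_a_in_ab - e_b_in_ab

def bsse(e_a_in_ab, e_a_in_a, e_b_in_ab, e_b_in_b):
    """BSSE recovered by the CP scheme: how much each monomer is
    artificially stabilized by borrowing the partner's basis functions."""
    return (e_a_in_a - e_a_in_ab) + (e_b_in_b - e_b_in_ab)

# Hypothetical total energies (hartree): complex, and each monomer in the
# dimer basis and in its own basis.
e_int = counterpoise_interaction(-3.050, -1.002, -2.003)
e_bsse = bsse(-1.002, -1.000, -2.003, -2.000)
```

    A positive `e_bsse` indicates the uncorrected interaction energy was overbound, which is why CP-corrected B3LYP values track the calibrated benchmarks more closely in the abstract.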

  7. Determination of the Kwall correction factor for a cylindrical ionization chamber to measure air-kerma in 60Co gamma beams.

    PubMed

    Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M

    2002-07-21

    The factor Kwall, which corrects for photon attenuation and scatter in the wall of ionization chambers used for 60Co air-kerma measurement, has traditionally been determined by a procedure based on linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions, and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made for comparison with the measurements. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods, and the appreciable deviation (up to about 1%) between the results of both methods and those of the traditional extrapolation procedure, support the conclusion that the two independent methods providing comparable results are correct and that the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
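    The traditional procedure that this record argues against is easy to state: fit the chamber current versus wall thickness with a straight line, extrapolate to zero thickness, and take the ratio to the current at the nominal wall. A minimal sketch of that procedure (with hypothetical data), useful mainly for seeing what the critique targets:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a + b*x (stdlib only)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

def kwall_by_extrapolation(wall_thicknesses, currents):
    """Traditional Kwall estimate: extrapolate the chamber current to zero
    wall thickness and divide by the current at the nominal (first listed)
    wall thickness. The paper's point is that this linear extrapolation
    can be biased for cylindrical/spherical chambers."""
    a, _ = linear_fit(wall_thicknesses, currents)
    return a / currents[0]

# Hypothetical currents measured with build-up caps of increasing thickness:
kwall = kwall_by_extrapolation([1.0, 2.0, 3.0], [0.99, 0.98, 0.97])
```

    The effective-wall-thickness analysis of the paper replaces the x-axis of this fit, which is precisely where the two approaches diverge.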

  8. a New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between rock porosity and the fractal characteristics of pore structures, a new improved image segmentation method is proposed that uses the calculated porosity of each core image as a constraint to obtain the best threshold. Comparative analysis shows that the porosity method segments images best in theory, but its actual segmentation deviates from the real situation: owing to core heterogeneity and isolated pores, the porosity method, which takes the experimental porosity of the whole core as the criterion, cannot achieve the desired segmentation. The new improved method overcomes this shortcoming and produces a more reasonable binary segmentation of core grayscale images by segmenting each image according to its own calculated porosity. Moreover, basing the segmentation on calculated rather than measured porosity greatly saves manpower and material resources, especially for tight rocks.
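    The core of the porosity-constrained approach is a one-dimensional search: pick the grayscale threshold whose resulting pore fraction best matches the target porosity. A minimal sketch over a flat pixel list (pores assumed darker than grains; the pixel data are hypothetical):

```python
def porosity_threshold(pixels, target_porosity):
    """Return the 8-bit grayscale threshold whose pore fraction
    (pixels at or below the threshold) is closest to target_porosity."""
    n = len(pixels)
    best_t, best_err = 0, float("inf")
    for t in range(256):
        pore_fraction = sum(1 for p in pixels if p <= t) / n
        err = abs(pore_fraction - target_porosity)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Toy image: 2 dark pore pixels and 8 bright grain pixels, porosity 0.2:
t = porosity_threshold([10, 10] + [200] * 8, 0.2)
```

    The paper's improvement is in where the target comes from: each image's own calculated porosity rather than the single experimental porosity of the whole core.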

  9. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods: the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the dose-prescription method employed.

  10. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited numbermore » of unique tracks in the pregenerated track bank is missing from the paper. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D{sub max}. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. 
Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
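
The Poisson behavior of the latent uncertainty reported above can be illustrated with a short sketch. The 1/sqrt(N) scaling and the anchor point (about 1% at 60,000 unique tracks) come from the abstract; the function name and exact normalization are assumptions:

```python
import math

def latent_uncertainty(n_unique_tracks, sigma_ref=1.0, n_ref=60_000):
    # Relative latent uncertainty (in %) for a track bank of the given size,
    # assuming the Poisson 1/sqrt(N) scaling reported in the abstract and
    # anchoring it to ~1% at 60,000 unique tracks per energy.
    return sigma_ref * math.sqrt(n_ref / n_unique_tracks)

# Quadrupling the track bank halves the latent uncertainty.
print(latent_uncertainty(240_000))  # 0.5
```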

  11. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods

    PubMed Central

    Shah, S. N. R.; Sulong, N. H. Ramli; Shariati, Mahdi; Jumaat, M. Z.

    2015-01-01

Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of the beam end connectors used commercially in SPR BCCs varies and does not allow a generalized analytical approach for all types of beam end connectors; however, identifying the effects of the configuration, profile, and sizes of the connection components is a suitable approach for practicing design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were defined based on variations in column thickness, beam depth, and the number of tabs in the beam end connector, in order to identify the factors with the greatest influence on connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes, and the influence of the selected parameters on connection performance were investigated. A comparative study of connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method, and the equal area method. To identify the most appropriate method, the mean stiffness of all the tested connections and the variance in the mean stiffness values were calculated for all three methods. The initial stiffness method is considered to overestimate the stiffness compared to the other two methods. The equal area method provided the most consistent stiffness values and the lowest variance in the data set. PMID:26452047
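
The three stiffness measures compared in the abstract can be sketched from a measured M-θ curve. The definitions below (a least-squares initial slope, a secant at half the ultimate moment, and an equal-area line through the origin) are plausible readings of the method names, not the paper's implementations:

```python
def initial_stiffness(theta, moment, frac=0.1):
    # Slope of a least-squares line through the origin fitted to the first
    # `frac` of the rotation range (one plausible reading of the method).
    n = max(2, int(len(theta) * frac))
    num = sum(t * m for t, m in zip(theta[:n], moment[:n]))
    den = sum(t * t for t in theta[:n])
    return num / den

def half_ultimate_stiffness(theta, moment):
    # Secant slope from the origin to the first point reaching half the
    # ultimate (maximum) moment.
    half = max(moment) / 2.0
    for t, m in zip(theta, moment):
        if t > 0 and m >= half:
            return m / t
    raise ValueError("curve never reaches half the ultimate moment")

def equal_area_stiffness(theta, moment):
    # Slope k of a line through the origin whose triangular area up to the
    # final rotation equals the trapezoidal area under the measured curve
    # (a simplified reading of the equal area method).
    area = sum(0.5 * (moment[i] + moment[i + 1]) * (theta[i + 1] - theta[i])
               for i in range(len(theta) - 1))
    return 2.0 * area / theta[-1] ** 2

# For a perfectly linear connection all three estimates coincide.
th = [i * 0.001 for i in range(101)]   # rotations, rad
mo = [50.0 * t for t in th]            # moments for a stiffness of 50
print(round(initial_stiffness(th, mo), 6),
      round(half_ultimate_stiffness(th, mo), 6),
      round(equal_area_stiffness(th, mo), 6))  # 50.0 50.0 50.0
```

For real (nonlinear) test data the three estimates diverge, which is exactly the spread the study quantifies.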

  12. The lifetime risk of maternal mortality: concept and measurement

    PubMed Central

    2009-01-01

Objective: The lifetime risk of maternal mortality, which describes the cumulative loss of life due to maternal deaths over the female life course, is an important summary measure of population health. However, despite its interpretive appeal, the lifetime risk of dying from maternal causes can be defined and calculated in various ways. A clear and concise discussion of both its underlying concept and methods of measurement is badly needed. Methods: I define and compare a variety of procedures for calculating the lifetime risk of maternal mortality. I use detailed survey data from Bangladesh in 2001 to illustrate these calculations and compare the properties of the various risk measures. Using official UN estimates of maternal mortality for 2005, I document the differences in lifetime risk derived with the various measures. Findings: Taking sub-Saharan Africa as an example, the range of estimates for the 2005 lifetime risk extends from 3.41% to 5.76%, or from 1 in 29 to 1 in 17. The highest value resulted from the method used for producing official UN estimates for the year 2000. The measure recommended here has an intermediate value of 4.47%, or 1 in 22. Conclusion: There are strong reasons to consider the calculation method proposed here more accurate and appropriate than earlier procedures. Accordingly, it was adopted for use in producing the 2005 UN estimates of the lifetime risk of maternal mortality. By comparison, the method used for the 2000 UN estimates appears to overestimate this important measure of population health by around 20%. PMID:19551233
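
As an illustration of the concept only (not the paper's recommended measure), one crude approximation multiplies cumulative fertility by maternal deaths per live birth:

```python
def lifetime_risk(cumulative_fertility, mm_ratio):
    # Crude lifetime risk of maternal death: births per woman over the
    # reproductive span times maternal deaths per live birth. This ignores
    # competing mortality, so it illustrates the concept only; the measure
    # recommended in the paper is more refined.
    return cumulative_fertility * mm_ratio

# Hypothetical population: 5 births per woman, 500 maternal deaths
# per 100,000 live births.
risk = lifetime_risk(5.0, 500 / 100_000)
print(f"{risk:.3f} (1 in {round(1 / risk)})")  # 0.025 (1 in 40)
```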

  13. An integral equation method for calculating sound field diffracted by a rigid barrier on an impedance ground.

    PubMed

    Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun

    2015-09-01

This paper proposes an alternative method for calculating the sound field diffracted by a rigid barrier, based on the integral equation method: a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined from the source inside it and the boundary conditions on the surface, and the diffracted sound field is then obtained by applying the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for the whole space and is also much easier to understand.

  14. Calculation of the dielectric properties of semiconductors

    NASA Astrophysics Data System (ADS)

    Engel, G. E.; Farid, Behnam

    1992-12-01

    We report on numerical calculations of the dynamical dielectric function in silicon, using a continued-fraction expansion of the polarizability and a recently proposed representation of the inverse dielectric function in terms of plasmonlike excitations. A number of important technical refinements to further improve the computational efficiency of the method are introduced, making the ab initio calculation of the full energy dependence of the dielectric function comparable in cost to calculation of its static value. Physical results include the observation of previously unresolved features in the random-phase approximated dielectric function and its inverse within the framework of density-functional theory in the local-density approximation, which may be accessible to experiment. We discuss the dispersion of plasmon energies in silicon along the Λ and Δ directions and find improved agreement with experiment compared to earlier calculations. We also present quantitative evidence indicating the degree of violation of the Johnson f-sum rule for the dielectric function due to the nonlocality of the one-electron potential used in the underlying band-structure calculations.

  15. Theoretical and experimental research on laser-beam homogenization based on metal gauze

    NASA Astrophysics Data System (ADS)

    Liu, Libao; Zhang, Shanshan; Wang, Ling; Zhang, Yanchao; Tian, Zhaoshuo

    2018-03-01

Homogenization of CO2 laser heating by means of a metal gauze is investigated theoretically and experimentally. The light-field distribution of an expanded beam passing through the metal gauze was calculated numerically using diffractive optical theory; comparison with the distribution obtained without the gauze shows that the method is effective. Experimentally, using a 30 W DC-discharge laser as the source and expanding the beam with a concave lens, beam intensity distributions recorded on thermal paper with and without the metal gauze were compared, and complementary measurements were made with a thermal imager. The experimental results agree with the theoretical calculation, and together they show that the homogeneity of CO2 laser heating can be enhanced by a metal gauze.

  16. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  17. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  18. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM−2.5.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  19. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

The bootstrapping technique is distribution-independent and provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, sample size estimation for comparing two parallel-design arms with continuous data by a bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the same data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, which shows that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was used to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the outset, and to employ in each bootstrap sample the same statistical method that will be used in the subsequent statistical analysis, provided historical data are available that are representative of the population to which the proposed trial will extrapolate.
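
A minimal sketch of the two routes compared in the abstract, assuming a two-sided 5% level and 80% power. The bootstrap arm below uses a large-sample z-test for simplicity, rather than the Wilcoxon test discussed in the paper; the pilot data are synthetic:

```python
import math
import random
import statistics

def normal_theory_n(delta, sigma):
    # Classic two-sample formula for n per arm at two-sided alpha = 0.05
    # and power = 0.80: n = 2 * (z_{0.975} + z_{0.80})^2 * (sigma/delta)^2.
    z_a, z_b = 1.959964, 0.841621  # standard normal quantiles
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

def bootstrap_power(pilot_a, pilot_b, n, n_boot=2000, seed=1):
    # Resample n observations per arm (with replacement) from the pilot
    # data and count how often a large-sample z-test rejects at the
    # two-sided 5% level; the rejection fraction estimates the power.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        a = rng.choices(pilot_a, k=n)
        b = rng.choices(pilot_b, k=n)
        se = math.sqrt(statistics.pvariance(a) / n + statistics.pvariance(b) / n)
        if abs(statistics.fmean(a) - statistics.fmean(b)) / se > 1.96:
            hits += 1
    return hits / n_boot

# Synthetic pilot data: two arms separated by about one standard deviation.
rng = random.Random(0)
pilot_a = [rng.gauss(0.0, 1.0) for _ in range(40)]
pilot_b = [rng.gauss(1.0, 1.0) for _ in range(40)]
print(normal_theory_n(delta=1.0, sigma=1.0))   # 16 per arm
print(bootstrap_power(pilot_a, pilot_b, n=16))  # empirical power at n = 16
```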

  20. Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2009-10-01

Spin contamination errors of the broken-symmetry (BS) method in the optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the geometry optimized by the BS method is compared with that from an approximate spin projection (AP) method. The difference between the BS and AP methods is about 10-20° in the HCH angle. To examine the basis set dependence of the spin contamination error, results calculated with STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis set, but the tendencies of the methods fall into two types. Calculated energy splittings between the triplet and singlet states (the ST gap) indicate that contamination by the stable triplet state lowers the BS singlet solution and makes the ST gap smaller. The spin contamination error in the ST gap is estimated to be on the order of 10^-1 eV.
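
The approximate spin projection (AP) correction mentioned above is commonly written in the Yamaguchi form; the sketch below assumes that form, with illustrative energies:

```python
def ap_energy(e_bs, e_hs, s2_bs, s2_hs):
    # Yamaguchi-type approximate spin projection: remove the high-spin
    # (here triplet) contaminant from a broken-symmetry (BS) energy.
    #   E_AP = E_BS + x * (E_BS - E_HS),
    #   x = <S^2>_BS / (<S^2>_HS - <S^2>_BS)
    x = s2_bs / (s2_hs - s2_bs)
    return e_bs + x * (e_bs - e_hs)

# A 50:50 singlet/triplet mixture (<S^2>_BS = 1.0, <S^2>_HS = 2.0)
# recovers E_singlet = 2*E_BS - E_triplet.
print(ap_energy(-10.0, -9.0, 1.0, 2.0))  # -11.0
```

The paper applies this kind of projection to geometry optimization, not just to single-point energies.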

  1. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, and some variation was noticed in the designs themselves, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used; such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph whose center corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.

  2. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations: The Case of the Green Fluorescent Protein.

    PubMed

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob

    2017-12-12

The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties, here exemplified by calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that it enables the use of much smaller QM regions than fixed-charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size, so the inclusion of effective external field effects in the MM region through polarizabilities is very important. This embedding scheme thus enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.

  3. Quasi-heterogeneous efficient 3-D discrete ordinates CANDU calculations using Attila

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preeti, T.; Rulko, R.

    2012-07-01

In this paper, 3-D quasi-heterogeneous large-scale parallel Attila calculations of a generic CANDU test problem consisting of 42 complete fuel channels and a reactivity device perpendicular to the fuel are presented. The solution method is discrete ordinates (SN), and the computational model is quasi-heterogeneous, i.e., the fuel bundle is partially homogenized into five homogeneous rings, consistent with the DRAGON code model used by the industry for incremental cross-section generation. The calculations used the HELIOS-generated 45-group macroscopic cross-section library. This approach to CANDU calculations has the following advantages: 1) it allows detailed bundle (and eventually channel) power calculations for each fuel ring in a bundle, 2) it allows exact representation of the reactivity device for a precise reactivity worth calculation, and 3) it eliminates the need for incremental cross-sections. Our results are compared to a reference Monte Carlo MCNP solution. In addition, the performance of the Attila SN method in CANDU calculations, which are characterized by significant upscattering, is discussed. (authors)

  4. SYSTEMS APPROACH TO RECOVERY AND REUSE OF ORGANIC MATERIAL FLOWS IN SANTA BARBARA COUNTY TO EXTRACT MAXIMUM VALUE AND ELIMINATE WASTE

    EPA Science Inventory

    The goal of the project is to calculate the net social, environmental, and economic benefits of a systems approach to organic waste and resource management in Santa Barbara County. To calculate these benefits, a comparative method was chosen of the proposed desi...

  5. The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor

    ERIC Educational Resources Information Center

    Gordon, James; Chancey, Katherine

    2005-01-01

    The experiment of determination of the percent of oxygen in air is performed in a general chemistry laboratory in which students compare the results calculated from the pressure measurements obtained with the calculator-based systems to those obtained in a water-measurement method. This experiment allows students to explore a fundamental reaction…

  6. The Hildebrand Solubility Parameters of Ionic Liquids—Part 2

    PubMed Central

    Marciniak, Andrzej

    2011-01-01

The Hildebrand solubility parameters have been calculated for eight ionic liquids. Retention data from inverse gas chromatography measurements of the activity coefficients at infinite dilution were used for the calculation. From the solubility parameters, the enthalpies of vaporization of the ionic liquids were estimated. The results are compared with solubility parameters estimated by different methods. PMID:21747694
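
For comparison, the textbook route to a Hildebrand parameter starts from the enthalpy of vaporization (the reverse of the estimation done in the paper, which works from IGC activity-coefficient data). A sketch with hypothetical numbers:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def hildebrand_delta(dh_vap, v_m, temp=298.15):
    # delta = sqrt((dHvap - RT) / Vm); with dHvap in J/mol and Vm in
    # cm^3/mol the result is in (J/cm^3)^0.5 = MPa^0.5. This is the
    # textbook definition from the cohesive energy density; the paper
    # instead estimates delta from retention data.
    return math.sqrt((dh_vap - R * temp) / v_m)

# Hypothetical ionic liquid: dHvap = 150 kJ/mol, Vm = 200 cm^3/mol.
print(round(hildebrand_delta(150_000.0, 200.0), 1))  # 27.2
```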

  7. Free energy and internal energy of electron-screened plasmas in a modified hypernetted-chain approximation

    NASA Astrophysics Data System (ADS)

    Perrot, F.

    1991-12-01

    We report results of Helmholtz-free-energy and internal-energy calculations using the modified hypernetted-chain (MHNC) equation method, in the formulation of Lado, Foiles, and Ashcroft [Phys. Rev. A 28, 2374 (1983)], for a model plasma of ions linearly screened by electrons. The results are compared with HNC calculations (no Bridge term), with variational calculations using a hard-spheres reference system, and with a numerical fit of Monte Carlo simulations.

  8. A method for estimating mount isolations of powertrain mounting systems

    NASA Astrophysics Data System (ADS)

    Qin, Wu; Shangguan, Wen-Bin; Luo, Guohai; Xie, Zhengchao

    2018-07-01

A method for calculating the isolation ratios of mounts in a powertrain mounting system (PMS) is proposed, assuming the powertrain is a rigid body and using identified powertrain excitation forces together with the measured input point inertance (IPI) of the mounting points on the body side. Using measured accelerations of the mounts on the powertrain and body sides of one vehicle (Vehicle A), the excitation forces of the powertrain are first identified with a conventional method. Another vehicle (Vehicle B) has the same powertrain as Vehicle A but a different body and mount configuration. The accelerations of the mounts on the powertrain side of the PMS of Vehicle B are calculated using the powertrain excitation forces identified from Vehicle A, and the identified forces are validated by comparing the calculated and measured accelerations on the powertrain side of Vehicle B. A method for calculating the acceleration of a mounting point on the body side of Vehicle B is then presented, using the identified powertrain excitation forces and the measured IPI at the connecting point between the car body and the mount. From the calculated accelerations of the mounts on the powertrain and body sides in different directions, the isolation ratios of the mounts are estimated. The isolation ratios are validated by experiment, which verifies the proposed methods. The developed method is useful for optimizing mount stiffness to meet isolation requirements before a prototype is built.
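
A common definition of a mount isolation ratio, assumed here rather than taken from the paper, is the dB ratio of active-side to passive-side acceleration:

```python
import math

def isolation_ratio_db(a_powertrain_side, a_body_side):
    # Isolation ratio of a mount in dB from the accelerations measured on
    # its powertrain (active) and body (passive) sides. This common
    # definition is an assumption; the paper may define the ratio differently.
    return 20.0 * math.log10(a_powertrain_side / a_body_side)

# A mount attenuating acceleration by a factor of 10 isolates by 20 dB.
print(round(isolation_ratio_db(1.0, 0.1), 1))  # 20.0
```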

  9. Incidence of the muffin-tin approximation on the electronic structure of large clusters calculated by the MS-LSD method: The typical case of C{sub 60}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Razafinjanahary, H.; Rogemond, F.; Chermette, H.

The MS-LSD method remains a method of interest when rapidity and small computer resources are required; its main drawback is some lack of accuracy, mainly due to the muffin-tin distribution of the potential. In the case of large clusters or molecules, the use of an empty sphere to fill, in part, the large intersphere region can greatly improve the results. Calculations on C{sub 60} have been undertaken to underline this trend because, on the one hand, the fullerenes offer a remarkable possibility to fit a large empty sphere in the center of the cluster and, on the other hand, numerous accurate calculations have already been published, allowing quantitative comparison. The authors' calculations suggest that, when an empty sphere is added, the results compare well with those of more accurate calculations. The calculated electron affinities for C{sub 60} and C{sub 60}{sup {minus}} are in reasonable agreement with experimental values, but the stability of C{sub 60}{sup 2-} in the gas phase is not found. 35 refs., 3 figs., 5 tabs.

  10. Sonic boom generated by a slender body aerodynamically shaded by a disk spike

    NASA Astrophysics Data System (ADS)

    Potapkin, A. V.; Moskvichev, D. Yu.

    2018-03-01

    The sonic boom generated by a slender body of revolution aerodynamically shaded by another body is numerically investigated. The aerodynamic shadow is created by a disk placed upstream of the slender body across a supersonic free-stream flow. The disk size and its position upstream of the body are chosen in such a way that the aerodynamically shaded flow is quasi-stationary. A combined method of phantom bodies is used for sonic boom calculations. The method is tested by calculating the sonic boom generated by a blunted body and comparing the results with experimental investigations of the sonic boom generated by spheres of various diameters in ballistic ranges and wind tunnels. The test calculations show that the method of phantom bodies is applicable for calculating far-field parameters of shock waves generated by both slender and blunted bodies. A possibility of reducing the shock wave intensity in the far field by means of the formation of the aerodynamic shadow behind the disk placed upstream of the body is estimated. The calculations are performed for the incoming flow with the Mach number equal to 2. The effect of the disk size on the sonic boom level is calculated.

  11. Relativistic equation-of-motion coupled-cluster method using open-shell reference wavefunction: Application to ionization potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pathak, Himadri, E-mail: hmdrpthk@gmail.com; Sasmal, Sudip, E-mail: sudipsasmal.chem@gmail.com; Vaval, Nayana

    2016-08-21

The open-shell reference relativistic equation-of-motion coupled-cluster method in its four-component description is successfully implemented with single- and double-excitation approximations using the Dirac-Coulomb Hamiltonian. As a first application, the implemented method is employed to calculate ionization potential values of heavy atomic (Ag, Cs, Au, Fr, and Lr) and molecular (HgH and PbF) systems, where the effect of relativity really matters for obtaining highly accurate results. Not only the relativistic effect but also the effect of electron correlation is crucial in these heavy atomic and molecular systems. To quantify how electron correlation affects the calculated values at different levels of theory, we have considered two further approximations within the four-component relativistic equation-of-motion framework. All calculated results are compared with the available experimental data as well as with other theoretically calculated values to judge the accuracy of our calculations.

  12. Development of a program for toric intraocular lens calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position.

    PubMed

    Eom, Youngsub; Ryu, Dongok; Kim, Dae Wook; Yang, Seul Ki; Song, Jong Suk; Kim, Sug-Whan; Kim, Hyo Myung

    2016-10-01

To evaluate toric intraocular lens (IOL) calculation considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and effective lens position (ELP). Two thousand samples of corneal parameters with keratometric astigmatism ≥ 1.0 D were obtained using bootstrap methods. The probability distributions for incision-induced keratometric and posterior corneal astigmatism, as well as ELP, were estimated from a literature review. The predicted residual astigmatism error using method D with an IOL add power calculator (IAPC) was compared with the errors derived using methods A, B, and C through Monte Carlo simulation. Method A considered keratometric astigmatism and incision-induced keratometric astigmatism; method B added posterior corneal astigmatism to method A; method C added incision-induced posterior corneal astigmatism to method B; and method D added ELP to method C. To verify the IAPC used in this study, the predicted toric IOL cylinder power and its axis from the IAPC were compared with ray-tracing simulation results. The median magnitude of the predicted residual astigmatism error using method D (0.25 diopters [D]) was smaller than that derived using methods A (0.42 D), B (0.38 D), and C (0.28 D), respectively. Linear regression analysis indicated that the predicted toric IOL cylinder power and its axis had excellent goodness of fit between the IAPC and the ray-tracing simulation. The IAPC is a simple but accurate method for predicting the toric IOL cylinder power and its axis considering posterior corneal astigmatism, incision-induced posterior corneal astigmatism, and ELP.

  13. Momentum distributions for H 2 ( e , e ' p )

    DOE PAGES

    Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.

    2014-12-29

[Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models that depend on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound-state wave functions, form factors, and final state interactions. We present a method to extract the momentum distribution from experimental cross sections which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] To test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which carries theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples we compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers, which are discussed in the text, but at a minimum this process provides an upper bound on the deuteron momentum distribution. Because this quantity relies on theoretical calculation, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.

  14. Glucose absorption in acute peritoneal dialysis.

    PubMed

    Podel, J; Hodelin-Wetzel, R; Saha, D C; Burns, G

    2000-04-01

During acute peritoneal dialysis (APD), the glucose in the dialysate solution is known to contribute significant calories. Glucose absorption is well documented in continuous ambulatory peritoneal dialysis (CAPD); in APD, however, it remains unclear how much glucose absorption actually occurs. Therefore, the purpose of this study was to determine whether it is appropriate to use the formula for calculating glucose absorption in CAPD (Grodstein et al.) in patients undergoing APD. Actual measurements of glucose absorption (Method I) were obtained in 9 patients admitted to the intensive care unit and undergoing APD treatment for >24 hours. Glucose absorption calculated with the Grodstein et al. formula (Method II) was also determined and compared with the actual measurements. The data were then further analyzed based on the factors that influence glucose absorption, specifically dwell time and concentration. The mean total amount of glucose absorbed was 43% +/- 15%. However, when dwell time and concentration were further examined, significant differences were noted: Method I showed a cumulative increase over time, whereas Method II treated absorption as fixed. This suggests that, given the variation in dwell time commonly seen in the acute care setting, Method II may not be accurate. For each of the two methods, a significant difference in glucose absorption was noted when comparing 1.5% and 4.25% dialysate concentrations. The established formula designed for CAPD should not be used for calculating glucose absorption in patients receiving APD, because variation in dwell time and concentration should be taken into account. Given the time and staffing required to calculate each exchange individually, combined with the results of the study, we recommend using a percentage estimate of 40% to 50%.
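
The two calculation styles contrasted in the study can be sketched as follows; the function and variable names, the 45% mid-point, and the use of 3.4 kcal/g (standard for dextrose monohydrate) are illustrative assumptions, not the study's formulas:

```python
def absorbed_kcal_measured(volume_l, dextrose_pct, glucose_out_g):
    # "Method I" style direct balance: instilled glucose minus glucose
    # recovered in the drained effluent, converted to kcal at 3.4 kcal/g
    # (dextrose monohydrate).
    glucose_in_g = volume_l * 10.0 * dextrose_pct  # 1 L of 1% holds 10 g
    return (glucose_in_g - glucose_out_g) * 3.4

def absorbed_kcal_fixed(volume_l, dextrose_pct, fraction=0.45):
    # Fixed-fraction estimate applied to the instilled load, using the
    # mid-point of the 40-50% range recommended in the study.
    return volume_l * 10.0 * dextrose_pct * fraction * 3.4

# 2 L of 4.25% dextrose with 50 g recovered in the effluent:
print(round(absorbed_kcal_measured(2.0, 4.25, 50.0), 1))  # 119.0
print(round(absorbed_kcal_fixed(2.0, 4.25), 1))
```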

  15. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  16. Evaluation of the validity of the Bolton Index using cone-beam computed tomography (CBCT)

    PubMed Central

    Llamas, José M.; Cibrián, Rosa; Gandía, José L.; Paredes, Vanessa

    2012-01-01

    Aims: To evaluate the reliability and reproducibility of calculating the Bolton Index using cone-beam computed tomography (CBCT), and to compare this with measurements obtained using the 2D Digital Method. Material and Methods: Traditional study models were obtained from 50 patients, which were then digitized in order to be able to measure them using the Digital Method. Likewise, CBCTs of those same patients were undertaken using the Dental Picasso Master 3D® and the images obtained were then analysed using the InVivoDental programme. Results: By determining the regression lines for both measurement methods, as well as the difference between both of their values, the two methods are shown to be comparable, despite the fact that the measurements analysed presented statistically significant differences. Conclusions: The three-dimensional models obtained from the CBCT are as accurate and reproducible as the digital models obtained from the plaster study casts for calculating the Bolton Index. The differences existing between both methods were clinically acceptable. Key words:Tooth-size, digital models, bolton index, CBCT. PMID:22549690
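The Bolton Index itself is a simple ratio of summed mesiodistal tooth widths, whichever imaging method supplies the measurements. A minimal sketch, assuming width lists obtained from either the digital models or the CBCT reconstructions (the norms of roughly 91.3% overall and 77.2% anterior are commonly cited textbook values, not results of this study):

```python
def bolton_ratio(mandibular_widths_mm, maxillary_widths_mm):
    """Bolton tooth-size ratio: summed mesiodistal widths of the
    mandibular teeth as a percentage of the maxillary sum."""
    return 100.0 * sum(mandibular_widths_mm) / sum(maxillary_widths_mm)
```

The study's question is whether the two measurement pipelines feed this ratio with equivalent width measurements.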

  17. A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.

    PubMed

    Doroudi, Alireza

    2012-01-01

    In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for calculation of axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in a rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only octopole superposition the resulting non-linear equation is symmetric; however, in the presence of hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used for solving the resulting non-linear equations. As a result, the ion secular frequencies as a function of non-linear field parameters are obtained. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only hexapole superposition, the results of this paper and the homotopy perturbation method are the same; with hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.

  18. Correlation between hippocampal volumes and medial temporal lobe atrophy in patients with Alzheimer's disease.

    PubMed

    Dhikav, Vikas; Duraiswamy, Sharmila; Anand, Kuljeet Singh

    2017-01-01

    Hippocampus undergoes atrophy in patients with Alzheimer's disease (AD). Hippocampal volumes can be calculated by a variety of methods using T1-weighted magnetic resonance imaging (MRI) of the brain. Medial temporal lobe atrophy (MTL) can be rated visually using T1-weighted MRI brain images. The present study was done to see whether any correlation existed between hippocampal volumes and visual rating scores of the MTL using the Scheltens Visual Rating Method. We screened 84 subjects who presented to the Department of Neurology of a Tertiary Care Hospital and enrolled forty subjects meeting the National Institute of Neurological and Communicative Disorders and Stroke, AD related Disease Association criteria. Selected patients underwent MRI of the brain, and T1-weighted images in a plane perpendicular to the long axis of the hippocampus were obtained. Hippocampal volumes were calculated manually using a standard protocol, and the calculated volumes were correlated with the Scheltens Visual Rating Method for rating MTL. A total of 32 cognitively normal age-matched subjects were selected to examine the same correlation in healthy subjects as well. Sensitivity and specificity of both methods were calculated and compared. There was an insignificant correlation between the hippocampal volumes and MTL rating scores in the cognitively normal elderly (n = 32; Pearson correlation coefficient = 0.16, P > 0.05). In the AD group, there was a moderately strong correlation between measured hippocampal volumes and MTL rating (Pearson's correlation coefficient = -0.54; P < 0.05), and a moderately strong correlation between hippocampal volume and Mini-Mental Status Examination. Manual delineation was superior compared to the visual method (P < 0.05). Good correlation was present between manual hippocampal volume measurements and MTL scores.
    Sensitivity and specificity of manual measurement of the hippocampus were higher compared to visual rating scores for MTL in patients with AD.

  19. Radiative gas dynamics of the Fire-II superorbital space vehicle

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2016-03-01

    The rates of convective and radiative heating of the Fire-II reentry vehicle are calculated, and the results are compared with experimental flight data. The computational model is based on solving a complete set of equations for (i) the radiative gas dynamics of a physically and chemically nonequilibrium viscous heat-conducting gas and (ii) radiative transfer in a 2D axisymmetric formulation. The spectral optical parameters of high-temperature gases are calculated using ab initio quasi-classical and quantum-mechanical methods. The transfer of selective thermal radiation in atomic lines is calculated using the line-by-line method on a specially generated computational grid that is nonuniform in radiation wavelength.

  20. Comparing maximum intercuspal contacts of virtual dental patients and mounted dental casts.

    PubMed

    Delong, Ralph; Ko, Ching-Chang; Anderson, Gary C; Hodges, James S; Douglas, W H

    2002-12-01

    Quantitative measures of occlusal contacts are of paramount importance in the study of chewing dysfunction. A tool is needed to identify and quantify occlusal parameters without occlusal interference caused by the technique of analysis. This laboratory simulation study compared occlusal contacts constructed from 3-dimensional images of dental casts and interocclusal records with contacts found by use of conventional methods. Dental casts of 10 completely dentate adults were mounted in a semi-adjustable Denar articulator. Maximum intercuspal contacts were marked on the casts using red film. Intercuspal records made with an experimental vinyl polysiloxane impression material recorded maximum intercuspation. Three-dimensional virtual models of the casts and interocclusal records were made using custom software and an optical scanner. Contacts were calculated between virtual casts aligned manually (CM), aligned with interocclusal records scanned seated on the mandibular casts (C1) or scanned independently (C2), and directly from virtual interocclusal records (IR). Sensitivity and specificity calculations used the marked contacts as the standard. Contact parameters were compared between method pairs. Statistical comparisons used analysis of variance and the Tukey-Kramer post hoc test (P<.05). Sensitivities (range 0.76-0.89) did not differ significantly among the 4 methods (P=.14); however, specificities (range 0.89-0.98) were significantly lower for IR (P=.0001). Contact parameters of methods CM, C1, and C2 differed significantly from those of method IR (P<.02). The ranking based on method pair comparisons was C2/C1 > CM/C1 = CM/C2 > C2/IR > CM/IR > C1/IR, where ">" means "closer than." Within the limits of this study, occlusal contacts calculated from aligned virtual casts accurately reproduce articulator contacts.

  1. Validity and reproducibility of a novel method for time-course evaluation of diet-induced thermogenesis in a respiratory chamber.

    PubMed

    Usui, Chiyoko; Ando, Takafumi; Ohkawara, Kazunori; Miyake, Rieko; Oshima, Yoshitake; Hibi, Masanobu; Oishi, Sachiko; Tokuyama, Kumpei; Tanaka, Shigeho

    2015-05-01

    We developed a novel method for computing diet-induced thermogenesis (DIT) in a respiratory chamber and evaluated the validity and reproducibility of the method. We hypothesized that DIT may be calculated as the difference between postprandial energy expenditure (EE) and estimated EE (sum of basal metabolic rate and physical activity (PA)-related EE). The estimated EE was derived from the regression equation between EE from respiration and PA intensity in the fasting state. It may be possible to evaluate the time course of DIT using this novel technique. In a validity study, we examined whether DIT became zero (the theoretical value) over 6 h of fasting in 11 subjects. The mean value of DIT calculated by the novel and traditional methods was 22.4 ± 13.4 and 3.4 ± 31.8 kcal/6 h, respectively. In the reproducibility study, 15 adult subjects lived in the respiratory chamber for over 24 h on two occasions. The DIT over 15 h of postprandial wake time was calculated. There were no significant differences in the mean values of DIT between the two test days. The within-subject day-to-day coefficient of variation for calculated DIT with the novel and traditional methods was approximately 35% and 25%, respectively. The novel method thus did not show superior reproducibility compared with the traditional method. However, given its smaller variation around the theoretical value (zero) in the fasting state, the novel method may be better for evaluating interindividual differences in DIT than the traditional method, and it also allows evaluation of the time course. © 2015 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
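The novel method described above amounts to regressing energy expenditure on physical activity intensity during fasting, then subtracting the predicted EE from the measured postprandial EE. A minimal sketch under that reading (function names and the synthetic numbers are illustrative, not the study's data):

```python
import numpy as np

def fit_fasting(pa_intensity, ee_fasting):
    """Fasting-period calibration: linear regression of chamber EE
    on PA intensity (intercept ~ basal metabolic rate)."""
    slope, intercept = np.polyfit(pa_intensity, ee_fasting, 1)
    return slope, intercept

def dit_time_course(pa_intensity, ee_postprandial, slope, intercept):
    """DIT(t) = measured postprandial EE minus EE predicted from the
    fasting regression (BMR + activity-related EE)."""
    predicted = slope * np.asarray(pa_intensity) + intercept
    return np.asarray(ee_postprandial) - predicted
```

Because the subtraction is done per time point, the residual traces out DIT over time, which the traditional fixed-fraction method cannot do.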

  2. A simplified analytical dose calculation algorithm accounting for tissue heterogeneity for low-energy brachytherapy sources.

    PubMed

    Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe

    2013-09-21

    The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculation, but for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity in Pd-103 brachytherapy. This correction factor is calculated as a function of the medium's linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source's internal structure. Ultimately, the dose in heterogeneous media can be calculated as the product of the dose in water, as calculated by the TG-43 protocol, times the ICF. To validate the ICF methodology, the dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system and does not require the detailed internal structure of the source or the photon phase-space.
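The abstract gives the dose relation (heterogeneous dose = TG-43 water dose × ICF) but not the ICF's functional form. A minimal sketch of that product, with a hypothetical first-order ICF built from the two coefficients the abstract names; this form is illustrative only and is not the paper's actual expression:

```python
import math

def icf_first_order(mu_tissue, mu_water, muen_rho_tissue, muen_rho_water, r_cm):
    """Hypothetical first-order inhomogeneity correction factor: ratio of
    mass energy-absorption coefficients times the differential attenuation
    accumulated over r_cm of tissue relative to water. (Illustrative form
    only; see the paper for the actual ICF.)"""
    return (muen_rho_tissue / muen_rho_water) * math.exp(-(mu_tissue - mu_water) * r_cm)

def dose_heterogeneous(dose_tg43_water, icf):
    # Dose in heterogeneous medium = TG-43 dose in water x ICF
    return dose_tg43_water * icf
```

Note that when the medium's coefficients equal water's, the ICF reduces to 1 and the TG-43 dose is recovered, as any consistent correction factor must.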

  3. Vibrational spectra, UV and NMR, first order hyperpolarizability and HOMO-LUMO analysis of 2-amino-4-chloro-6-methylpyrimidine.

    PubMed

    Jayavarthanan, T; Sundaraganesan, N; Karabacak, M; Cinar, M; Kurt, M

    2012-11-01

    The solid phase FTIR and FT-Raman spectra of 2-amino-4-chloro-6-methylpyrimidine (2A4Cl6MP) have been recorded in the regions 400-4000 and 50-4000 cm(-1), respectively. The spectra have been interpreted in terms of fundamental modes, combination and overtone bands. The structure of the molecule has been optimized and the structural characteristics have been determined by the density functional theory (B3LYP) method with the 6-311++G(d,p) basis set. The calculated vibrational frequencies were compared with the experimental frequencies, yielding good agreement between observed and calculated values. The infrared and Raman spectra have also been predicted from the calculated intensities. (1)H and (13)C NMR spectra were recorded, and (1)H and (13)C nuclear magnetic resonance chemical shifts of the molecule were calculated using the gauge independent atomic orbital (GIAO) method. The UV-Vis spectrum of the compound was recorded in the region 200-400 nm, and the HOMO and LUMO energies were computed using the time-dependent DFT (TD-DFT) approach. Nonlinear optical and thermodynamic properties were interpreted. All the calculated results were compared with the available experimental data of the title molecule. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Numerical calculation of protein-ligand binding rates through solution of the Smoluchowski equation using smooth particle hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Daily, Michael D.; Baker, Nathan A.

    2015-12-01

    We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to an acetylcholinesterase monomer. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) boundary condition, is considered on the reactive boundaries. This new boundary condition treatment allows for the analysis of enzymes with "imperfect" reaction rates. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.

  5. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement

    PubMed Central

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-01-01

    In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two available methods in the treatment planning system, collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between experimental results and the two calculation methods was obtained. Results indicate a maximum difference between the two methods of 12% in the lung and 3% in the bone tissue of the phantom, and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, as well as better agreement than the ETAR method in bone and lung tissues. PMID:22973081

  6. Performance of quantum Monte Carlo for calculating molecular bond lengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au

    2016-03-28

    This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of (3 ± 2) × 10^-3 Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only (4.0 ± 0.9) × 10^-3 Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.

  7. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
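Constraint lumping with the KS function, as used above, replaces many constraints with one smooth, conservative envelope so that few adjoint solves are needed. A minimal sketch of the discrete KS aggregate (the paper uses a continuum version not reproduced here):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraints g_i <= 0:
    KS(g) = max(g) + (1/rho) * log(sum_i exp(rho * (g_i - max(g)))).
    A smooth upper bound on max(g) that tightens as rho grows; shifting
    by max(g) keeps the exponentials from overflowing."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho
```

Differentiating this single scalar with the adjoint method yields sensitivities of the whole constraint set at the cost of one adjoint solve per load case.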

  8. Semi-analytic valuation of stock loans with finite maturity

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoping; Putri, Endah R. M.

    2015-10-01

    In this paper we study stock loans of finite maturity with different dividend distributions semi-analytically, using the analytical approximation method in Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework, and the Laplace transform method is used to solve them. The optimal exit price and stock loan value are obtained in Laplace space, and values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method several examples are presented, and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.
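Recovering time-domain values by numerical Laplace inversion, as the abstract describes, can be sketched with the Gaver-Stehfest algorithm, one standard choice for real-valued transforms (the paper may use a different inversion scheme):

```python
from math import factorial, log, exp

def stehfest_coeffs(n=12):
    """Gaver-Stehfest weights V_k for even n."""
    half = n // 2
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)) / (
                factorial(half - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    a = log(2.0) / t
    return a * sum(Vk * F(k * a) for k, Vk in enumerate(stehfest_coeffs(n), 1))
```

For smooth transforms such as an option-value surface in Laplace space, a dozen evaluations of F typically suffice; e.g. F(s) = 1/(s+1) inverts to e^(-t) to several digits.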

  9. Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.

  10. Mean-field approximation for spacing distribution functions in classical systems.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society
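One of the comparison baselines above, the Wigner surmise, has a simple closed form for the nearest-neighbor spacing distribution. A minimal sketch (GOE form, normalized to unit mean spacing; the paper's "extended" surmise generalizes this to higher-order spacings):

```python
import math

def wigner_surmise(s):
    """GOE Wigner surmise p(s) = (pi/2) s exp(-pi s^2 / 4),
    normalized with mean spacing 1."""
    return 0.5 * math.pi * s * math.exp(-0.25 * math.pi * s * s)
```

A quick midpoint-rule quadrature confirms the distribution integrates to 1 and has unit mean, which is the normalization convention used when comparing spacing statistics across methods.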

  11. Calculation of left ventricular volumes and ejection fraction from dynamic cardiac-gated 15O-water PET/CT: 5D-PET.

    PubMed

    Nordström, Jonny; Kero, Tanja; Harms, Hendrik Johannes; Widström, Charles; Flachskampf, Frank A; Sörensen, Jens; Lubberink, Mark

    2017-11-14

    Quantitative measurement of myocardial blood flow (MBF) is of increasing interest in the clinical assessment of patients with suspected coronary artery disease (CAD). 15O-water positron emission tomography (PET) is considered the gold standard for non-invasive MBF measurements. However, calculation of left ventricular (LV) volumes and ejection fraction (EF) is not possible from standard 15O-water uptake images. The purpose of the present work was to investigate the possibility of calculating LV volumes and LVEF from cardiac-gated parametric blood volume (VB) 15O-water images and from first pass (FP) images. Sixteen patients with mitral or aortic regurgitation underwent an eight-gate dynamic cardiac-gated 15O-water PET/CT scan and cardiac MRI. VB and FP images were generated for each gate. Calculations of end-systolic volume (ESV), end-diastolic volume (EDV), stroke volume (SV) and LVEF were performed with automatic segmentation of VB and FP images, using commercially available software. LV volumes and LVEF were calculated with surface-, count-, and volume-based methods, and the results were compared with gold standard MRI. Using VB images, high correlations between PET and MRI ESV (r = 0.89, p < 0.001), EDV (r = 0.85, p < 0.001), SV (r = 0.74, p = 0.006) and LVEF (r = 0.72, p = 0.008) were found for the volume-based method. Correlations for FP images were slightly, but not significantly, lower than those for VB images when compared to MRI. Surface- and count-based methods showed no significant difference compared with the volume-based correlations with MRI. The volume-based method showed the best agreement with MRI, with no significant difference on average for EDV and LVEF but with an overestimation of values for ESV (14%, p = 0.005) and SV (18%, p = 0.004) when using VB images. Using FP images, none of the parameters showed a significant difference from MRI. Inter-operator repeatability was excellent for all parameters (ICC > 0.86, p < 0.001).
    Calculation of LV volumes and LVEF from dynamic 15O-water PET is feasible and shows good correlation with MRI. However, the analysis method is laborious, and future work is needed for more automation to make the method more easily applicable in a clinical setting.
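The final step from segmented volumes to LVEF is elementary arithmetic on the end-diastolic and end-systolic volumes. A minimal sketch (the numbers in the usage comment are illustrative, not the study's data):

```python
def lv_function(edv_ml, esv_ml):
    """Stroke volume and ejection fraction from LV volumes:
    SV = EDV - ESV; LVEF = 100 * SV / EDV."""
    sv = edv_ml - esv_ml
    lvef = 100.0 * sv / edv_ml
    return sv, lvef

# Illustrative values: EDV 120 mL, ESV 48 mL -> SV 72 mL, LVEF 60%
sv, lvef = lv_function(120.0, 48.0)
```

The study's comparisons concern how well the gated-PET segmentation recovers EDV and ESV; once those are in hand, SV and LVEF follow directly.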

  12. Comparison of the Young-Laplace law and finite element based calculation of ventricular wall stress: implications for postinfarct and surgical ventricular remodeling.

    PubMed

    Zhang, Zhihong; Tendulkar, Amod; Sun, Kay; Saloner, David A; Wallace, Arthur W; Ge, Liang; Guccione, Julius M; Ratcliffe, Mark B

    2011-01-01

    Both the Young-Laplace law and finite element (FE) based methods have been used to calculate left ventricular wall stress. We tested the hypothesis that the Young-Laplace law is able to reproduce results obtained with the FE method. Magnetic resonance imaging scans with noninvasive tags were used to calculate three-dimensional myocardial strain in 5 sheep 16 weeks after anteroapical myocardial infarction, and in 1 of those sheep 6 weeks after a Dor procedure. Animal-specific FE models were created from the remaining 5 animals using magnetic resonance images obtained at early diastolic filling. The FE-based stress in the fiber, cross-fiber, and circumferential directions was calculated and compared to stress calculated with the assumption that wall thickness is much less than the radius of curvature (Young-Laplace law), and without that assumption (modified Laplace). First, circumferential stress calculated with the modified Laplace law is closer to results obtained with the FE method than stress calculated with the Young-Laplace law. However, there are pronounced regional differences, with the largest difference between modified Laplace and FE occurring in the inner and outer layers of the infarct borderzone. Also, stress calculated with the modified Laplace law is very different from stress in the fiber and cross-fiber directions calculated with FE. As a consequence, the modified Laplace law is inaccurate when used to calculate the effect of the Dor procedure on regional ventricular stress. The FE method is necessary to determine stress in the left ventricle with postinfarct and surgical ventricular remodeling. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
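The thin-wall Young-Laplace estimate compared above has a one-line form. A minimal sketch (the paper's "modified Laplace" drops the thin-wall assumption; its exact expression is not reproduced here):

```python
def young_laplace_stress(pressure, radius, thickness):
    """Thin-wall Young-Laplace estimate of ventricular wall stress:
    sigma = P * r / (2 * h). Valid only when the wall thickness h is
    much less than the radius of curvature r, an assumption that
    breaks down for the thick-walled left ventricle."""
    return pressure * radius / (2.0 * thickness)
```

With cavity pressure 10 kPa, radius 30 mm and wall thickness 10 mm, the estimate is 15 kPa; note that h/r = 1/3 here, so the thin-wall assumption underlying the formula is already marginal, consistent with the paper's conclusion that FE analysis is needed.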

  13. An approximate method for calculating three-dimensional inviscid hypersonic flow fields

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Dejarnette, Fred R.

    1990-01-01

    An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.

  14. Controlling sign problems in spin models using tensor renormalization

    NASA Astrophysics Data System (ADS)

    Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.; Qin, M. P.; Xiang, T.; Xie, Z. Y.; Yu, J. F.; Zou, Haiyuan

    2014-01-01

    We consider the sign problem for classical spin models at complex β = 1/g0^2 on L × L lattices. We show that the tensor renormalization group method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the tensor renormalization group method. We check the convergence of the tensor renormalization group method for the O(2) model on L × L lattices when the number of states Ds increases. We show that the finite size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volume. The location of these zeros agrees with Monte Carlo reweighting calculations for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.

  15. Predicting the equilibrium solubility of solid polycyclic aromatic hydrocarbons and dibenzothiophene using a combination of MOSCED plus molecular simulation or electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Phifer, Jeremy R.; Cox, Courtney E.; da Silva, Larissa Ferreira; Nogueira, Gabriel Gonçalves; Barbosa, Ana Karolyne Pereira; Ley, Ryan T.; Bozada, Samantha M.; O'Loughlin, Elizabeth J.; Paluch, Andrew S.

    2017-06-01

    Methods to predict the equilibrium solubility of non-electrolyte solids are important for the design of novel separation processes. Here we demonstrate how conventional molecular simulation free energy calculations, or electronic structure calculations in a continuum solvent (here SMD or SM8), can be used to predict parameters for the MOdified Separation of Cohesive Energy Density (MOSCED) method. The method is applied to the solutes naphthalene, anthracene, phenanthrene, pyrene and dibenzothiophene, compounds of interest to the petroleum industry and for environmental remediation. Adopting the melting point temperature and enthalpy of fusion of these compounds from experiment, we are able to predict equilibrium solubilities. Comparing to a total of 422 non-aqueous and 193 aqueous experimental solubilities, we find the proposed method correlates the data well. The use of MOSCED is additionally advantageous because it is a solubility-parameter-based method useful for intuitive solvent selection and formulation.
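Combining experimental melting properties with a predicted activity coefficient, as described above, leads to the standard solid-liquid equilibrium relation. A minimal sketch (the naphthalene numbers in the usage note are approximate literature values, not taken from this paper):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def solid_solubility(dh_fus, t_melt, t, ln_gamma):
    """Mole-fraction solubility of a solid solute from its enthalpy of
    fusion and melting temperature plus the solute activity coefficient
    (e.g., from MOSCED):
    ln x = -(dHfus / R) * (1/T - 1/Tm) - ln(gamma)."""
    return math.exp(-dh_fus / R * (1.0 / t - 1.0 / t_melt) - ln_gamma)
```

For naphthalene (dHfus ~ 19 kJ/mol, Tm ~ 353.4 K) at 298 K in an ideal solvent (ln gamma = 0), this gives an ideal solubility of roughly x = 0.3; real solvents shift this through the MOSCED-predicted activity coefficient.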

  16. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761

  17. Comparison of RCS prediction techniques, computations and measurements

    NASA Astrophysics Data System (ADS)

    Brand, M. G. E.; Vanewijk, L. J.; Klinker, F.; Schippers, H.

    1992-07-01

    Three calculation methods to predict radar cross sections (RCS) of three dimensional objects are evaluated by computing the radar cross sections of a generic wing inlet configuration. The following methods are applied: a three dimensional high frequency method, a three dimensional boundary element method, and a two dimensional finite difference time domain method. The results of the computations are compared with the data of measurements.

  18. Estimating the Octanol/Water Partition Coefficient for Aliphatic Organic Compounds Using Semi-Empirical Electrotopological Index

    PubMed Central

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945
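    A single-descriptor model of this kind amounts to a linear regression of log P on the index. A minimal sketch, assuming a simple least-squares fit; the descriptor values and log P data below are invented for illustration and are not the paper's dataset:

```python
import numpy as np

# Hypothetical single-descriptor linear model for log P, in the spirit of the
# I_SET approach described above. All numbers are illustrative only.
iset = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 7.5])   # hypothetical I_SET values
logp = np.array([0.9, 1.6, 1.9, 2.5, 3.3, 3.7])   # hypothetical experimental log P

# Fit log P = a * I_SET + b by ordinary least squares.
a, b = np.polyfit(iset, logp, 1)
pred = a * iset + b

# Goodness of fit: correlation coefficient r and standard error of estimate s,
# the two statistics quoted in the abstract.
r = np.corrcoef(logp, pred)[0, 1]
s = np.sqrt(np.sum((logp - pred) ** 2) / (len(logp) - 2))
```

    A real application would fit the published model coefficients against measured log P values rather than this toy data.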

  19. Estimating the octanol/water partition coefficient for aliphatic organic compounds using semi-empirical electrotopological index.

    PubMed

    Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca

    2011-01-01

    A new possibility for estimating the octanol/water partition coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I(SET)). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the I(SET) in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P.

  20. Hybrid Theory of P-Wave Electron-Hydrogen Elastic Scattering

    NASA Technical Reports Server (NTRS)

    Bhatia, Anand

    2012-01-01

    We report on a study of electron-hydrogen scattering, using a combination of a modified method of polarized orbitals and the optical potential formalism. The calculation is restricted to P waves in the elastic region, where the correlation functions are of Hylleraas type. It is found that the phase shifts are not significantly affected by the modification of the target function by a method similar to the method of polarized orbitals and they are close to the phase shifts calculated earlier by Bhatia. This indicates that the correlation function is general enough to include the target distortion (polarization) in the presence of the incident electron. The important fact is that in the present calculation, to obtain similar results only 35-term correlation function is needed in the wave function compared to the 220-term wave function required in the above-mentioned previous calculation. Results for the phase shifts, obtained in the present hybrid formalism, are rigorous lower bounds to the exact phase shifts.

  1. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE PAGES

    MacDonald, M. J.; Vorberger, J.; Gamboa, E. J.; ...

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. Furthermore, the effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  2. Fast calculation of low altitude disturbing gravity for ballistics

    NASA Astrophysics Data System (ADS)

    Wang, Jianqiang; Wang, Fanghao; Tian, Shasha

    2018-03-01

    Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. Using the adjusted spherical cap harmonic (ASCH) method, the spherical cap coordinates are projected into global coordinates, and the non-integer-degree associated Legendre functions (ALF) of SCH are replaced by the integer-degree ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were conducted to test its effectiveness. The results of an Earth gravity model were taken as the theoretical observations, and a model of the regional gravity field was constructed by the new method. Simulation results show that the approximation errors are less than 5 mGal in the low-altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speeds of the SH, SCH and VSH models, and the results show that the calculation speed of the VSH model is higher by one order of magnitude over a small region.

  3. New method for estimation of fluence complexity in IMRT fields and correlation with gamma analysis

    NASA Astrophysics Data System (ADS)

    Hanušová, T.; Vondráček, V.; Badraoui-Čuprová, K.; Horáková, I.; Koniarová, I.

    2015-01-01

    A new method for estimation of fluence complexity in Intensity Modulated Radiation Therapy (IMRT) fields is proposed. Unlike other previously published works, it is based on portal images calculated by the Portal Dose Calculation algorithm in Eclipse (version 8.6, Varian Medical Systems) in the plane of the EPID aS500 detector (Varian Medical Systems). Fluence complexity is given by the number and the amplitudes of dose gradients in these matrices. Our method is validated using a set of clinical plans where fluence has been smoothed manually so that each plan has a different level of complexity. Fluence complexity calculated with our tool is in accordance with the different levels of smoothing as well as results of gamma analysis, when calculated and measured dose matrices are compared. Thus, it is possible to estimate plan complexity before carrying out the measurement. If appropriate thresholds are determined which would distinguish between acceptably and overly modulated plans, this might save time in the re-planning and re-measuring process.
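    A gradient-count complexity score of the kind described above can be sketched in a few lines. This is a minimal illustration of the idea (count dose gradients and weight them by amplitude), not the authors' Eclipse/EPID implementation; the dose matrices and the gradient threshold are assumptions:

```python
import numpy as np

# Toy gradient-based complexity score for a 2-D fluence/dose matrix:
# count pixels with a "significant" gradient and weight by mean amplitude.
def fluence_complexity(dose, grad_threshold=0.05):
    gy, gx = np.gradient(dose.astype(float))   # per-pixel dose gradients
    grad_mag = np.hypot(gx, gy)
    steep = grad_mag > grad_threshold          # pixels with a significant gradient
    n_gradients = int(steep.sum())
    mean_amplitude = float(grad_mag[steep].mean()) if n_gradients else 0.0
    return n_gradients * mean_amplitude        # one possible scalar complexity

# A smooth field should score lower than a highly modulated one.
smooth = np.outer(np.hanning(64), np.hanning(64))
rng = np.random.default_rng(0)
modulated = smooth + 0.3 * rng.random((64, 64))
```

    With thresholds calibrated against gamma-analysis results, such a score could flag overly modulated plans before measurement, as the abstract suggests.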

  4. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
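    The first steps of the chain above (pressure signature, frequency transform, one-third-octave bands) can be sketched as follows. The sample rate, N-wave parameters, and band range are illustrative assumptions, and the final perceived-level weighting is not reproduced here:

```python
import numpy as np

# Idealized N-wave pressure signature (assumed parameters).
fs = 8000.0                      # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)
duration = 0.3                   # N-wave duration, s
p = np.where(t < duration, 1.0 - 2.0 * t / duration, 0.0)  # Pa, ramp +1 to -1

# Transform to the frequency domain.
spec = np.abs(np.fft.rfft(p)) ** 2
freqs = np.fft.rfftfreq(len(p), 1 / fs)

# Collapse power into one-third-octave bands with centres 10**(n/10) Hz.
centres = 10 ** (np.arange(1, 34) / 10.0)              # ~1.26 Hz to ~2 kHz
edges = centres * 2 ** (np.array([-1, 1])[:, None] / 6.0)
band_power = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in zip(edges[0], edges[1])])
```

    A loudness metric would then apply an auditory frequency weighting to `band_power` and sum the weighted bands into a single perceived level in dB.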

  5. Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study

    ERIC Educational Resources Information Center

    Suero, Manuel; Privado, Jesús; Botella, Juan

    2017-01-01

    A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d and "C" of the signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, "d'" and "c." Those methods have been mostly assessed by…

  6. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares validation and multifactor data analysis, is presented. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated by the new method from selected experimental titration curves are also compared with those from the equivalence-point category of methods, such as Gran or Fortuin.
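    The end-point location step described above, finding the extremum of the first derivative by inverse parabolic interpolation, has a closed-form solution. A minimal sketch on a synthetic sigmoid titration curve (the data and the tanh shape are illustrative, not the paper's):

```python
import numpy as np

def parabola_vertex(x, y):
    """Abscissa of the vertex of the parabola through three points (closed form)."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den

# Synthetic titration curve: pH vs titrant volume, end point at 12.5 mL.
v = np.linspace(10, 15, 51)
ph = 7.0 + 3.0 * np.tanh((v - 12.5) / 0.4)

# First-derivative estimates; interpolate through the three points around the maximum.
dph = np.gradient(ph, v)
i = int(np.argmax(dph))
end_point = parabola_vertex(v[i - 1:i + 2], dph[i - 1:i + 2])
```

    The interpolated end point refines the grid-resolution estimate `v[i]` without any iterative search, which is the appeal of the analytical procedure.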

  7. An online supervised learning method based on gradient descent for spiking neurons.

    PubMed

    Xu, Yan; Yang, Jing; Zhong, Shuiming

    2017-09-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in current research. Although existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of the synaptic weights as soon as the neuron emits a spike during its running process. We analyze and synthesize desired and actual output spikes to select appropriate input spikes in the calculation of the weight adjustment. The experimental results show that our method markedly improves learning performance compared with the offline learning manner and has a certain advantage in learning accuracy compared with other learning methods. This stronger learning ability means that the method has a large pattern storage capacity.

  8. Association of height, body weight, age, and corneal diameter with calculated intraocular lens strength of adult horses.

    PubMed

    Mouney, Meredith C; Townsend, Wendy M; Moore, George E

    2012-12-01

    To determine whether differences exist in the calculated intraocular lens (IOL) strengths of a population of adult horses and to assess the associations between calculated IOL strength and horse height, body weight, and age, and between calculated IOL strength and corneal diameter. 28 clinically normal adult horses (56 eyes). Axial globe lengths and anterior chamber depths were measured ultrasonographically. Corneal curvatures were determined with a modified photokeratometer and brightness-mode ultrasonographic images. Data were used in the Binkhorst equation to calculate the predicted IOL strength for each eye. The calculated IOL strengths were compared with a repeated-measures ANOVA. Corneal curvature values (photokeratometer vs brightness-mode ultrasonographic images) were compared with a paired t test. Coefficients of determination were used to measure associations. Calculated IOL strengths (range, 15.4 to 30.1 diopters) differed significantly among horses. There was a significant difference in the corneal curvatures as determined via the 2 methods. Weak associations were found between calculated IOL strength and horse height and between calculated IOL strength and vertical corneal diameter. Calculated IOL strength differed significantly among horses. Because only weak associations were detected between calculated IOL strength and horse height and vertical corneal diameter, these factors would not serve as reliable indicators for selection of the IOL strength for a specific horse.
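    The IOL-strength step above can be sketched with the commonly cited form of the Binkhorst equation for emmetropia, P = 1336(4r - a) / ((a - d)(4r - d)), with axial length a, corneal radius r, and anterior chamber depth d in millimetres. The formula form and the equine biometry values below are assumptions for illustration, not the study's data:

```python
# Binkhorst IOL power (diopters), common emmetropia form; 1336 is 1000 times
# the assumed refractive index of aqueous/vitreous (1.336). Inputs in mm.
def binkhorst_iol_power(axial_len_mm, corneal_radius_mm, acd_mm):
    a, r, d = axial_len_mm, corneal_radius_mm, acd_mm
    return 1336.0 * (4.0 * r - a) / ((a - d) * (4.0 * r - d))

# Illustrative horse-eye biometry: long axial length and flat cornea
# compared with the human eye (values assumed, not measured).
power = binkhorst_iol_power(axial_len_mm=39.0, corneal_radius_mm=17.0, acd_mm=5.5)
```

    With these assumed inputs the predicted power falls inside the 15.4 to 30.1 diopter range reported in the abstract, which illustrates why equine IOLs require far higher powers than typical human lenses.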

  9. Time-Domain Receiver Function Deconvolution using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.

    2017-12-01

    Receiver functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces by applying a deconvolution to the radial and vertical seismogram components. A popular method uses a spectral division of the two components, which requires human intervention to apply the water-level procedure that avoids instabilities from division by small numbers. One of the most widely used methods is an iterative procedure that estimates the RF peaks, convolves them with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work, a deconvolution algorithm is proposed that uses a genetic algorithm (GA) to estimate the RF peaks. The method operates entirely in the time domain, avoiding the time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. The estimated peaks can be used to generate RF traces in seismogram format for visualization. The RF trace quality is similar for high-magnitude events, but there are fewer failures in the RF calculation for smaller events, increasing the overall performance for stations with a high number of events.
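    The idea above, estimating spike parameters in the time domain so that the spike convolved with the vertical component reproduces the radial component, can be illustrated with a toy genetic algorithm. One spike only, synthetic traces, and a bare-bones elitist GA; none of this is the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
vertical = np.exp(-0.5 * ((np.arange(n) - 30) / 6.0) ** 2)  # synthetic Z wavelet
true_delay, true_amp = 50, 0.6
radial = true_amp * np.roll(vertical, true_delay)            # R = spike applied to Z

def misfit(delay, amp):
    """Time-domain misfit between predicted and observed radial components."""
    pred = amp * np.roll(vertical, int(delay))
    return float(np.sum((pred - radial) ** 2))

# GA over (delay, amplitude) candidates: elitist truncation selection
# with Gaussian mutation of the survivors.
pop = np.column_stack([rng.integers(0, n, 60).astype(float),
                       rng.uniform(0.0, 1.0, 60)])
for _ in range(80):
    fitness = np.array([misfit(d, a) for d, a in pop])
    parents = pop[np.argsort(fitness)[:12]]                  # keep the 12 best
    children = parents[rng.integers(0, 12, 48)] + rng.normal(0.0, [2.0, 0.05], (48, 2))
    children[:, 0] = np.clip(np.round(children[:, 0]), 0, n - 1)
    children[:, 1] = np.clip(children[:, 1], 0.0, 1.0)
    pop = np.vstack([parents, children])

best_delay, best_amp = pop[int(np.argmin([misfit(d, a) for d, a in pop]))]
```

    A real RF deconvolution would estimate many spikes jointly, but the same time-domain misfit drives the search, with no frequency-domain division and hence no water-level tuning.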

  10. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols released into the atmosphere scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Calculations of anthropogenic Aerosol Optical Depth (AOD) are therefore important in climate change research. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD contributed by particulates with diameters smaller than 1 μm relative to all particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing during 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant-truncation-radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean deviation in AMF of 0.028. The parameterization method also effectively corrects the AMF underestimation in winter. It is suggested that variations of the Angstrom index in the coarse mode have significant impacts on AMF inversions.
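    The spectral ingredient behind such retrievals can be illustrated with a strongly simplified two-mode partition: the Angstrom exponent of a mixture is the AOD-weighted mean of the modal exponents, so with assumed fine- and coarse-mode exponents a fine-mode (accumulation) fraction follows from one spectral slope. This is a simplification in the spirit of spectral deconvolution, not the paper's parameterized algorithm, and all numbers are illustrative:

```python
import numpy as np

# Hypothetical total AOD at two wavelengths.
wavelengths = np.array([440.0, 870.0])      # nm
aod = np.array([0.62, 0.30])

# Angstrom exponent from the spectral slope of AOD.
alpha = -np.log(aod[0] / aod[1]) / np.log(wavelengths[0] / wavelengths[1])

# Assumed modal Angstrom exponents (illustrative): fine ~1.8, coarse ~0.2.
# Since alpha is the AOD-weighted mean of the modal exponents, the fine-mode
# fraction is recovered by linear inversion.
alpha_fine, alpha_coarse = 1.8, 0.2
amf = (alpha - alpha_coarse) / (alpha_fine - alpha_coarse)
```

    Real deconvolution algorithms let the modal exponents vary spectrally, which is exactly where the coarse-mode Angstrom variations mentioned in the abstract enter.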

  11. Chalcogen analogues of nicotine lactam studied by NMR, FTIR, DFT and X-ray methods

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Beata; Malczewska-Jaskóła, Karolina; Kowalczyk, Iwona; Warżajtis, Beata; Rychlewska, Urszula

    2014-07-01

    The selenoanalogue of nicotine has been synthesized and characterized by spectroscopic and X-ray diffraction methods. The crystals of selenonicotine are isomorphic with the thionicotine homologue and consist of molecules engaged in columnar π⋯π stacking interactions between antiparallel pyridine moieties. These interactions, absent in other crystals containing nicotine fragments, seem to be induced by the presence of a lactam group. The molecular structures in vacuum of the oxo-, thio- and selenonicotine homologues have been calculated by the DFT method and compared with the available X-ray data. The delocalized structure of thionicotine is stabilized by an intramolecular C–H⋯S hydrogen bond, which becomes weaker in the partially zwitterionic resonance structure of selenonicotine in favor of multiple intermolecular C–H⋯Se hydrogen bonds. The calculated data allow a complete assignment of the vibration modes in the solid-state FTIR spectra. The 1H and 13C NMR chemical shifts were calculated by the GIAO method at the B3LYP/6-311G(3df) level. A comparison between the experimental and calculated theoretical results indicates that the density functional B3LYP method provides satisfactory results for predicting the FTIR and 1H and 13C NMR spectral properties.

  12. Comparison of the performance of different DFT methods in the calculations of the molecular structure and vibration spectra of serotonin (5-hydroxytryptamine, 5-HT)

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Gao, Hongwei

    2012-04-01

    Serotonin (5-hydroxytryptamine, 5-HT) is a monoamine neurotransmitter which plays an important role in treating acute or clinical stress. The comparative performance of different density functional theory (DFT) methods with various basis sets in predicting the molecular structure and vibration spectra of serotonin is reported. The calculation results of different methods, including mPW1PW91, HCTH, SVWN, PBEPBE, B3PW91 and B3LYP, with various basis sets, including LANL2DZ, SDD, LANL2MB, 6-31G, 6-311++G and 6-311+G*, were compared with the experimental data. Notably, the SVWN/6-311++G and SVWN/6-311+G* levels give the best predictions of the structure of serotonin. The results also indicate that the PBEPBE/LANL2DZ level shows better performance in predicting the vibration spectra of serotonin than the other DFT methods.

  13. Dielectric constant extraction of graphene nanostructured on SiC substrates from spectroscopy ellipsometry measurement using Gauss–Newton inversion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maulina, Hervin; Santoso, Iman, E-mail: iman.santoso@ugm.ac.id; Subama, Emmistasega

    2016-04-19

    The dielectric constant of nanostructured graphene on SiC substrates has been extracted from spectroscopic ellipsometry measurements using the Gauss-Newton inversion (GNI) method. This study calculates the dielectric constant and refractive index of graphene by extracting the values of ψ and Δ from the spectroscopic ellipsometry measurement with the GNI method and compares them with a previous result extracted using the Drude-Lorentz (DL) model. The results show that the GNI method can be used to calculate the dielectric constant and refractive index of nanostructured graphene on SiC substrates faster than the DL model. Moreover, the imaginary part of the dielectric constant and the extinction coefficient increase drastically at 4.5 eV, similar to the values extracted using the known DL fitting. The increase is attributed to the interband transition process and the interaction between electrons and electron-holes at the M-points in the Brillouin zone of graphene.

  14. A new potential energy surface of the OH2+ system and state-to-state quantum dynamics studies of the O+ + H2 reaction.

    PubMed

    Li, Wentao; Yuan, Jiuchuang; Yuan, Meiling; Zhang, Yong; Yao, Minghai; Sun, Zhigang

    2018-01-03

    A new global potential energy surface (PES) of the O+ + H2 system was constructed with the permutation invariant polynomial neural network method, using about 63 000 ab initio points calculated with the multi-reference configuration interaction method and the aug-cc-pVTZ and aug-cc-pVQZ basis sets. To improve the accuracy of the PES, the basis set was extrapolated to the complete basis set limit by the two-point extrapolation method. The root mean square error of the fit was only 5.28 × 10⁻³ eV. The spectroscopic constants of the diatomic molecules were calculated and compared with previous theoretical and experimental results, and the present results agree well with experiment. On the newly constructed PES, reaction dynamics studies were performed using the time-dependent wave packet method. The calculated integral cross sections (ICSs) were compared with the available theoretical and experimental results, showing good agreement with the experimental data. Significant forward and backward scattering was observed over the whole collision energy region studied, while the differential cross sections were biased toward forward scattering, especially at higher collision energies.

  15. The Living Planet Index: using species population time series to track trends in biodiversity

    PubMed Central

    Loh, Jonathan; Green, Rhys E; Ricketts, Taylor; Lamoreux, John; Jenkins, Martin; Kapos, Valerie; Randers, Jorgen

    2005-01-01

    The Living Planet Index was developed to measure the changing state of the world's biodiversity over time. It uses time-series data to calculate average rates of change in a large number of populations of terrestrial, freshwater and marine vertebrate species. The dataset contains about 3000 population time series for over 1100 species. Two methods of calculating the index are outlined: the chain method and a method based on linear modelling of log-transformed data. The dataset is analysed to compare the relative representation of biogeographic realms, ecoregional biomes, threat status and taxonomic groups among species contributing to the index. The two methods show very similar results: terrestrial species declined on average by 25% from 1970 to 2000. Birds and mammals are over-represented in comparison with other vertebrate classes, and temperate species are over-represented compared with tropical species, but there is little difference in representation between threatened and non-threatened species. Some of the problems arising from over-representation are reduced by the way in which the index is calculated. It may be possible to reduce this further by post-stratification and weighting, but new information would first need to be collected for data-poor classes, realms and biomes. PMID:15814346
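    The chain method outlined above can be sketched in a few lines: average the log10 interannual rates of change across populations, then chain the antilogs into an index. The three population time series below are invented for illustration, not Living Planet data:

```python
import numpy as np

# Rows: populations; columns: consecutive survey years (invented numbers).
populations = np.array([
    [100.0, 95.0, 90.0, 88.0],
    [ 40.0, 42.0, 39.0, 35.0],
    [ 10.0,  9.0,  8.5,  8.0],
])

# Mean log10 rate of change per interval, averaged over populations.
log_rates = np.log10(populations[:, 1:] / populations[:, :-1]).mean(axis=0)

# Chain the index: I_t = I_{t-1} * 10**mean_log_rate_t, starting at 1.0.
index = np.concatenate(([1.0], 10 ** np.cumsum(log_rates)))
```

    Averaging in log space makes the index a geometric mean of population trends, so a doubling and a halving cancel rather than averaging to a spurious increase.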

  16. Analysis on pseudo excitation of random vibration for structure of time flight counter

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Li, Dapeng

    2015-03-01

    Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The Pseudo-Excitation Method (PEM) is an effective method for the calculation of random vibration. Because of the complicated, coupled random vibration during rocket or shuttle launch, a new staged white-noise mathematical model is deduced according to the practical launch environment. This deduced model is applied with PEM to the specific structure of a Time-of-Flight Counter (ToFC). The power spectral density responses and the relevant dynamic characteristic parameters of the ToFC are obtained at the flight acceptance test level. Considering the stiffness of the fixture structure, random vibration experiments are conducted in three directions for comparison with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without any damage, and the key dynamical parameters of the ToFC are obtained. The comparative results prove that the revised PEM agrees with the random vibration experiments in both dynamical parameters and responses; the maximum error is within 9%. The sources of error are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures during rocket or shuttle launch.
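    The core trick of PEM can be shown on a single-degree-of-freedom oscillator: replace a stationary random input of spectral density S_xx(ω) with the deterministic pseudo excitation sqrt(S_xx) e^{iωt}; the response PSD is then just the squared magnitude of the pseudo response. The oscillator parameters and the white-noise PSD level below are assumptions for illustration:

```python
import numpy as np

# SDOF oscillator under stationary random forcing (assumed units).
m, c, k = 1.0, 0.4, 100.0            # mass, damping, stiffness
omega = np.linspace(0.1, 30.0, 500)  # frequency grid, rad/s
S_xx = np.full_like(omega, 0.01)     # white-noise force PSD (assumed level)

# Frequency response H(w) and the pseudo response to sqrt(S_xx)*exp(i*w*t).
H = 1.0 / (k - m * omega**2 + 1j * c * omega)
y_pseudo = H * np.sqrt(S_xx)

# The response PSD is |y_pseudo|^2, identical to the classical |H|^2 * S_xx,
# but obtained from a deterministic harmonic analysis.
S_yy = np.abs(y_pseudo) ** 2
```

    For large coupled models the same substitution turns a random-vibration analysis into a sweep of deterministic harmonic analyses, which is where PEM's efficiency comes from.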

  17. Continuous correction of differential path length factor in near-infrared spectroscopy

    PubMed Central

    Moore, Jason H.; Diamond, Solomon G.

    2013-01-01

    In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method. PMID:23640027
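    The modified Beer-Lambert step described above reduces, at two wavelengths, to a small linear solve for the two chromophore changes. A minimal sketch; the extinction coefficients, DPFs, and source-detector separation are assumed illustrative values, and the paper's EKF estimation of DPF is not reproduced:

```python
import numpy as np

# Rows: wavelengths (~690 nm, ~830 nm); columns: [HbO2, HbR] extinction
# coefficients (assumed illustrative values, arbitrary consistent units).
eps = np.array([[1.49, 3.84],
                [2.53, 1.80]])
dpf = np.array([6.5, 5.8])   # differential path length factors (assumed)
L = 3.0                      # source-detector separation, cm (assumed)

def mbll_concentrations(delta_od):
    """Solve delta_OD = (eps * L * DPF) @ delta_C for delta_C = [dHbO2, dHbR]."""
    A = eps * (L * dpf)[:, None]   # effective path length scales each wavelength row
    return np.linalg.solve(A, delta_od)

# Forward-simulate a known concentration change and recover it.
true_dc = np.array([0.8, -0.3])                # [dHbO2, dHbR], arbitrary units
delta_od = (eps * (L * dpf)[:, None]) @ true_dc
recovered = mbll_concentrations(delta_od)
```

    Because the DPFs enter the system matrix directly, any error in them biases both recovered concentrations at once, which is exactly the cross-talk the EKF approach above is designed to suppress.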

  18. Continuous correction of differential path length factor in near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Talukdar, Tanveer; Moore, Jason H.; Diamond, Solomon G.

    2013-05-01

    In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method.

  19. Development of a neural network technique for KSTAR Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seung Hun, E-mail: leesh81@nfri.re.kr; Lee, J. H.; Yamada, I.

    Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Controlling tokamak plasmas in real time requires measuring the plasma parameters in situ. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.

  20. Density functional theory calculations of III-N based semiconductors with mBJLDA

    NASA Astrophysics Data System (ADS)

    Gürel, Hikmet Hakan; Akıncı, Özden; Ünlü, Hilmi

    2017-02-01

    In this work, we present first-principles calculations based on the full-potential linearized augmented plane-wave (FP-LAPW) method of the structural and electronic properties of III-N semiconductors such as GaN, AlN and InN in the zinc-blende cubic structure. First-principles calculations using the local density approximation (LDA) and the generalized gradient approximation (GGA) underestimate the band gap. We apply the modified Becke-Johnson local density approximation (MBJLDA) potential, which combines the modified Becke-Johnson exchange potential with the LDA correlation potential, to obtain band gap results in better agreement with experiment. We compared various exchange-correlation potentials (LSDA, GGA, HSE and MBJLDA) in determining the band gaps and structural properties of these semiconductors, and show that the MBJLDA potential gives the best agreement with the experimental band gaps of III-N based semiconductors.

Top