Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.
ERIC Educational Resources Information Center
Jackson, David P.
1996-01-01
Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…
A Simple Method for Calculating Clebsch-Gordan Coefficients
ERIC Educational Resources Information Center
Klink, W. H.; Wickramasekara, S.
2010-01-01
This paper presents a simple method for calculating Clebsch-Gordan coefficients for the tensor product of two unitary irreducible representations (UIRs) of the rotation group. The method also works for multiplicity-free irreducible representations appearing in the tensor product of any number of UIRs of the rotation group. The generalization to…
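For readers who want to cross-check individual coefficients, SymPy's physics module evaluates Clebsch-Gordan coefficients symbolically; a minimal sketch (SymPy uses the standard closed-form expression, not the method of this paper):

```python
# Cross-check of individual coefficients with SymPy (SymPy evaluates the
# standard closed-form expression; this is not the paper's method).
from sympy import S
from sympy.physics.quantum.cg import CG

# <j1 m1; j2 m2 | j3 m3> for two spin-1/2 UIRs coupled to j3 = 1, m3 = 0
cg = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0)
print(cg.doit())  # sqrt(2)/2
```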
A simple method for calculating the characteristics of the Dutch roll motion of an airplane
NASA Technical Reports Server (NTRS)
Klawans, Bernard B
1956-01-01
A simple method for calculating the characteristics of the Dutch roll motion of an airplane is obtained by arranging the lateral equations of motion in such form and order that an iterative process is quickly convergent.
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions
ERIC Educational Resources Information Center
McCaffrey, John G.
2009-01-01
A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms is present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…
New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous-energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.
A simple performance calculation method for LH2/LOX engines with different power cycles
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1973-01-01
A simple method for the calculation of the specific impulse of an engine with a gas generator cycle is presented. The solution is obtained by a power balance between turbine and pump. Approximate equations for the performance of the combustion products of LH2/LOX are derived. Performance results are compared with solutions of different engine types.
Periodicity of microfilariae of human filariasis analysed by a trigonometric method (Aikat and Das).
Tanaka, H
1981-04-01
The microfilarial periodicity of human filariae was characterized statistically by fitting the observed change of microfilaria (mf) counts to the formula of a simple harmonic wave using two parameters, the peak hour (K) and periodicity index (D) (Sasa & Tanaka, 1972, 1974). Later, Aikat and Das (1976) proposed a simple calculation method using trigonometry (A-D method) to determine the peak hour (K) and a periodicity index (P). In the present study, all data on microfilarial periodicity previously analysed by the method of Sasa and Tanaka (S-T method) were recalculated by the A-D method to evaluate the latter method. The results showed that P was not proportional to D, and the ratios P/D were mostly smaller than expected, especially when P or D was small in less periodic forms. The peak hour calculated by the A-D method did not differ much from that calculated by the S-T method. Goodness of fit was improved slightly by the A-D method in two thirds of the analysed data. The classification of human filariae with respect to the type of periodicity was, however, changed little by the results calculated by the A-D method.
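The sketch below reproduces the general harmonic-fitting step: it fits m(t) = M + A·cos(w(t − K)) to hourly counts by linear least squares and reports D = 100·A/M following the Sasa-Tanaka convention described above; the exact A-D trigonometric formulas are in the cited paper and are not reproduced here.

```python
# Least-squares fit of m(t) = M + A*cos(w*(t - K)) to hourly mf counts
# (24 h period). This sketches the general harmonic-fitting step; the
# exact A-D trigonometric formulas are in the cited paper. D is taken
# as 100*A/M (amplitude as a percentage of the mean).
import numpy as np

def harmonic_fit(hours, counts):
    w = 2.0 * np.pi / 24.0
    X = np.column_stack([np.ones_like(hours), np.cos(w * hours), np.sin(w * hours)])
    (M, a, b), *_ = np.linalg.lstsq(X, counts, rcond=None)
    A = np.hypot(a, b)
    K = (np.arctan2(b, a) / w) % 24.0   # peak hour
    return M, A, K, 100.0 * A / M       # last value: periodicity index D

hours = np.arange(0.0, 24.0, 2.0)
counts = 50.0 + 40.0 * np.cos(2.0 * np.pi / 24.0 * (hours - 1.0))
print(harmonic_fit(hours, counts))      # K ~ 1.0, D ~ 80
```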
NASA Astrophysics Data System (ADS)
Lisienko, V. G.; Malikov, G. K.; Titaev, A. A.
2014-12-01
The paper presents a new simple-to-use expression for calculating the total emissivity of a mixture of the gases CO2 and H2O, used for modeling radiative heat transfer in industrial furnaces. The accuracy of this expression is evaluated against the exponential wide band model. It is found that computing the total emissivity with this expression takes 1.5 times less time than with other approximation methods.
Calculation of Temperature Rise in Calorimetry.
ERIC Educational Resources Information Center
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
A conceptually and computationally simple method for the definition, display, quantification, and comparison of the shapes of three-dimensional mathematical molecular models is presented. Molecular or solvent-accessible volume and surface area can also be calculated. Algorithms, ...
A Recursive Method for Calculating Certain Partition Functions.
ERIC Educational Resources Information Center
Woodrum, Luther; And Others
1978-01-01
Describes a simple recursive method for calculating the partition function and average energy of a system consisting of N electrons and L energy levels. Also, presents an efficient APL computer program to utilize the recursion relation. (Author/GA)
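The APL program is not reproduced in the abstract; the sketch below illustrates one standard recursion of this type in Python, under the assumption that each of the L levels holds at most one electron.

```python
# Sketch of a recursion of this type (assumption: each of the L levels
# holds at most one electron; the paper's APL program is not reproduced):
# Z(n, l) = Z(n, l-1) + exp(-beta * e_l) * Z(n-1, l-1).
import math
from functools import lru_cache

def partition_function(levels, n_electrons, beta):
    @lru_cache(maxsize=None)
    def Z(n, l):
        if n == 0:
            return 1.0
        if n > l:
            return 0.0
        return Z(n, l - 1) + math.exp(-beta * levels[l - 1]) * Z(n - 1, l - 1)
    return Z(n_electrons, len(levels))

levels = (0.0, 0.5, 1.0, 1.5)   # level energies, arbitrary units
beta = 2.0
Z = partition_function(levels, 2, beta)
# Average energy from a finite difference: E = -d(ln Z)/d(beta)
h = 1e-6
E = -(math.log(partition_function(levels, 2, beta + h)) - math.log(Z)) / h
print(Z, E)
```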
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
ERIC Educational Resources Information Center
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
Larsen, Ross E.
2016-04-12
In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
Hierarchical emotion calculation model for virtual human modelling - biomed 2010.
Zhao, Yue; Wright, David
2010-01-01
This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations and a simple heuristic decision-making mechanism, which enables virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can use the information in virtual memory and the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. Those emotion states are important internal references for virtual humans to adopt appropriate behaviours and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision-making process in order to make the virtual humans' decision making more like that of a real human. A data interface that connects the emotion calculation and the decision-making structure has also been designed and simulated to test the method in a Virtools environment.
SLTCAP: A Simple Method for Calculating the Number of Ions Needed for MD Simulation.
Schmit, Jeremy D; Kariyawasam, Nilusha L; Needham, Vince; Smith, Paul E
2018-04-10
An accurate depiction of electrostatic interactions in molecular dynamics requires the correct number of ions in the simulation box to capture screening effects. However, the number of ions that should be added to the box is seldom given by the bulk salt concentration because a charged biomolecule solute will perturb the local solvent environment. We present a simple method for calculating the number of ions that requires only the total solute charge, solvent volume, and bulk salt concentration as inputs. We show that the most commonly used method for adding salt to a simulation results in an effective salt concentration that is too high. These findings are confirmed using simulations of lysozyme. We have established a web server where these calculations can be readily performed to aid simulation setup.
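As a rough illustration of the inputs involved (total solute charge, solvent volume, bulk salt concentration), the sketch below implements a screening-layer formula of the kind the paper describes; the exact SLTCAP expression should be taken from the paper or web server, so treat the formula here as an assumption.

```python
# Sketch of a screening-layer ion count of the kind SLTCAP computes
# (assumed form: N± = n0*(sqrt(1 + q^2) ∓ q) with q = Q/(2*n0), Q the
# net solute charge in units of e; verify against the paper/web server
# before relying on it).
import math

AVOGADRO = 6.02214076e23

def ion_counts(solute_charge_e, solvent_volume_L, conc_mol_per_L):
    n0 = conc_mol_per_L * solvent_volume_L * AVOGADRO  # bulk ion pairs
    q = solute_charge_e / (2.0 * n0)
    root = math.sqrt(1.0 + q * q)
    return round(n0 * (root - q)), round(n0 * (root + q))  # (cations, anions)

# e.g. a +8 e protein in ~2e-22 L of solvent at 150 mM salt
print(ion_counts(+8, 2e-22, 0.150))  # anions minus cations equals +8
```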
A Simple Method to Estimate Photosynthetic Radiation Use Efficiency of Canopies
ROSATI, A.; METCALF, S. G.; LAMPINEN, B. D.
2004-01-01
• Background and Aims Photosynthetic radiation use efficiency (PhRUE) over the course of a day has been shown to be constant for leaves throughout a general canopy where nitrogen content (and thus photosynthetic properties) of leaves is distributed in relation to the light gradient. It has been suggested that this daily PhRUE can be calculated simply from the photosynthetic properties of a leaf at the top of the canopy and from the PAR incident on the canopy, which can be obtained from weather‐station data. The objective of this study was to investigate whether this simple method allows estimation of PhRUE of different crops and with different daily incident PAR, and also during the growing season. • Methods The PhRUE calculated with this simple method was compared with that calculated with a more detailed model, for different days in May, June and July in California, on almond (Prunus dulcis) and walnut (Juglans regia) trees. Daily net photosynthesis of 50 individual leaves was calculated as the daylight integral of the instantaneous photosynthesis. The latter was estimated for each leaf from its photosynthetic response to PAR and from the PAR incident on the leaf during the day. • Key Results Daily photosynthesis of individual leaves of both species was linearly related to the daily PAR incident on the leaves (which implies constant PhRUE throughout the canopy), but the slope (i.e. the PhRUE) differed between the species, over the growing season due to changes in photosynthetic properties of the leaves, and with differences in daily incident PAR. When PhRUE was estimated from the photosynthetic light response curve of a leaf at the top of the canopy and from the incident radiation above the canopy, obtained from weather‐station data, the values were within 5 % of those calculated with the more detailed model, except in five out of 34 cases. • Conclusions The simple method of estimating PhRUE is valuable as it simplifies calculation of canopy photosynthesis to a multiplication between the PAR intercepted by the canopy, which can be obtained with remote sensing, and the PhRUE calculated from incident PAR, obtained from standard weather‐station data, and from the photosynthetic properties of leaves at the top of the canopy. The latter properties are the sole crop parameters needed. While being simple, this method describes the differences in PhRUE related to crop, season, nutrient status and daily incident PAR. PMID:15044212
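A minimal sketch of the simple estimate described above: integrate an assumed light-response curve for a top-of-canopy leaf over a day of incident PAR, then divide by the daily PAR integral; the hyperbola form and parameter values are hypothetical placeholders, not the paper's fitted values.

```python
# Sketch of the simple PhRUE estimate with an assumed rectangular-
# hyperbola light response P(I) = Pmax*I/(I + K) for a top-of-canopy
# leaf; Pmax and K are hypothetical placeholders, not fitted values.
import numpy as np

def daily_phrue(par, p_max=25.0, k_half=400.0):
    """par: incident PAR on the top leaf over one day (umol m-2 s-1)."""
    assim = p_max * par / (par + k_half)   # instantaneous photosynthesis
    return assim.sum() / par.sum()         # mol CO2 per mol photons

t = np.arange(0.0, 24 * 3600.0, 600.0)     # 10-minute steps over a day
par = np.clip(2000.0 * np.sin(np.pi * (t - 6 * 3600) / (12 * 3600)), 0.0, None)
phrue = daily_phrue(par)
print(phrue)  # canopy photosynthesis ~ phrue * PAR intercepted by canopy
```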
A non-iterative twin image elimination method with two in-line digital holograms
NASA Astrophysics Data System (ADS)
Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young
2018-02-01
We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Wang, Xudong; Xu, Liuwei; Fu, Peng; Li, Ji; Wu, Yanan
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
The graphical method requires calculations and a table lookup. The distribution is established from only three points: the upper and lower confidence bounds of the mean and the lower confidence bound of the standard deviation. The method requires only a few calculations with simple equations. The graphical procedure establishes a best-fit line for the measured data and bounds for the selected confidence level and any distribution percentile.
The added mass forces in insect flapping wings.
Liu, Longgui; Sun, Mao
2018-01-21
The added mass forces of three-dimensional (3D) flapping wings of some representative insects, and the accuracy of the often-used simple two-dimensional (2D) method, are studied. The added mass force of a flapping wing is calculated by both the 3D and 2D methods, and the total aerodynamic force of the wing is calculated by the CFD method. Our findings are as follows. The added mass force makes a significant contribution to the total aerodynamic force of the flapping wings during and near the stroke reversals, and the smaller the stroke amplitude, the larger the added mass force becomes. Thus the added mass force cannot be neglected when using simple models to estimate the aerodynamic force, especially for insects with relatively small stroke amplitudes. The accuracy of the often-used simple 2D method is reasonably good: when the aspect ratio of the wing is greater than about 3.3, the error in the added mass force calculation due to the 2D assumption is less than 9%; even when the aspect ratio is 2.8 (approximately the smallest for an insect), the error is no more than 13%.
NASA Astrophysics Data System (ADS)
Youn, Younghan; Koo, Jeong-Seo
The complete evaluation of the side vehicle structure and occupant protection is only possible by means of a full-scale side impact crash test. However, auto parts manufacturers such as door trim makers cannot conduct such a test, especially while the vehicle is still under development. The main objective of this study is to obtain design guidelines from a simple component-level impact test. The relationship between the target absorption energy and the impactor speed was examined using the energy absorbed by the door trim, since each vehicle type requires a different energy level on the door trim. A simple impact test method was developed to estimate abdominal injury by measuring the reaction force of the impactor; the reaction force is converted to an energy level by the proposed formula. The target absorption energy for the door trim alone and the impact speed of the simple impactor are derived theoretically from the conservation of energy. With the calculated speed of the dummy and the effective mass of the abdomen, the energy allocated to the abdomen area of the door trim was calculated. The impactor speed can be calculated from the equivalent energy absorbed by the door trim during the full crash test. The proposed design procedure for the door trim by a simple impact test method was demonstrated for evaluating abdominal injury. This paper also describes a study conducted to determine the sensitivity of several design factors for reducing abdominal injury values using an orthogonal-array matrix method. In conclusion, with theoretical considerations and empirical test data, the main objective, standardization of door trim design using the simple impact test method, was achieved.
VET Program Completion Rates: An Evaluation of the Current Method. Occasional Paper
ERIC Educational Resources Information Center
National Centre for Vocational Education Research (NCVER), 2016
2016-01-01
This work asks one simple question: "how reliable is the method used by the National Centre for Vocational Education Research (NCVER) to estimate projected rates of VET program completion?" In other words, how well do early projections align with actual completion rates some years later? Completion rates are simple to calculate with a…
Coherent Anomaly Method Calculation on the Cluster Variation Method. II.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
1991-10-01
The critical exponents of the bond percolation model are calculated in the D(=2, 3, …)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of the critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of a ghost field without recourse to the s→1 limit of the s-state Potts model.
[FQA: A method for floristic quality assessment based on conservatism of plant species].
Cao, Li Juan; He, Ping; Wang, Mi; Xui, Jie; Ren, Ying
2018-04-01
FQA, which uses the conservatism of plant species for particular habitats and the species richness of plant communities, is a rapid method for the assessment of habitat quality. The method is based on the species composition of quadrats and on coefficients of conservatism assigned to species by experts. The Floristic Quality Index (FQI), which reflects the vegetation integrity and degradation of a site, can be calculated by a simple formula and used for space-time comparison of habitat quality. It has been widely used in more than ten countries, including the United States and Canada. This paper presents the principle, calculation formulas and application cases of this method, with the aim of providing a simple, repeatable and comparable method of assessing habitat quality for ecological managers and researchers.
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE), based on the concept of interval extensions and using the original equations of motion, is presented. The exponent is estimated from the slope of the line derived from the lower bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement, and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested on five well-known systems: the logistic, Hénon, Lorenz and Rössler equations and the Mackey-Glass system.
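The idea is easy to reproduce for a map: iterate two algebraically equivalent forms ("interval extensions") of the same equation in floating point and read the LLE off the growth rate of their difference. A sketch for the logistic map, with the linear-regime window chosen ad hoc:

```python
# Sketch of the lower-bound-error idea for the logistic map: iterate two
# algebraically equivalent forms ("interval extensions") of the same map
# in floating point and estimate the LLE from the slope of log|difference|
# versus iteration number while the gap is still growing.
import numpy as np

r, x, y = 4.0, 0.3, 0.3
points = []
for n in range(60):
    x = r * x * (1.0 - x)    # extension 1
    y = r * y - r * y * y    # extension 2 (same map, different rounding)
    d = abs(x - y)
    if 0.0 < d < 1e-2:       # keep the linear-growth regime only
        points.append((n, np.log(d)))

ns, logd = np.array(points).T
lle = np.polyfit(ns, logd, 1)[0]  # slope ~ largest Lyapunov exponent
print(lle)                        # for r = 4 the LLE is ln 2 ~ 0.693
```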
Methods for estimating 2D cloud size distributions from 1D observations
Romps, David M.; Vogelmann, Andrew M.
2017-08-04
The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.
NASA Astrophysics Data System (ADS)
Yulkifli; Afandi, Zurian; Yohandri
2018-04-01
A measurement of gravitational acceleration using the simple-harmonic-motion pendulum method, digital technology and a photogate sensor has been developed. Digital technology is more practical and optimizes the time of experimentation. The pendulum method calculates the acceleration of gravity using a solid ball connected by a rope to a stand. The pendulum is swung at a small angle, resulting in simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes the digital data into the timing of the pendulum oscillation. The calculated pendulum oscillation time is displayed on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively. Based on the experimental data, the system can be operated in physics experiments, especially for determining the acceleration due to gravity.
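The gravity calculation behind such an instrument is the small-angle pendulum relation T = 2π√(L/g); a minimal sketch:

```python
# The small-angle relation behind the instrument: T = 2*pi*sqrt(L/g),
# so g = 4*pi^2*L/T^2, with T supplied by the photogate/Arduino timing.
import math

def gravity_from_period(length_m, period_s):
    return 4.0 * math.pi ** 2 * length_m / period_s ** 2

print(gravity_from_period(0.50, 1.42))  # ~9.79 m/s^2
```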
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, G.A.; Pack, R.T
1978-02-15
A simple, direct derivation of the rotational infinite order sudden (IOS) approximation in molecular scattering theory is given. Connections between simple scattering amplitude formulas, choice of average partial wave parameter, and magnetic transitions are reviewed. Simple procedures for calculating cross sections for specific transitions are discussed and many older model formulas are given clear derivations. Total (summed over rotation) differential, integral, and transport cross sections, useful in the analysis of many experiments involving nonspherical molecules, are shown to be exceedingly simple: They are just averages over the potential angle of cross sections calculated using simple structureless spherical particle formulas and programs. In the case of vibrationally inelastic scattering, the IOSA, without further approximation, provides a well-defined way to get fully three dimensional cross sections from calculations no more difficult than collinear calculations. Integral, differential, viscosity, and diffusion cross sections for He-CO2 obtained from the IOSA and a realistic intermolecular potential are calculated as an example and compared with experiment. Agreement is good for the complete potential but poor when only its spherical part is used, so that one should never attempt to treat this system with a spherical model. The simplicity and accuracy of the IOSA make it a viable method for routine analysis of experiments involving collisions of nonspherical molecules.
NASA Technical Reports Server (NTRS)
Staubert, R.
1985-01-01
Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
A Simple Method for Nucleon-Nucleon Cross Sections in a Nucleus
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Cucinotta, Francis A.; Wilson, John W.
1999-01-01
A simple reliable formalism is presented for obtaining nucleon-nucleon cross sections within a nucleus in nuclear collisions for a given projectile and target nucleus combination at a given energy for use in transport, Monte Carlo, and other calculations. The method relies on extraction of these values from experiments and has been tested and found to give excellent results.
Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori
2018-05-21
In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
Simple and universal model for electron-impact ionization of complex biomolecules
NASA Astrophysics Data System (ADS)
Tan, Hong Qi; Mi, Zhaohong; Bettiol, Andrew A.
2018-03-01
We present a simple and universal approach to calculate the total ionization cross section (TICS) for electron impact ionization in DNA bases and other biomaterials in the condensed phase. Evaluating the electron impact TICS plays a vital role in ion-beam radiobiology simulation at the cellular level, as secondary electrons are the main cause of DNA damage in particle cancer therapy. Our method is based on extending the dielectric formalism. The calculated results agree well with experimental data and show a good comparison with other theoretical calculations. This method only requires information of the chemical composition and density and an estimate of the mean binding energy to produce reasonably accurate TICS of complex biomolecules. Because of its simplicity and great predictive effectiveness, this method could be helpful in situations where the experimental TICS data are absent or scarce, such as in particle cancer therapy.
Programmable calculator uses equation to figure steady-state gas-pipeline flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmberg, E.
Because it is accurate and consistent over a wide range of variables, the Colebrook-White (C-W) formula serves as the basis for many methods of calculating turbulent flow in gas pipelines. Oilconsult reveals a simple way to adapt the C-W formula to calculate steady-state pipeline flow using the TI-59 programmable calculator.
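The TI-59 program is not reproduced; as an illustration of the underlying computation, the classical Colebrook-White friction-factor iteration (written here in Python rather than TI-59 keystrokes) is:

```python
# Fixed-point iteration for the classical Colebrook-White friction
# factor, 1/sqrt(f) = -2*log10(rel_roughness/3.7 + 2.51/(Re*sqrt(f))),
# written in Python rather than TI-59 keystrokes.
import math

def colebrook_white(reynolds, rel_roughness, tol=1e-10):
    inv_sqrt_f = 4.0  # initial guess for 1/sqrt(f)
    while True:
        new = -2.0 * math.log10(rel_roughness / 3.7
                                + 2.51 * inv_sqrt_f / reynolds)
        if abs(new - inv_sqrt_f) < tol:
            return 1.0 / new ** 2
        inv_sqrt_f = new

print(colebrook_white(1e6, 1e-4))  # Darcy friction factor, ~0.0134
```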
NASA Astrophysics Data System (ADS)
Kuzenov, V. V.; Ryzhkov, S. V.
2017-02-01
The paper formulates an engineering physical-mathematical model for the aerothermodynamics of a hypersonic flight vehicle (HFV) in laminar and turbulent boundary layers (the model is designed for an approximate estimate of the convective heat flow in the speed range M = 6-28 and the altitude range H = 20-80 km). 2D calculations of convective heat flows for bodies of simple geometric form (individual elements of the HFV design) are presented.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While peak doses of all dose calculation methods agreed within less than 4% deviations, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20 × 20 mm² field on a 3.4 GHz and 8 GByte RAM processor remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
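A sketch of the estimation step described above, with illustrative synthetic signals: the weight factor is the least-squares scale between the second derivative of the magnetic waveform and the measured acceleration, and the displacement estimate is that factor times the magnetic waveform.

```python
# Sketch of the gauge's estimation step with synthetic signals: the
# weight factor w is the least-squares scale between the second
# derivative of the magnetic (coil) waveform and the measured
# acceleration; w times the magnetic waveform estimates displacement.
import numpy as np

def estimate_depth(magnetic, accel, dt):
    m_dd = np.gradient(np.gradient(magnetic, dt), dt)  # d2(magnetic)/dt2
    w = np.dot(m_dd, accel) / np.dot(m_dd, m_dd)       # least-squares weight
    return w * magnetic                                # estimated displacement

dt = 0.01
t = np.arange(0.0, 2.0, dt)
true_depth = 0.025 * (1.0 - np.cos(2.0 * np.pi * 2.0 * t))  # 0-5 cm, 2 Hz
magnetic = 400.0 * true_depth                # coil signal ~ force ~ depth
accel = np.gradient(np.gradient(true_depth, dt), dt)  # accelerometer
print(estimate_depth(magnetic, accel, dt).max())      # ~0.05 m
```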
Nakamura, T; Uwamino, Y
1986-02-01
The neutron leakage from medical and industrial electron accelerators has become an important problem, and its detection and shielding are being performed in their facilities. This study provides a new simple method of design calculation for the neutron shielding of electron accelerator facilities by dividing the problem into the following five categories: neutron dose distribution in the accelerator room; neutron attenuation through the wall and the door of the accelerator room; neutron and secondary-photon dose distributions in the maze; neutron and secondary-photon attenuation through the door at the end of the maze; and neutron leakage outside the facility (skyshine).
The Nonlinear Dynamic Response of an Elastic-Plastic Thin Plate under Impulsive Loading,
1987-06-11
Among those numerical methods, the finite element method is the most effective one. The method presented in this paper is an "influence function" numerical... computational time is much less than that of the finite element method. Its precision is also higher. [Section heading residue: "II. Basic Assumption and the Influence Function of a Simply Supported Plate"; Fig. 1.] The differential equation of motion of a thin plate can be written as D∇⁴w + ρh ∂²w/∂t² = q(x, y, t). (1)
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
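For illustration, two of the four DSAS rate statistics are easy to state outside the ArcGIS extension; the sketch below computes the endpoint rate and the simple linear-regression rate for one transect with made-up shoreline positions.

```python
# Two of the four DSAS rate-of-change statistics for a single transect,
# with made-up shoreline positions: endpoint rate (EPR) and simple
# linear-regression rate (LRR).
import numpy as np

years = np.array([1936.0, 1964.5, 1978.2, 1997.7, 2005.9])
positions = np.array([0.0, -12.3, -20.1, -31.8, -38.4])  # m from baseline

epr = (positions[-1] - positions[0]) / (years[-1] - years[0])
lrr = np.polyfit(years, positions, 1)[0]
print(f"EPR = {epr:.2f} m/yr, LRR = {lrr:.2f} m/yr")
```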
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utsunomiya, S; Kushima, N; Katsura, K
Purpose: To establish a simple relation of the backscatter dose enhancement around a high-Z dental alloy in head and neck radiation therapy to its average atomic number, based on Monte Carlo calculations. Methods: The PHITS Monte Carlo code was used to calculate the dose enhancement, which is quantified by the backscatter dose factor (BSDF). The accuracy of the beam modeling with PHITS was verified by comparison with basic measured data, namely PDDs and dose profiles. In the simulation, a 1 cm cube of high-Z alloy was embedded in a tough-water phantom irradiated by a 6-MV (nominal) X-ray beam of 10 cm × 10 cm field size from a Novalis TX (Brainlab). Ten different materials (Al, Ti, Cu, Ag, Au-Pd-Ag, I, Ba, W, Au, Pb) were considered. The accuracy of the calculated BSDF was verified by comparison with data measured by Gafchromic EBT3 films placed 0 to 10 mm away from a high-Z alloy (Au-Pd-Ag). We derived an approximate equation for the relation of the BSDF and the range of backscatter to the average atomic number of the high-Z alloy. Results: The calculated BSDF showed excellent agreement with that measured by Gafchromic EBT3 films placed 0 to 10 mm away from the high-Z alloy. We found a simple linear relation of the BSDF and the range of backscatter to the average atomic number of dental alloys. The latter relation is explained by the fact that the energy spectrum of backscattered electrons strongly depends on the average atomic number. Conclusion: We found a simple relation of the backscatter dose enhancement around high-Z alloys to their average atomic number based on Monte Carlo calculations. This work provides a simple and useful method to estimate the backscatter dose enhancement from dental alloys and the corresponding optimal thickness of a dental spacer to prevent mucositis effectively.
Fluctuations of thermodynamic quantities calculated from the fundamental equation of thermodynamics
NASA Astrophysics Data System (ADS)
Yan, Zijun; Chen, Jincan
1992-02-01
On the basis of the probability distribution of the various values of the fluctuation and the fundamental equation of thermodynamics of any given system, a simple and useful method of calculating the fluctuations is presented. By using the method, the fluctuations of thermodynamic quantities can be directly determined from the fundamental equation of thermodynamics. Finally, some examples are given to illustrate the use of the method.
Integral method for the calculation of three-dimensional, laminar and turbulent boundary layers
NASA Technical Reports Server (NTRS)
Stock, H. W.
1978-01-01
The method for turbulent flows is a further development of an existing method; profile families with two parameters and a lag-entrainment method replace the simple entrainment method and one-parameter power profiles. The method for laminar flows is a new development. Moment-of-momentum equations were used for the solution of the problem; the profile families were derived from similar solutions of the boundary layer equations. Laminar and turbulent flows over wings were calculated, and the influence of wing taper on the boundary layer development was shown. The turbulent boundary layer on an ellipsoid of revolution is calculated for 0 deg and 10 deg angles of incidence.
Numerical study of centrifugal compressor stage vaneless diffusers
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Soldatova, K.; Solovieva, O.
2015-08-01
The authors analyzed CFD calculations of flow in vaneless diffusers with relative widths in the range from 0.014 to 0.100, at inlet flow angles in the range from 10° to 45°, with different inlet velocity coefficients, Reynolds numbers and surface roughness. The aim is to simulate the calculated performances by simple algebraic equations. The friction coefficient that represents head losses as friction losses is proposed for the simulation. The friction coefficient and the loss coefficient are directly connected by a simple equation. The advantage is that the friction coefficient changes comparatively little over the range of studied parameters. Simple equations for this coefficient are proposed by the authors. The simulation accuracy is sufficient for practical calculations. To create a complete algebraic model of the vaneless diffuser, the authors plan to extend this method of modeling to diffusers with different relative lengths and to a wider range of Reynolds numbers.
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which has the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of the number required in the original ETA.
NASA Astrophysics Data System (ADS)
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
Calculation and Specification of the Multiple Chirality Displayed by Sugar Pyranoid Ring Structures.
ERIC Educational Resources Information Center
Shallenberger, Robert S.; And Others
1981-01-01
Describes a method, using simple algebraic notation, for calculating the nature of the salient features of a sugar pyranoid ring, the steric disposition of substituents about the reference, and the anomeric carbon atoms contained within the ring. (CS)
Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J
2017-12-01
qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
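A minimal sketch of the core idea (not the full blocking machinery): since the initial template amount scales as E^(−Cq), working with log10(E)·Cq keeps everything in log scale, and a relative expression ratio follows from simple differences of means. The numbers below are illustrative.

```python
# Sketch of the efficiency-weighted Cq idea (illustrative numbers, not
# the paper's full blocking machinery): the initial template amount
# scales as E^(-Cq), so log10(amount) = -log10(E)*Cq and analysis can
# stay in log scale using log10(E)*Cq.
import numpy as np

def ew_cq(efficiency, cq):
    """Efficiency-weighted Cq values; E = 2 for perfect doubling."""
    return np.log10(efficiency) * np.asarray(cq)

# target and reference gene, control vs. treated (well-specific E allowed)
tgt_ctrl = ew_cq(1.95, [21.1, 21.3])
tgt_trt = ew_cq(1.95, [19.0, 19.2])
ref_ctrl = ew_cq(1.90, [17.0, 17.1])
ref_trt = ew_cq(1.90, [17.1, 17.0])

# log10 fold change = target shift minus reference shift
log10_fc = (tgt_ctrl.mean() - tgt_trt.mean()) - (ref_ctrl.mean() - ref_trt.mean())
print(10 ** log10_fc)  # ~4-fold upregulation of the target
```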
Payback as an investment criterion for sawmill improvement projects
G. B. Harpole
1983-01-01
Methods other than those presented here should be used to assess projects for likely return on investment; but payback is simple to calculate and can be used for calculations that indicate the relative attractiveness of alternative improvement projects. This paper illustrates how payback ratios are calculated and how they can be used to rank alternative improvement...
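The calculation itself is one line; a sketch with hypothetical project names and numbers:

```python
# Payback ratio used to rank projects: years to recover the investment
# from annual net savings (project names and numbers are hypothetical;
# payback ignores discounting, hence the caveat above).
def payback_years(investment, annual_net_savings):
    return investment / annual_net_savings

projects = {"edger optimizer": (120_000, 45_000), "new kiln": (400_000, 90_000)}
for name, (cost, savings) in projects.items():
    print(name, round(payback_years(cost, savings), 1), "years")
```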
Neutron skyshine calculations for the PDX tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, F.J.; Nigg, D.W.
1979-01-01
The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce air scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok
A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) in single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and has less noise. This method provides the second derivatives at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of the interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
Energy Expansion for the Period of Anharmonic Oscillators by the Method of Lindstedt-Poincare
ERIC Educational Resources Information Center
Fernandez, Francisco M.
2004-01-01
A simple, straightforward and efficient method is proposed for the calculation of the period of anharmonic oscillators as an energy series. The approach is based on perturbation theory and the method of Lindstedt-Poincare.
A Simple Approach for the Calculation of Energy Levels of Light Atoms
ERIC Educational Resources Information Center
Woodyard, Jack R., Sr.
1972-01-01
Describes a method for direct calculation of energy levels by using elementary techniques. Describes the limitations of the approach but also claims that with a minimum amount of labor a student can get greater understanding of atomic physics problems. (PS)
Infinitely dilute partial molar properties of proteins from computer simulation.
Ploetz, Elizabeth A; Smith, Paul E
2014-11-13
A detailed understanding of temperature and pressure effects on an infinitely dilute protein's conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method's feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages.
Control of Solar Power Plants Connected Grid with Simple Calculation Method on Residential Homes
NASA Astrophysics Data System (ADS)
Kananda, Kiki; Nazir, Refdinal
2017-12-01
Solar energy is one of the renewable energy sources most applicable in all regions. Solar power plants can be built connected to an existing power grid or stand-alone. For assisting residential electricity where a power grid already exists, a small-scale solar power plant is very appropriate. However, the general constraint on solar power plants is their still-low efficiency. Therefore, this study explains how to control the power of solar power plants more optimally, with the reactive power expected to be driven to zero to raise efficiency. This is a continuation of previous research that used the Newton-Raphson control method. In this study we introduce a simple method using ordinary mathematical calculations of the solar-related equations. In this model, ten ND T060M1 PV modules with a capacity of 60 Wp each are used. The calculations performed using MATLAB Simulink give excellent values. The PCC voltage remains stable at approximately 220 V. At a maximum irradiation of 1000 W/m2, the maximum reactive power Q of the solar generating system is 20.48 var and the maximum active power is 417.5 W. At lower irradiation, the reactive power Q is almost zero (0.77 var). This simple mathematical method can provide power control values of excellent quality.
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was derived from a basic postulate of linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and includes simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power calculated neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasize both the usefulness of correcting power in studies of anaerobic power that do not account for inertia and the value of this simple post-hoc method.
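The paper's closed-form correction relies on its linear-deceleration postulate; the sketch below shows instead the underlying instantaneous power balance that the correction approximates, P(t) = F·v(t) + I·ω·dω/dt, with illustrative parameter values.

```python
# Sketch of the instantaneous power balance the post-hoc correction
# approximates: P(t) = F*v(t) + I*omega*domega/dt (friction power plus
# flywheel acceleration power). Parameter values are illustrative.
import numpy as np

def corrected_power(v, dt, friction_force, inertia, radius):
    omega = v / radius                 # flywheel angular velocity (rad/s)
    alpha = np.gradient(omega, dt)     # angular acceleration (rad/s^2)
    return friction_force * v + inertia * omega * alpha

dt = 0.05
t = np.arange(0.0, 6.0, dt)
v = 10.0 * (1.0 - np.exp(-t / 1.5))    # flywheel rim speed during a sprint
p = corrected_power(v, dt, friction_force=50.0, inertia=0.5, radius=0.26)
print(p.max(), (50.0 * v).max())       # corrected vs. inertia-free peak power
```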
A study on the sensitivity of self-powered neutron detectors (SPNDs)
NASA Astrophysics Data System (ADS)
Lee, Wanno; Cho, Gyuseong; Kim, Kwanghyun; Kim, Hee Joon; Choi, Yuseon; Park, Moon Chu; Kim, Soongpyung
2001-08-01
Self-powered neutron detectors (SPNDs) are widely used in reactors to monitor neutron flux. While they have several advantages, such as small size and the relatively simple electronics required for their use, they have some intrinsic problems (a low output current, a slow response time and a rapid change of sensitivity) that make them difficult to use over the long term. Monte Carlo simulation was used to calculate the escape probability as a function of the birth position of the emitted beta particle for the geometry of rhodium-based SPNDs. A simple numerical method calculated the initial generation rate of beta particles and the change of the generation rate due to rhodium burnup. Using the results of the simulation and the simple numerical method, the burnup profile of the rhodium number density and the neutron sensitivity were calculated as functions of burnup time in reactors. This method was verified by comparing the initial sensitivity obtained here with other papers and with data from YGN 3,4 (Young Gwang Nuclear plants 3 and 4). In addition, to improve some properties of the rhodium-based SPNDs currently in use, a modified geometry is proposed. The proposed tube-type geometry is able to increase the initial sensitivity through an increase in the escape probability. The escape probability was calculated while changing the thickness of the insulator, and the solid-type and tube-type geometries were compared for each insulator thickness. The method used here can be applied to the analysis and design of other types of SPNDs.
NASA Technical Reports Server (NTRS)
Morduchow, Morris
1955-01-01
A survey of integral methods in laminar-boundary-layer analysis is given first. A simple method, sufficiently accurate for practical purposes, for calculating the properties (including stability) of the laminar compressible boundary layer in an axial pressure gradient with heat transfer at the wall is then presented. For flow over a flat plate, the method is applicable for an arbitrarily prescribed distribution of temperature along the surface and for any given constant Prandtl number close to unity. For flow in a pressure gradient, the method is based on a Prandtl number of unity and a uniform wall temperature. A simple and accurate method of determining the separation point in a compressible flow with an adverse pressure gradient over a surface at a given uniform wall temperature is developed. The analysis is based on an extension of the Karman-Pohlhausen method to the momentum and thermal-energy equations in conjunction with fourth- and especially higher-degree velocity and stagnation-enthalpy profiles.
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for the equivalent field based on an analysis of the scatter reduction that follows from the inverse square law. Tables published by agencies such as the ICRU (International Commission on Radiation Units and Measurements) are based on experimental data, while the mathematical formulas that yield the equivalent square of an irregular rectangular field, used extensively in computational dose-determination techniques, tend to be complicated and time-consuming; the current study was designed to address this. In this work, by considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square of a rectangular field, and it may also be used for a shielded field or an off-axis point. Moreover, the equivalent square of a rectangular field can be calculated to a good approximation from the concept of scatter decreasing with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are used extensively in treatment planning.
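As background for the kind of rule this paper refines, here is a minimal sketch of the classic area-to-perimeter approximation (the 4A/P rule, side = 2ab/(a+b) for an a x b field). This is the textbook rule, not the inverse-square-law formula derived in the paper, and the function name is illustrative.

    def equivalent_square_side(a_cm, b_cm):
        """Classic 4A/P rule: side of the square field with the same
        area-to-perimeter ratio as an a x b rectangular field."""
        return 2.0 * a_cm * b_cm / (a_cm + b_cm)

    # Example: a 10 cm x 20 cm field is roughly equivalent to a 13.3 cm square.
    print(equivalent_square_side(10.0, 20.0))  # ~13.33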
Specific cooling capacity of liquid nitrogen
NASA Technical Reports Server (NTRS)
Kilgore, R. A.; Adcock, J. B.
1977-01-01
The assumed cooling process and the method used to calculate the specific cooling capacity of liquid nitrogen are described. A simple equation fitted to the calculated specific-cooling-capacity data is given, together with calculated values, in graphical form, of the specific cooling capacity of nitrogen for stagnation temperatures from saturation to 350 K and stagnation pressures from 1 to 10 atmospheres.
Program helps quickly calculate deviated well path
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, M.P.
1993-11-22
A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled; very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM-compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
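The program itself is not reproduced in the abstract. As an illustration of the same kind of geometry, here is a minimal sketch for the simplest case, a straight slant hole from surface to target; the published program handles a build section, which this sketch deliberately omits.

    import math

    def slant_well(tvd_ft, displacement_ft):
        """Angle (degrees from vertical) and measured depth for a straight
        slant hole reaching the given TVD and horizontal displacement."""
        angle = math.degrees(math.atan2(displacement_ft, tvd_ft))
        md = math.hypot(tvd_ft, displacement_ft)
        return angle, md

    # Example: target at 8000 ft TVD displaced 3000 ft horizontally.
    print(slant_well(8000.0, 3000.0))  # (~20.6 deg, ~8544 ft MD)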
Yuan, Jin-Peng; Ji, Zhong-Hua; Zhao, Yan-Ting; Chang, Xue-Fang; Xiao, Lian-Tuan; Jia, Suo-Tang
2013-09-01
We present a simple, reliable, and nondestructive method for the measurement of vacuum pressure in a magneto-optical trap. The vacuum pressure is verified to be proportional to the collision rate constant between cold atoms and the background gas with a coefficient k, which can be calculated by means of the simple ideal gas law. The rate constant for loss due to collisions with all background gases can be derived from the total collision loss rate by a series of loading curves of cold atoms under different trapping laser intensities. The presented method is also applicable for other cold atomic systems and meets the miniaturization requirement of commercial applications.
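As an illustration of the loading-curve step, here is a minimal sketch assuming the standard single-exponential loading model N(t) = Nss(1 - exp(-Gamma t)); the data arrays are placeholders, and the extrapolation over intensities is only indicated.

    import numpy as np
    from scipy.optimize import curve_fit

    def loading(t, n_ss, gamma):
        """Standard MOT loading curve: N(t) = N_ss * (1 - exp(-gamma * t))."""
        return n_ss * (1.0 - np.exp(-gamma * t))

    # t_s: time axis (s); n_atoms: measured atom number (placeholder data).
    t_s = np.linspace(0.0, 10.0, 200)
    n_atoms = loading(t_s, 1.0e7, 0.8) + 1.0e4 * np.random.randn(t_s.size)

    (n_ss_fit, gamma_fit), _ = curve_fit(loading, t_s, n_atoms, p0=(1e7, 1.0))
    print(gamma_fit)  # total loss rate at one trapping intensity

Repeating the fit at several trapping intensities and extrapolating the loss rate yields the background-collision rate from which the pressure follows via the coefficient k.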
NASA Astrophysics Data System (ADS)
Karlitasari, L.; Suhartini, D.; Benny
2017-01-01
The process of determining employee remuneration at PT Sepatu Mas Idaman currently uses a Microsoft Excel spreadsheet in which criterion values must be calculated for every employee. This can introduce doubt during the assessment process and therefore makes the process take much longer. Employee remuneration is determined by an assessment team based on predetermined criteria, namely the ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method supports decision making for such cases: the alternative whose calculation yields the greatest value is chosen as the best. In addition to SAW, the CPI method, another decision-making calculation based on a performance index, was also applied; the SAW method was 89-93% faster than the CPI method. It is therefore expected that this application can serve as evaluation material for the training and development needed to make employee performance more optimal.
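A minimal sketch of the SAW calculation itself, with criteria, weights and scores as illustrative placeholders rather than the company's actual data: normalize each benefit criterion by its column maximum, then rank alternatives by weighted sum.

    import numpy as np

    # Rows: employees; columns: criterion scores (placeholder values).
    scores = np.array([[80., 70., 90.],
                       [75., 85., 70.],
                       [90., 60., 80.]])
    weights = np.array([0.5, 0.3, 0.2])  # criterion weights, sum to 1

    # SAW for benefit criteria: normalize by column maxima, weight, and sum.
    normalized = scores / scores.max(axis=0)
    saw_value = normalized @ weights

    best = int(np.argmax(saw_value))  # the greatest value wins
    print(saw_value, best)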
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.
Measuring Plant Water Status: A Simple Method for Investigative Laboratories.
ERIC Educational Resources Information Center
Mansfield, Donald H.; Anderson, Jay E.
1980-01-01
Describes a method suitable for quantitative studies of plant water status conducted by high school or college students and the calculation of the relative water content (RWC) of a plant. Materials, methods, procedures, and results are discussed, with sample data figures provided. (CS)
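The standard relative water content formula such an exercise relies on is simple enough to state in code; a minimal sketch using the usual RWC definition from fresh, turgid and dry masses (variable names are illustrative):

    def relative_water_content(fresh_g, turgid_g, dry_g):
        """RWC (%) = (fresh - dry) / (turgid - dry) * 100."""
        return 100.0 * (fresh_g - dry_g) / (turgid_g - dry_g)

    # Example: a leaf weighing 1.2 g fresh, 1.5 g fully turgid, 0.3 g dry.
    print(relative_water_content(1.2, 1.5, 0.3))  # 75.0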
On the tsunami wave-submerged breakwater interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filianoti, P.; Piscopo, R.
The tsunami wave loads on a submerged rigid breakwater are inertial. This is the result arising from the simple calculation method proposed here, and it is confirmed by comparison with results obtained by other researchers. The method is based on an estimate of the speed drop of the tsunami wave passing over the breakwater. The calculation is rigorous for a sinusoidal wave interacting with a rigid submerged obstacle in the framework of linear wave theory. This new approach gives a useful and simple tool for estimating tsunami loads on submerged breakwaters. An unexpected novelty came out of a worked example: for the same wave height, storm waves are more dangerous than tsunami waves for the safety against sliding of submerged breakwaters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneru, F; Gracia, M; Gallardo, N
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the prior simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare), with attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). The convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software written in Python. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: The liver mean dose is consistent with Partition-method calculations (accepted as a good standard). Tumor dose was not evaluated because of its high dependence on contouring: small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes exporting and importing images and other DICOM files, creating and calculating a dummy external radiotherapy plan, the convolution calculation, and evaluating the dose distribution with Dicompyler; the total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable procedure for the patient. The whole process is short enough to be carried out on the day of the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than the other methods of dose calculation usually applied in the clinic.
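The core VSV step is a 3-D convolution of the activity map with a dose-point kernel; a minimal sketch is below. The kernel array and voxel activity map are placeholders, and the abstract's in-house tool is not public, so this only illustrates the operation.

    import numpy as np
    from scipy.signal import fftconvolve

    # activity_bq: 3-D voxel activity map from SPECT (placeholder random data).
    activity_bq = np.random.rand(64, 64, 32)

    # s_kernel: voxel S-value kernel, dose per voxel around a unit-activity
    # source voxel (placeholder: a crude normalized Gaussian, not a real kernel).
    z, y, x = np.mgrid[-5:6, -5:6, -5:6]
    s_kernel = np.exp(-(x**2 + y**2 + z**2) / 4.0)
    s_kernel /= s_kernel.sum()

    # Voxel-S-value dosimetry: dose = activity convolved with the S kernel.
    dose = fftconvolve(activity_bq, s_kernel, mode="same")
    print(dose.shape)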
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross sections for beryllium and magnesium; the results indicate that the values previously used at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
Bhalla, Kavi; Harrison, James E
2016-04-01
Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex; however, as commonly implemented, they include complex modelling and estimation. The aim was to provide a simple and open-source software tool that allows estimation of incidence DALYs due to injury, given data on the incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel; all calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALY estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
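The underlying arithmetic is the standard discount-free DALY identity, DALY = YLL + YLD, with YLL = deaths x residual life expectancy and YLD = incident cases x disability weight x mean duration. A minimal sketch under those standard definitions (the numbers are placeholders, not the tool's parameter set):

    def dalys(deaths, life_expectancy_yr, cases, disability_weight, duration_yr):
        """Incidence DALYs without discounting or age weighting:
        YLL = deaths * residual life expectancy;
        YLD = non-fatal cases * disability weight * mean duration."""
        yll = deaths * life_expectancy_yr
        yld = cases * disability_weight * duration_yr
        return yll + yld

    # Example: 10 deaths (40 y residual LE), 500 cases, DW 0.2, 0.5 y duration.
    print(dalys(10, 40.0, 500, 0.2, 0.5))  # 400 + 50 = 450 DALYs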
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
Symmetry and equivalence restrictions in electronic structure calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Taylor, Peter R.
1988-01-01
A simple method for obtaining MCSCF orbitals and CI natural orbitals adapted to degenerate point groups, with full symmetry and equivalence restrictions, is described. Among the several advantages accruing from this method are the ability to perform atomic SCF calculations on states for which the SCF energy expression cannot be written in terms of Coulomb and exchange integrals over real orbitals, and the generation of symmetry-adapted atomic natural orbitals for use in a recently proposed method for basis set contraction.
NASA Astrophysics Data System (ADS)
Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd
2017-11-01
Nowadays, the study of volatility, especially in stock markets, has gained much attention from people engaged in the financial and economic sectors. Applications of the volatility concept in financial economics include the valuation of option pricing, the estimation of financial derivatives, hedging investment risk, and so on. There are various ways to measure volatility; in this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure, using both methods, the volatility of three different sectors of business in Malaysia, called primary, secondary and tertiary. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, are calculated. The results show that different patterns of closing stock prices and returns give different volatility values when calculated with the simple method and the EWMA method.
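A minimal sketch of the two estimators on a return series. The decay factor lambda = 0.94 is the common RiskMetrics choice, an assumption here; the returns are placeholder data, and annualization uses the usual sqrt(252) trading-day convention.

    import numpy as np

    returns = np.random.normal(0.0, 0.01, 250)  # placeholder daily returns

    # Simple method: sample standard deviation of daily returns.
    sigma_simple = returns.std(ddof=1)

    # EWMA: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2.
    lam = 0.94
    var_ewma = returns[0] ** 2
    for r in returns[1:]:
        var_ewma = lam * var_ewma + (1.0 - lam) * r ** 2
    sigma_ewma = np.sqrt(var_ewma)

    # Annualized figures, assuming 252 trading days.
    print(sigma_simple * np.sqrt(252), sigma_ewma * np.sqrt(252))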
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
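A minimal sketch of the HORRAT calculation described in this truncated record. Pairing the found among-laboratory RSD with the Horwitz-predicted value, PRSD_R = 2 * C**(-0.1505) with C a dimensionless mass fraction, is the standard convention and is assumed here rather than stated in the abstract.

    def horrat(found_rsd_percent, mass_fraction):
        """HORRAT = experimentally found among-lab RSD (%) divided by the
        Horwitz-predicted RSD (%), PRSD_R = 2 * C**(-0.1505)."""
        predicted_rsd = 2.0 * mass_fraction ** (-0.1505)
        return found_rsd_percent / predicted_rsd

    # Example: analyte at 1 mg/kg (C = 1e-6), found among-lab RSD of 18%.
    print(horrat(18.0, 1e-6))  # ~1.1; values near 1 indicate typical performance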
Simplified refracting technique in keratoconus.
Gasset, A R
1975-01-01
A simple but effective technique for refracting keratoconus patients is presented, and the theoretical objections to such methods are discussed. In addition, a formula to calculate lenticular astigmatism is presented.
Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.
Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong
2016-11-11
Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for people using DLCWFCs for atmospheric turbulence correction on large-aperture telescopes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, C.; Brizuela, H.; Heluani, S. P.
2014-05-21
The backscattering coefficient is a magnitude whose measurement is fundamental for the characterization of materials with techniques that make use of particle beams, particularly when performing microanalysis. In this work, we report the results of an analytic method to calculate the backscattering and absorption coefficients of electrons under conditions similar to those of electron probe microanalysis. Starting from a five-level-state ladder model in 3D, we deduced a set of coupled integro-differential equations for the coefficients with a method known as invariant embedding. By means of a procedure proposed by the authors, called the method of convergence, two types of approximate solutions for the set of equations, namely complete and simple solutions, can be obtained. Although the simple solutions were initially proposed as auxiliary forms for solving higher-rank equations, they turned out to be useful for estimating the aforementioned coefficients as well. In previous reports, we presented results obtained with the complete solutions. In this paper, we present results obtained with the simple solutions of the coefficients, which exhibit a good degree of fit to the experimental data. Both the model and the calculation method presented here can be generalized to other techniques that make use of different sorts of particle beams.
A unitary convolution approximation for the impact-parameter dependent electronic energy loss
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
1999-06-01
In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.
NASA Astrophysics Data System (ADS)
Sánchez, H. R.; Pis Diez, R.
2016-04-01
Based on the Aλ diagnostic for multireference effects recently proposed [U.R. Fogueri, S. Kozuch, A. Karton, J.M. Martin, Theor. Chem. Acc. 132 (2013) 1], a simple method for improving total atomization energies and reaction energies calculated at the CCSD level of theory is proposed. The method requires a CCSD calculation and two additional density functional theory calculations for the molecule. Two sets containing 139 and 51 molecules are used as training and validation sets, respectively, for total atomization energies. An appreciable decrease in the mean absolute error from 7-10 kcal mol-1 for CCSD to about 2 kcal mol-1 for the present method is observed. The present method provides atomization energies and reaction energies that compare favorably with relatively recent scaled CCSD methods.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
Improving the treatment of coarse-grain electrostatics: CVCEL.
Ceres, N; Lavery, R
2015-12-28
We propose an analytic approach for calculating the electrostatic energy of proteins or protein complexes in aqueous solution. This method, termed CVCEL (Circular Variance Continuum ELectrostatics), is fitted to Poisson calculations and is able to reproduce the corresponding energies for different choices of solute dielectric constant. CVCEL thus treats both solute charge interactions and charge self-energies, and it can also deal with salt solutions. Electrostatic damping notably depends on the degree of solvent exposure of the charges, quantified here in terms of circular variance, a measure that reflects the vectorial distribution of the neighbors around a given center. CVCEL energies can be calculated rapidly and have simple analytical derivatives. This approach avoids the need for calculating effective atomic volumes or Born radii. After describing how the method was developed, we present test results for coarse-grain proteins of different shapes and sizes, using different internal dielectric constants and different salt concentrations and also compare the results with those from simple distance-dependent models. We also show that the CVCEL approach can be used successfully to calculate the changes in electrostatic energy associated with changes in protein conformation or with protein-protein binding.
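The circular variance the method is named for has a standard vectorial definition; a minimal sketch assuming the usual form, CV = 1 - |sum of unit vectors| / N, applied to the directions from a charge site to its neighbors (the coordinates are placeholders):

    import numpy as np

    def circular_variance(center, neighbors):
        """CV = 1 - |sum of unit vectors from center to neighbors| / N.
        CV near 0: neighbors cluster on one side (exposed center);
        CV near 1: neighbors surround the center (buried center)."""
        vecs = neighbors - center
        units = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        return 1.0 - np.linalg.norm(units.sum(axis=0)) / len(units)

    # Placeholder geometry: a site with four symmetric in-plane neighbors.
    center = np.zeros(3)
    neighbors = np.array([[1., 0., 0.], [0., 1., 0.],
                          [-1., 0., 0.], [0., -1., 0.]])
    print(circular_variance(center, neighbors))  # 1.0: fully surrounded in-plane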
Khan, Ajmal; Ballato, Arthur
2002-07-01
Piezoelectric coupling factors for langatate (La3Ga5.5Ta0.5O14) single-crystals driven by lateral-field-excitation have been calculated using the extended Christoffel-Bechmann method. Calculations were made using published materials constants. The results are presented in terms of the lateral piezoelectric coupling factor as functions of in-plane (azimuthal) rotation angle for the three simple thickness vibration modes of some non-rotated, singly-rotated, and doubly-rotated orientations. It is shown that lateral-field-excitation offers the potential to eliminate unwanted vibration modes and to achieve considerably greater piezoelectric coupling versus thickness-field-excitation for the rotated cuts considered and for a doubly-rotated cut that is of potential technological interest.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Summary: Determining the thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time, while the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and generally feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for estimating the thawing times of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or the development of new ones, to enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
NASA Astrophysics Data System (ADS)
Czerny, J.; Schulz, K. G.; Ludwig, A.; Riebesell, U.
2013-03-01
Mesocosms as large experimental units provide the opportunity to perform elemental mass balance calculations, e.g. to derive net biological turnover rates. However, the system is in most cases not closed at the water surface, and gases exchange with the atmosphere. Previous attempts to budget carbon pools in mesocosms relied on educated guesses concerning the exchange of CO2 with the atmosphere. Here, we present a simple method for the precise determination of air-sea gas exchange in mesocosms using N2O as a deliberate tracer. Besides the application to carbon budgeting, the transfer velocities can be used to calculate exchange rates of any gas of known concentration, e.g. to calculate aquatic production rates of climate-relevant trace gases. Using an Arctic KOSMOS (Kiel Off Shore Mesocosms for future Ocean Simulation) experiment as an exemplary dataset, it is shown that the presented method improves the accuracy of carbon budget estimates substantially. The methodology of manipulation, measurement, data processing and conversion to CO2 fluxes is explained. A theoretical discussion of the prerequisites for precise gas exchange measurements provides a guideline for the applicability of the method under various experimental conditions.
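A minimal sketch of the tracer bookkeeping, assuming the usual bulk flux model F = k (C_water - C_eq), under which an excess tracer concentration decays exponentially with rate k A / V; the mean depth V/A and the concentrations are placeholders, not values from the KOSMOS experiment.

    import numpy as np

    def transfer_velocity(c_excess_t0, c_excess_t1, dt_days, depth_m):
        """Transfer velocity k (m/day) from the decay of the N2O excess in
        a mesocosm of mean depth V/A = depth_m, using
        C_excess(t) = C_excess(0) * exp(-k * t / depth)."""
        return depth_m * np.log(c_excess_t0 / c_excess_t1) / dt_days

    # Placeholder: excess N2O drops from 20 to 14 nmol/L in 5 days, 15 m depth.
    k = transfer_velocity(20.0, 14.0, 5.0, 15.0)
    print(k)  # ~1.07 m/day; multiply by any gas's (C_water - C_eq) for its flux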
Pfeiffer, Valentin; Barbeau, Benoit
2014-02-01
Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted with five tracer tests results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
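A minimal sketch of the tanks-in-series idea that underlies the N-CSTR model, assuming first-order Chick-Watson inactivation at constant disinfectant concentration; the Pseg method's partial-segregation correction is not reproduced here.

    import numpy as np

    def log_inactivation_ncstr(k_l_per_mg_min, c_mg_per_l, tau_min, n_tanks):
        """Survival through N equal CSTRs in series with first-order
        kinetics: S = (1 + k*C*tau/N)**(-N); returns log10 inactivation."""
        survival = (1.0 + k_l_per_mg_min * c_mg_per_l * tau_min / n_tanks) ** (-n_tanks)
        return -np.log10(survival)

    # Placeholder: k = 0.5 L/(mg.min), C = 1 mg/L, tau = 10 min.
    for n in (1, 3, 10):
        print(n, log_inactivation_ncstr(0.5, 1.0, 10.0, n))
    # More tanks: closer to plug flow, hence more predicted inactivation.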
Application of adjusted data in calculating fission-product decay energies and spectra
NASA Astrophysics Data System (ADS)
George, D. C.; Labauve, R. J.; England, T. R.
1982-06-01
The code ADENA, which approximately calculates fission-product beta and gamma decay energies and spectra in 19 or fewer energy groups for a mixture of U235 and Pu239 fuels, is described. The calculation uses aggregate, adjusted data derived from a combination of several experiments and from summation results based on the ENDF/B-V fission product file. The method used to obtain these adjusted data and the method used by ADENA to calculate fission-product decay energy with an absorption correction are described, and an estimate of the uncertainty of the ADENA results is given. Comparisons of this approximate method are made to experimental measurements, to the ANSI/ANS 5.1-1979 standard, and to other calculational methods. A listing of the complete computer code (ADENA) is contained in an appendix. Included in the listing are data statements containing the adjusted data in the form of parameters to be used in simple analytic functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yongpeng; Northwest Institute of Nuclear Technology, P.O. Box 69-13, Xi'an 710024; Liu Guozhi
In this paper, the Child-Langmuir law and the Langmuir-Blodgett law are generalized to the relativistic regime by a simple method. The two classical laws, valid in the nonrelativistic regime, are modified into simple approximate expressions applicable to calculating the space-charge-limited currents of one-dimensional steady-state planar diodes and coaxial diodes in the relativistic regime. The simple approximate expressions, extending the Child-Langmuir and Langmuir-Blodgett laws to the full range of voltage, have relative errors of less than 1% for one-dimensional planar diodes and less than 5% for coaxial diodes.
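For reference, a minimal sketch of the classical nonrelativistic Child-Langmuir law that the paper generalizes, J = (4 eps0 / 9) sqrt(2e/m) V^(3/2) / d^2 for a planar vacuum diode; the relativistic corrections derived in the paper are not reproduced here.

    import math

    EPS0 = 8.8541878128e-12   # F/m
    E_CHARGE = 1.602176634e-19  # C
    M_E = 9.1093837015e-31    # kg

    def child_langmuir_j(v_volts, gap_m):
        """Nonrelativistic space-charge-limited current density (A/m^2)
        for a planar diode: J = (4*eps0/9) * sqrt(2e/m) * V^1.5 / d^2."""
        return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
               * v_volts ** 1.5 / gap_m ** 2

    # Example: 100 kV across a 1 cm gap.
    print(child_langmuir_j(1.0e5, 0.01))  # ~7.4e5 A/m^2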
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.
A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement
NASA Astrophysics Data System (ADS)
Sun, Hui; Liu, Ji-Gou
2018-07-01
This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor, used together with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C-estimation method, a final closed-form equation is obtained, so that estimating C involves only a few simple calculations. The method successfully retrieves sinusoidal and random displacements from simulated self-mixing signals in both the weak and moderate feedback regimes. To deal with the errors resulting from noise and from the estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed after the general phase-unwrapping method. The simulation and experiment results show that the displacement retrieved using the C obtained with the proposed method is comparable to that from the joint estimation of C and α. Moreover, the Kalman filter significantly decreases measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
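A minimal sketch of the smoothing stage: a generic scalar constant-position Kalman filter applied to a noisy displacement trace. The process and measurement variances are illustrative, and this stands in for, rather than reproduces, the paper's filter.

    import numpy as np

    def kalman_smooth(z, q=1e-3, r=2.5e-3):
        """Scalar Kalman filter with a random-walk state model:
        x_k = x_{k-1} + w (variance q), z_k = x_k + v (variance r)."""
        x, p = z[0], 1.0
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            p = p + q                    # predict
            kgain = p / (p + r)          # Kalman gain
            x = x + kgain * (zk - x)     # update with measurement
            p = (1.0 - kgain) * p
            out[k] = x
        return out

    t = np.linspace(0.0, 1.0, 500)
    true_disp = np.sin(2 * np.pi * 5 * t)          # normalized displacement
    noisy = true_disp + 0.05 * np.random.randn(t.size)
    smoothed = kalman_smooth(noisy, q=1e-3, r=0.05**2)
    print(np.abs(smoothed - true_disp).mean())     # compare with ~0.04 raw error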
Symmetry dependence of holograms for optical trapping
NASA Astrophysics Data System (ADS)
Curtis, Jennifer E.; Schmitz, Christian H. J.; Spatz, Joachim P.
2005-08-01
No iterative algorithm is necessary to calculate holograms for most holographic optical trapping patterns. Instead, holograms may be produced by a simple extension of the prisms-and-lenses method. This formulaic approach yields the same diffraction efficiency as iterative algorithms for any asymmetric or symmetric but nonperiodic pattern of points while requiring less calculation time. A slight spatial disordering of periodic patterns significantly reduces intensity variations between the different traps without extra calculation costs. Eliminating laborious hologram calculations should greatly facilitate interactive holographic trapping.
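A minimal sketch of the prisms-and-lenses superposition for a phase-only hologram: each trap contributes a blazed-grating term plus a Fresnel-lens term, and the hologram is the argument of the complex sum. The grid size, wavelength, focal length, pixel pitch and trap positions are placeholders.

    import numpy as np

    def prisms_and_lenses(traps, n=512, wavelength=1.064e-6, focal=2.0e-3,
                          pitch=15e-6):
        """Phase hologram (radians) for point traps at (x, y, z) offsets
        from the focus: superpose grating + lens phases per trap and keep
        the argument of the complex sum."""
        i, j = np.mgrid[0:n, 0:n]
        x = (i - n / 2) * pitch
        y = (j - n / 2) * pitch
        field = np.zeros((n, n), dtype=complex)
        for (tx, ty, tz) in traps:
            grating = 2 * np.pi * (x * tx + y * ty) / (wavelength * focal)
            lens = np.pi * tz * (x ** 2 + y ** 2) / (wavelength * focal ** 2)
            field += np.exp(1j * (grating + lens))
        return np.angle(field)

    # Three placeholder traps (lateral offsets in meters, plus one axial).
    holo = prisms_and_lenses([(5e-6, 0, 0), (-5e-6, 5e-6, 0), (0, -5e-6, 2e-6)])
    print(holo.shape)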
Kim, Huiyong; Hwang, Sung June; Lee, Kwang Soon
2015-02-03
Among various CO2 capture processes, the aqueous amine-based absorption process is considered the most promising for near-term deployment. However, the performance evaluation of newly developed solvents still requires complex and time-consuming procedures, such as pilot plant tests or the development of a rigorous simulator. The absence of an accurate and simple calculation method for the energy performance at an early stage of process development has lengthened, and increased the expense of, the development of economically feasible CO2 capture processes. In this paper, a novel but simple method to reliably calculate the regeneration energy of a standard amine-based carbon capture process is proposed. Careful examination of stripper behavior and exploitation of the energy balance equations around the stripper allow the regeneration energy to be calculated using only vapor-liquid equilibrium and caloric data. The reliability of the proposed method was confirmed by comparison with rigorous simulations for two well-known solvents, monoethanolamine (MEA) and piperazine (PZ). The proposed method can predict the regeneration energy at various operating conditions with greater simplicity, greater speed, and higher accuracy than methods proposed in previous studies. This enables faster and more precise screening of various solvents and faster optimization of process variables, and can eventually accelerate the development of economically deployable CO2 capture processes.
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the realization of the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. The algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple configurations of air intakes and scramjet models.
Two-band analysis of hole mobility and Hall factor for heavily carbon-doped p-type GaAs
NASA Astrophysics Data System (ADS)
Kim, B. W.; Majerfeld, A.
1996-02-01
We solve a pair of Boltzmann transport equations based on an interacting two-isotropic-band model in a general way, first obtaining the transport parameters corresponding to the relaxation time. We present a simple method to calculate effective relaxation times, separately for each band, which compensates for the inherent deficiencies of using the relaxation-time concept for polar optical-phonon scattering. Formulas for calculating momentum relaxation times in the two-band model are presented for all the major scattering mechanisms of p-type GaAs, for simple, practical mobility calculations. In the newly proposed theoretical framework, first-principles calculations of the Hall mobility and Hall factor of p-type GaAs at room temperature are carried out with no adjustable parameters, in order to obtain direct comparisons between the theory and recently available experimental results. In the calculations, the light-hole-band nonparabolicity is taken into account on average through an energy-dependent effective mass obtained from the k·p method, and the valence-band anisotropy is taken partly into account through Wiley's overlap function. The calculated Hall mobilities show good agreement with our experimental data for carbon-doped p-GaAs samples in the range of degenerate hole densities. The calculated Hall factors show rH = 1.25-1.75 over hole densities of 2×10^17 - 1×10^20 cm^-3.
Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane
2016-09-20
The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is comparable with that of lunar-Langley approach, theoretically. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
A simple method of calculating Stirling engines for engine design optimization
NASA Technical Reports Server (NTRS)
Martini, W. R.
1978-01-01
A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen type regenerator. Generally the equations presented describe power generation and consumption and heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design. The method itemizes the power and heat losses for intelligent engine optimization. The results of engine analysis of the GPU-3 Stirling engine are compared with more complicated engine analysis and with engine measurements.
Transmission eigenchannels from nonequilibrium Green's functions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Brandbyge, Mads
2007-09-01
The concept of transmission eigenchannels is described in a tight-binding nonequilibrium Green’s function (NEGF) framework. A simple procedure for calculating the eigenchannels is derived using only the properties of the device subspace and quantities normally available in a NEGF calculation. The method is exemplified by visualization in real space of the eigenchannels for three different molecular and atomic wires.
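A minimal sketch of the standard Landauer/NEGF transmission algebra such a calculation rests on: the eigenvalues of t†t, with t = Γ_L^(1/2) G Γ_R^(1/2), give the eigenchannel transmissions. Random placeholder matrices stand in for the device Hamiltonian and broadening matrices; this is not the paper's tight-binding setup.

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(0)
    n = 4  # device orbitals (placeholder size)

    def random_gamma():
        """Hermitian positive semidefinite placeholder broadening matrix."""
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return a @ a.conj().T

    gamma_l, gamma_r = random_gamma(), random_gamma()
    h = rng.normal(size=(n, n)); h = (h + h.T) / 2       # toy Hamiltonian
    sigma = -0.5j * (gamma_l + gamma_r)                  # Gamma = i(Sigma - Sigma†)
    e = 0.0                                              # energy
    g = np.linalg.inv(e * np.eye(n) - h - sigma)         # retarded Green's function

    t_mat = sqrtm(gamma_l) @ g @ sqrtm(gamma_r)
    tau = np.linalg.eigvalsh(t_mat.conj().T @ t_mat)     # eigenchannel transmissions
    print(sorted(tau.real, reverse=True), sum(tau).real) # the sum is total T(E)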
ERIC Educational Resources Information Center
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and of important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach, traditionally covered as part of any physical chemistry course, the required…
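A minimal sketch of the usual application: the integrated Gibbs-Helmholtz (van 't Hoff) relation, ln(K2/K1) = -(ΔH°/R)(1/T2 - 1/T1), assuming ΔH° is constant over the interval. The numbers are placeholders.

    import math

    R = 8.314  # J/(mol K)

    def k_at_t2(k1, t1_k, t2_k, delta_h_j_per_mol):
        """Shift an equilibrium constant from T1 to T2 assuming a
        temperature-independent standard reaction enthalpy."""
        ln_ratio = -(delta_h_j_per_mol / R) * (1.0 / t2_k - 1.0 / t1_k)
        return k1 * math.exp(ln_ratio)

    # Placeholder: K = 1.0e-5 at 298 K, delta H = +50 kJ/mol (endothermic).
    print(k_at_t2(1.0e-5, 298.15, 323.15, 5.0e4))  # larger K at higher T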
The induced electric field due to a current transient
NASA Astrophysics Data System (ADS)
Beck, Y.; Braunstein, A.; Frankental, S.
2007-05-01
Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic treatment, is presented. The approach is based on a known current wave-pair model representing the lightning current wave, describing it either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The computed electric fields are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using the special theory of relativity and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law), compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a simpler modified method using sub-models is presented. The sub-models are filaments of either static charges or charges moving at constant velocity only; combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the problem discussed.
Wronskian Method for Bound States
ERIC Educational Resources Information Center
Fernandez, Francisco M.
2011-01-01
We propose a simple and straightforward method based on Wronskians for the calculation of bound-state energies and wavefunctions of one-dimensional quantum-mechanical problems. We explicitly discuss the asymptotic behaviour of the wavefunction and show that the allowed energies make the divergent part vanish. As illustrative examples we consider…
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
The method for estimating the critical molar volume of materials is faster and simpler than previous procedures. The formula sums no more than 18 different contributions from components of the chemical structure of the material, and is as accurate (within 3 percent) as older, more complicated models. The method should expedite many thermodynamic design calculations.
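A minimal sketch of the additive group-contribution idea described; the two group values in the table are illustrative placeholders, not the published increments.

    # Hypothetical group increments (cm^3/mol); the real table lists ~18 groups.
    GROUP_VC = {"CH3": 55.0, "CH2": 40.0}  # placeholder values

    def critical_volume(groups):
        """Estimate Vc by summing the contribution of each structural
        group, counted as many times as it occurs in the molecule."""
        return sum(GROUP_VC[name] * count for name, count in groups.items())

    # Example: n-butane = 2 x CH3 + 2 x CH2 (placeholder arithmetic).
    print(critical_volume({"CH3": 2, "CH2": 2}))  # 190.0 with these values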
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Huafeng; Colabello, Diane M.; Sklute, Elizabeth C.
The absolute absorption coefficient, α(E), is a critical design parameter for devices using semiconductors for light harvesting associated with renewable energy production, both for classic technologies such as photovoltaics and for emerging technologies such as direct solar fuel production. While α(E) is well known for many classic simple semiconductors used in photovoltaic applications, its absolute values are typically unknown for the complex semiconductors being explored for solar fuel production, owing to the absence of the single crystals or crystalline epitaxial films needed for conventional methods of determining α(E). In this work, a simple self-referenced method for estimating both the refractive indices, n(E), and the absolute absorption coefficients, α(E), of loose powder samples from diffuse reflectance data is demonstrated. In this method, the sample refractive index is deduced by refining n to maximize the agreement between the relative absorption spectrum calculated from bidirectional reflectance data (through a Hapke transform, which depends on n) and that from integrating-sphere diffuse reflectance data (through a Kubelka-Munk transform, which does not depend on n). This new method can be used to quickly screen the suitability of emerging semiconductor systems for light-harvesting applications. The effectiveness of the approach is tested using the simple classic semiconductors Ge and Fe2O3 as well as the complex semiconductors La2MoO5 and La4Mo2O11. The method is shown to work well for powders with a narrow size distribution (exemplified by Fe2O3) and to be ineffective for semiconductors with a broad size distribution (exemplified by Ge). As such, it provides a means for rapidly estimating the absolute optical properties of complex solids that are only available as loose powders.
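A minimal sketch of the Kubelka-Munk half of that comparison, the transform that does not depend on n; the Hapke transform and the refinement loop are not reproduced, and the reflectance spectrum is a placeholder.

    import numpy as np

    def kubelka_munk(reflectance):
        """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2 R),
        proportional to absorption/scattering (K/S) for thick powders."""
        r = np.asarray(reflectance, dtype=float)
        return (1.0 - r) ** 2 / (2.0 * r)

    # Placeholder diffuse reflectance spectrum (fractional, 0 < R <= 1).
    r_spectrum = np.array([0.90, 0.75, 0.40, 0.15])
    print(kubelka_munk(r_spectrum))  # relative absorption, rises as R falls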
An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.
Ilias, Miroslav; Saue, Trond
2007-02-14
The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.
NASA Astrophysics Data System (ADS)
Jiang, Chao; Qiao, Mingzhong; Zhu, Peng
2017-12-01
A permanent magnet synchronous motor with a radial magnetic circuit and built-in permanent magnets is designed for an electric vehicle. Finite element numerical calculation and experimental measurement are adopted to obtain the direct-axis and quadrature-axis inductance parameters of the motor, which are vitally important for motor control. The calculation method is simple, the measuring principle is clear, and the results of the numerical calculation and the experimental measurement confirm each other. A quick and effective method is thus provided to obtain the direct-axis and quadrature-axis inductance parameters of the motor and then improve the motor design or adjust the control parameters of the motor controller.
Sarma, Manabendra; Adhikari, S; Mishra, Manoj K
2007-01-28
Vibrational excitation (ν_f ← ν_i) cross sections σ_{ν_f←ν_i}(E) in resonant e-N2 and e-H2 scattering are calculated from transition matrix elements T_{ν_f,ν_i}(E) obtained using a Fourier transform of the cross-correlation function
NASA Technical Reports Server (NTRS)
Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.
1995-01-01
The quantum yield of photosynthesis (mol C/mol photons) was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways of calculating the photons absorbed (AP) by phytoplankton for the purpose of calculating phi are presented here. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived through remote sensing measurements. We show that the results of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the values reported in other studies. In deep waters, however, the simple nonspectral model may give quantum yield values much higher than theoretically possible.
Redox-iodometry: a new potentiometric method.
Gottardi, Waldemar; Pfleiderer, Jörg
2005-07-01
A new iodometric method for quantifying aqueous solutions of iodide-oxidizing and iodine-reducing substances, as well as plain iodine/iodide solutions, is presented. It is based on the redox potential of said solutions after reaction with iodide (or iodine) of known initial concentration. Calibration of the system and calculation of unknown concentrations were performed on the basis of developed algorithms and simple GWBASIC programs. The method is distinguished by a short analysis time (2-3 min) and simple instrumentation consisting of a pH/mV meter and platinum and reference electrodes. In general the feasible concentration range encompasses 0.1 to 10^-6 mol/L, although it extends down to 10^-8 mol/L (0.001 mg Cl2/L) for oxidants like active chlorine compounds. The calculated imprecision and inaccuracy of the method were found to be 0.4-0.9% and 0.3-0.8%, respectively, resulting in a total error of 0.5-1.2%. Based on the experiments, average imprecisions of 1.0-1.5% at c(Ox) > 10^-5 M, 1.5-3% at 10^-5 to 10^-7 M, and 4-7% below 10^-7 M were found. Redox-iodometry is a simple, precise, and time-saving substitute for the more laborious and expensive iodometric titration method, which, like other well-established colorimetric procedures, is clearly outperformed at low concentrations; this underlines the practical importance of redox-iodometry.
Simple calculation of ab initio melting curves: Application to aluminum.
Robert, Grégory; Legrand, Philippe; Arnault, Philippe; Desbiens, Nicolas; Clérouin, Jean
2015-03-01
We present a simple, fast, and promising method to compute the melting curves of materials with ab initio molecular dynamics. It is based on the two-phase thermodynamic model of Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and its improved version given by Desjarlais [Phys. Rev. E 88, 062145 (2013)]. In this model, the velocity autocorrelation function is used to calculate the contribution of the nuclear motion to the entropy of the solid and liquid phases. It is then possible to find the thermodynamic conditions of equal Gibbs free energy between these phases, which define the melting curve. A first benchmark on the face-centered-cubic melting curve of aluminum from 0 to 300 GPa demonstrates how to obtain an accuracy of 5%-10%, comparable to the most sophisticated methods, at a much lower computational cost.
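The first ingredient of the two-phase model is the velocity autocorrelation function and its Fourier transform, the vibrational density of states. A minimal sketch of that step on placeholder trajectory data; the entropy partitioning into gas-like and solid-like components is not reproduced.

    import numpy as np

    def vacf(velocities):
        """Normalized velocity autocorrelation C(t) = <v(0).v(t)> / <v.v>
        for a trajectory shaped (n_steps, n_atoms, 3)."""
        n_steps = velocities.shape[0]
        c = np.zeros(n_steps)
        for lag in range(n_steps):
            prod = np.sum(velocities[: n_steps - lag] * velocities[lag:], axis=2)
            c[lag] = prod.mean()
        return c / c[0]

    def dos(c_t, dt_fs):
        """Vibrational density of states as the (real part of the) Fourier
        transform of the VACF; frequencies in cycles/fs."""
        freqs = np.fft.rfftfreq(c_t.size, d=dt_fs)
        return freqs, np.fft.rfft(c_t).real

    # Placeholder trajectory: 500 steps, 16 atoms, random thermal velocities.
    v_traj = np.random.randn(500, 16, 3)
    c_t = vacf(v_traj)
    f, g = dos(c_t, dt_fs=1.0)
    print(c_t[:3], g[:3])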
Caught Ya! A School-Based Practical Activity to Evaluate the Capture-Mark-Release-Recapture Method
ERIC Educational Resources Information Center
Kingsnorth, Crawford; Cruickshank, Chae; Paterson, David; Diston, Stephen
2017-01-01
The capture-mark-release-recapture method provides a simple way to estimate population size. However, when used as part of ecological sampling, this method does not easily allow an opportunity to evaluate the accuracy of the calculation because the actual population size is unknown. Here, we describe a method that can be used to measure the…
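A minimal sketch of the estimator behind the activity, the classic Lincoln-Petersen formula N ≈ M·C/R, where M animals are marked in the first sample, C are captured in the second, and R of those are recaptured marks:

    def lincoln_petersen(marked_first, caught_second, recaptured):
        """Population estimate N = M * C / R from one mark-recapture round."""
        return marked_first * caught_second / recaptured

    # Example: mark 50, later catch 40 of which 10 carry marks.
    print(lincoln_petersen(50, 40, 10))  # estimated population of 200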
Numerical solutions to the time-dependent Bloch equations revisited.
Murase, Kenya; Tanki, Nobuyoshi
2011-01-01
The purpose of this study was to demonstrate a simple and fast method for solving the time-dependent Bloch equations. First, the time-dependent Bloch equations were reduced to a homogeneous linear differential equation, and then a simple equation was derived to solve it using a matrix operation. The validity of this method was investigated by comparing with the analytical solutions in the case of constant radiofrequency irradiation. There was a good agreement between them, indicating the validity of this method. As a further example, this method was applied to the time-dependent Bloch equations in the two-pool exchange model for chemical exchange saturation transfer (CEST) or amide proton transfer (APT) magnetic resonance imaging (MRI), and the Z-spectra and asymmetry spectra were calculated from their solutions. They were also calculated using the fourth/fifth-order Runge-Kutta-Fehlberg (RKF) method for comparison. There was also a good agreement between them, and this method was much faster than the RKF method. In conclusion, this method will be useful for analyzing the complex CEST or APT contrast mechanism and/or investigating the optimal conditions for CEST or APT MRI. Copyright © 2011 Elsevier Inc. All rights reserved.
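The matrix-operation solution can be sketched by absorbing the inhomogeneous relaxation term into an augmented matrix and applying a single matrix exponential; a minimal sketch assuming the standard rotating-frame Bloch matrix, with illustrative parameter values:

```python
import numpy as np
from scipy.linalg import expm

def bloch_propagate(m0, t, w1, dw, t1, t2, m_eq=1.0):
    """Propagate dM/dt = A M + b by augmenting M with a constant 1 so the
    inhomogeneous term b = (0, 0, M_eq/T1) is absorbed into one matrix
    exponential. w1: RF amplitude (rad/s), dw: offset (rad/s)."""
    A = np.array([[-1.0 / t2,  dw,        0.0,       0.0],
                  [-dw,       -1.0 / t2,  w1,        0.0],
                  [0.0,       -w1,       -1.0 / t1,  m_eq / t1],
                  [0.0,        0.0,       0.0,       0.0]])
    m_aug = np.append(m0, 1.0)
    return (expm(A * t) @ m_aug)[:3]

# Example: constant on-resonance irradiation starting from equilibrium.
print(bloch_propagate(np.array([0.0, 0.0, 1.0]), t=0.1,
                      w1=2 * np.pi * 50, dw=0.0, t1=1.0, t2=0.1))
```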
Room temperature current-voltage (I-V) characteristics of Ag/InGaN/n-Si Schottky barrier diode
NASA Astrophysics Data System (ADS)
Erdoğan, Erman; Kundakçı, Mutlu
2017-02-01
Metal-semiconductor (MS) Schottky barrier diodes (SBDs) have significant potential in integrated device technology. In the present paper, the electrical characterization of an Ag/InGaN/n-Si Schottky diode has been systematically carried out using the simple thermionic emission (TE) method and the Norde function, based on the I-V characteristics. Ag ohmic and Schottky contacts were deposited on the InGaN/n-Si film by thermal evaporation under a vacuum pressure of 1×10-5 mbar. The ideality factor, barrier height, and series resistance of the diode are determined from the I-V curve. These parameters are calculated by the TE and Norde methods, and the findings are given in a comparative manner. The results show consistency between both methods and good agreement with other results reported in the literature. The ideality factor and barrier height were determined to be 2.84 and 0.78 eV at room temperature using the simple TE method. The barrier height obtained with the Norde method is 0.79 eV.
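The TE analysis follows the standard recipe of fitting the exponential region of ln(I) versus V; a minimal sketch, in which the Richardson constant, diode area, and synthetic I-V data are illustrative assumptions rather than values from the paper:

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # temperature, K
A_STAR = 112.0   # Richardson constant for n-Si, A cm^-2 K^-2 (assumed)
AREA = 7.85e-3   # diode area, cm^2 (assumed)

def te_parameters(v, i):
    """Ideality factor and barrier height from the linear region of
    ln(I) vs V, per standard thermionic-emission analysis."""
    slope, ln_i0 = np.polyfit(v, np.log(i), 1)
    n = 1.0 / (K_B * T * slope)                          # ideality factor
    phi_b = K_B * T * np.log(AREA * A_STAR * T**2 / np.exp(ln_i0))
    return n, phi_b

# Synthetic forward-bias data consistent with n = 2.84 (illustrative only).
v = np.linspace(0.15, 0.35, 9)
i = 1e-10 * np.exp(v / (2.84 * K_B * T))
print(te_parameters(v, i))
```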
Simple Levelized Cost of Energy (LCOE) Calculator Documentation
NREL Energy Analysis
This is a simple LCOE calculator: 1) Cost and Performance: adjust the sliders to suitable values for each of the cost and performance…
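For readers without access to the slider interface, here is a sketch of a standard simple-LCOE formula of the kind such calculators implement; the parameter names, units, and example values are assumptions, not taken from the NREL page:

```python
def simple_lcoe(capital_cost, fixed_om, capacity_factor, discount_rate,
                lifetime, variable_om=0.0, fuel_cost=0.0, heat_rate=0.0):
    """Simple LCOE ($/kWh): annualize capital with a capital recovery factor
    (CRF) and spread it over annual output. capital_cost in $/kW, fixed_om
    in $/kW-yr, variable_om in $/kWh, fuel_cost in $/MMBtu, heat_rate in
    Btu/kWh."""
    crf = discount_rate * (1 + discount_rate) ** lifetime \
          / ((1 + discount_rate) ** lifetime - 1)
    return (capital_cost * crf + fixed_om) / (8760 * capacity_factor) \
           + variable_om + fuel_cost * heat_rate / 1e6

# Example: $1500/kW plant, $40/kW-yr fixed O&M, 35% capacity factor,
# 7% discount rate, 25-year life.
print(f"{simple_lcoe(1500, 40, 0.35, 0.07, 25):.3f} $/kWh")
```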
Determination of the transmission coefficients for quantum structures using FDTD method.
Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan
2011-12-01
The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators. This makes it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a specially developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.
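The idea of time-stepping the Schrödinger equation on a grid can be illustrated with a textbook staggered real/imaginary scheme (often attributed to Visscher); this is a generic sketch, not the authors' FDTD-S implementation, and every numerical parameter below is invented:

```python
import numpy as np

# 1D time-domain Schrodinger equation (hbar = m = 1): the real and
# imaginary parts of psi are leapfrogged, and the transmission coefficient
# is the probability found beyond the barrier after the packet has crossed.
nx, dx, dt = 4000, 0.1, 0.002
x = np.arange(nx) * dx
V = np.where((x > 250.0) & (x < 252.0), 1.0, 0.0)   # rectangular barrier
x0, sigma, k0 = 150.0, 10.0, 1.5                    # incident Gaussian packet
psi = np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
R, I = psi.real.copy(), psi.imag.copy()

def H(f):
    """Apply H = -0.5 d^2/dx^2 + V with a second-order finite difference."""
    out = V * f
    out[1:-1] += -0.5 * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx ** 2
    return out

for _ in range(50000):           # long enough for the packet to cross
    R += dt * H(I)               # dR/dt =  H I
    I -= dt * H(R)               # dI/dt = -H R

prob = R ** 2 + I ** 2
print("transmission ~", np.sum(prob[x > 252.0]) * dx)
```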
Summertime Temperatures in Buildings Without Air-Conditioning.
ERIC Educational Resources Information Center
Loudon, A. G.
Many modern buildings become uncomfortably warm during sunny spells in the summer, and until recently there was no simple, reliable method of assessing at the design stage whether a building would become overheated. This paper describes a method of calculating summertime temperatures which was developed at the Building Research Station, and gives…
Zombie states for description of structure and dynamics of multi-electron systems
NASA Astrophysics Data System (ADS)
Shalashilin, Dmitrii V.
2018-05-01
Canonical Coherent States (CSs) of the harmonic oscillator have been extensively used as a basis in a number of computational methods of quantum dynamics. However, generalising such techniques for fermionic systems is difficult because Fermionic Coherent States (FCSs) require the complicated algebra of Grassmann numbers, not well suited for numerical calculations. This paper introduces a coherent antisymmetrised superposition of "dead" and "alive" electronic states called here a Zombie State (ZS), which can be used in the manner of FCSs but without Grassmann algebra. Instead, for Zombie States, a very simple sign-changing rule is used in the definition of creation and annihilation operators. Then, calculation of electronic structure Hamiltonian matrix elements between two ZSs becomes very simple and a straightforward technique for time propagation of fermionic wave functions can be developed. By analogy with the existing methods based on Canonical Coherent States of the harmonic oscillator, fermionic wave functions can be propagated using a set of randomly selected Zombie States as a basis. As a proof of principle, the proposed Coupled Zombie States approach is tested on a simple example, showing that the technique is exact.
Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method
ERIC Educational Resources Information Center
Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.
2013-01-01
Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…
Elcock, Adrian H.
2013-01-01
Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman's Chebyshev polynomial method for calculating correlated random displacements, the proposed method provides an approach that is simple to program yet fast enough to make it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric protein models demonstrate that the method also works well when more structurally detailed models are used in the simulations. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method for bounded media, supplementing analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.
An evaluation of rise time characterization and prediction methods
NASA Technical Reports Server (NTRS)
Robinson, Leick D.
1994-01-01
One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion, and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule, which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm which calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown that the simple empirical rule considerably overestimates the rise time. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness. Three waveforms with identical characteristics (shock placement, amplitude, and rise time), but with different shock shapes, are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, the University of Texas at Austin, and supported by NASA Langley.
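The empirical rule itself is a one-liner:

```python
def rise_time_ms(shock_amplitude_psf):
    """The empirical '3 over P' rule discussed above: rise time in
    milliseconds is three divided by the shock amplitude in psf."""
    return 3.0 / shock_amplitude_psf

print(rise_time_ms(1.5))  # 2.0 ms for a 1.5 psf shock
```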
Exact special twist method for quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro
2016-12-01
We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performance of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performance in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms use signal-processing techniques and algebraic simplifications, demonstrating that simple modifications can dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V
2010-12-15
We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol^-1 for a test set of 120 organic molecules).
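The correction scheme amounts to an ordinary least-squares fit of the residual against the partial molar volume; a minimal sketch with toy numbers standing in for the paper's 185-molecule training set:

```python
import numpy as np

def fit_pmv_correction(dg_calc, pmv, dg_exp):
    """Fit the linear partial-molar-volume correction described above:
    dg_exp - dg_calc ~ a * pmv + b, by least squares."""
    a, b = np.polyfit(pmv, dg_exp - dg_calc, 1)
    return a, b

def corrected(dg_calc, pmv, a, b):
    """Apply the fitted correction to uncorrected 3D-RISM values."""
    return dg_calc + a * pmv + b

# Toy numbers (not the paper's data): three solutes.
dg_calc = np.array([3.1, 6.0, 1.2])    # uncorrected HFEs, kcal/mol
pmv = np.array([120.0, 210.0, 80.0])   # partial molar volumes
dg_exp = np.array([-2.0, -1.5, -3.0])  # experimental HFEs, kcal/mol
a, b = fit_pmv_correction(dg_calc, pmv, dg_exp)
print(corrected(dg_calc, pmv, a, b))
```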
Elastic and viscoelastic calculations of stresses in sedimentary basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, where we assume that the present in situ stress is a result of the entire history of the rock mass, rather than due only to the present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data in this location. This model serves as a predictor for natural fracture genesis, and expected rock fracturing from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms, and that a more complete formulation, such as this study's, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.
A simple node and conductor data generator for SINDA
NASA Technical Reports Server (NTRS)
Gottula, Ronald R.
1992-01-01
This paper presents a simple, automated method to generate NODE and CONDUCTOR DATA for thermal math models. The method uses personal computer spreadsheets to create SINDA inputs. It was developed in order to make SINDA modeling less time consuming and serves as an alternative to graphical methods. Anyone having some experience using a personal computer can easily implement this process. The user develops spreadsheets to automatically calculate capacitances and conductances based on material properties and dimensional data. The necessary node and conductor information is then taken from the spreadsheets and automatically arranged into the proper format, ready for insertion directly into the SINDA model. This technique provides a number of benefits to the SINDA user, such as a reduction in the number of hand calculations and the ability to very quickly generate a parametric set of NODE and CONDUCTOR DATA blocks. It also provides advantages over graphical thermal modeling systems by retaining the analyst's complete visibility into the thermal network, and by permitting user comments anywhere within the DATA blocks.
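The spreadsheet idea translates directly into a few lines of code; this sketch is hypothetical (the material values and the exact SINDA card layout are assumptions, and a real model would contain many more nodes):

```python
# Compute lumped capacitances and conductances from material properties and
# dimensions, then emit SINDA-like "node, temperature, capacitance" and
# "conductor, nodeA, nodeB, conductance" lines (exact card format varies).
def capacitance(rho, cp, volume):          # J/K
    return rho * cp * volume

def conductance(k, area, length):          # W/K
    return k * area / length

nodes = [(1, 20.0, capacitance(2700.0, 900.0, 1e-5)),   # small aluminum block
         (2, 20.0, capacitance(2700.0, 900.0, 1e-5))]
conductors = [(10, 1, 2, conductance(167.0, 1e-4, 0.01))]

for n, t, c in nodes:
    print(f"{n}, {t:.1f}, {c:.4f}")
for g, a, b, val in conductors:
    print(f"{g}, {a}, {b}, {val:.4f}")
```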
Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile
NASA Astrophysics Data System (ADS)
Halverson, Thomas; Poirier, Bill
2015-03-01
'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed, using two different methods: (1) a phase-space-truncated momentum-symmetrized Gaussian basis and (2) a correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented into a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm^-1.
Molecular simulation of simple fluids and polymers in nanoconfinement
NASA Astrophysics Data System (ADS)
Rasmussen, Christopher John
Prediction of phase behavior and transport properties of simple fluids and polymers confined to nanoscale pores is important to a wide range of chemical and biochemical engineering processes. A practical approach to investigate nanoscale systems is molecular simulation, specifically Monte Carlo (MC) methods. One of the most challenging problems is the need to calculate chemical potentials in simulated phases. Through the seminal work of Widom, practitioners have a powerful method for calculating chemical potentials. Yet, this method fails for dense and inhomogeneous systems, as well as for complex molecules such as polymers. In this dissertation, the gauge cell MC method, which had previously been successfully applied to confined simple fluids, was employed and extended to investigate nanoscale fluids in several key areas. Firstly, the process of cavitation (the formation and growth of bubbles) during desorption of fluids from nanopores was investigated. The dependence of cavitation pressure on pore size was determined with gauge cell MC calculations of the nucleation barriers correlated with experimental data. Additional computational studies elucidated the role of surface defects and pore connectivity in the formation of cavitation bubbles. Secondly, the gauge cell method was extended to polymers. The method was verified against the literature results and found to be significantly more efficient. It was used to examine adsorption of polymers in nanopores. These results were applied to model the dynamics of translocation, the act of a polymer threading through a small opening, which is implicated in drug packaging and delivery, and DNA sequencing. Translocation dynamics was studied as diffusion along the free energy landscape. Thirdly, we show how computer simulation of polymer adsorption could shed light on the specifics of polymer chromatography, which is a key tool for the analysis and purification of polymers. The quality of separation depends on the physico-chemical mechanisms of polymer/pore interaction. We considered liquid chromatography at critical conditions, and calculated the dependence of the partition coefficient on chain length. Finally, solvent-gradient chromatography was modeled using a statistical model of polymer adsorption. A model for predicting separation of complex polymers (with functional groups or copolymers) was developed for practical use in chromatographic separations.
Ab initio excited states from the in-medium similarity renormalization group
NASA Astrophysics Data System (ADS)
Parzuchowski, N. M.; Morris, T. D.; Bogner, S. K.
2017-04-01
We present two new methods for performing ab initio calculations of excited states for closed-shell systems within the in-medium similarity renormalization group (IMSRG) framework. Both are based on combining the IMSRG with simple many-body methods commonly used to target excited states, such as the Tamm-Dancoff approximation (TDA) and equations-of-motion (EOM) techniques. In the first approach, a two-step sequential IMSRG transformation is used to drive the Hamiltonian to a form where a simple TDA calculation (i.e., diagonalization in the space of 1p1h excitations) becomes exact for a subset of eigenvalues. In the second approach, EOM techniques are applied to the IMSRG ground-state-decoupled Hamiltonian to access excited states. We perform proof-of-principle calculations for parabolic quantum dots in two dimensions and the closed-shell nuclei 16O and 22O. We find that the TDA-IMSRG approach gives better accuracy than the EOM-IMSRG when calculations converge, but otherwise lacks the versatility and numerical stability of the latter. Our calculated spectra are in reasonable agreement with analogous EOM-coupled-cluster calculations. This work paves the way for more interesting applications of the EOM-IMSRG approach to calculations of consistently evolved observables such as electromagnetic strength functions and nuclear matrix elements, and extensions to nuclei within one or two nucleons of a closed shell by generalizing the EOM ladder operator to include particle-number nonconserving terms.
Hardness of H13 Tool Steel After Non-isothermal Tempering
NASA Astrophysics Data System (ADS)
Nelson, E.; Kohli, A.; Poirier, D. R.
2018-04-01
A direct method to calculate the tempering response of a tool steel (H13) that exhibits secondary hardening is presented. Based on the traditional method of presenting tempering response in terms of isothermal tempering, we show that the tempering response for a steel undergoing a non-isothermal tempering schedule can be predicted. Experiments comprised (1) isothermal tempering, (2) non-isothermal tempering pertaining to a relatively slow heating to process-temperature and (3) fast-heating cycles that are relevant to tempering by induction heating. After establishing the tempering response of the steel under simple isothermal conditions, the tempering response can be applied to non-isothermal tempering by using a numerical method to calculate the tempering parameter. Calculated results are verified by the experiments.
Quasi solution of radiation transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
There is uncertainty with experimental data as well as with the input data of theoretical calculations. The neutron distribution from the variational principle, which takes into account both theoretical and experimental data, is obtained to increase the accuracy and speed of neutronic calculations. The neutron imbalance in mesh cells and the discrepancy between experimentally measured and calculated functionals of the neutron distribution are simultaneously minimized. A fast, simply programmed iteration method is developed to minimize the objective functional. The method can be used in the core monitoring and control system for (a) power distribution calculations, (b) in- and ex-core detector calibration, (c) macro-cross-section or isotope distribution correction by experimental data, and (d) core and detector diagnostics.
A Simple Method to Determine the Refractive Index of Glass.
ERIC Educational Resources Information Center
Mak, Se-yuen
1988-01-01
Describes an experiment for determining the refractive index. Discusses the experimental procedure and the mathematical expression for calculating the index. Provides two geometrical diagrams and a graph for determining the index with typical data. (YP)
Computational assignment of redox states to Coulomb blockade diamonds.
Olsen, Stine T; Arcisauskaite, Vaida; Hansen, Thorsten; Kongsted, Jacob; Mikkelsen, Kurt V
2014-09-07
With the advent of molecular transistors, electrochemistry can now be studied at the single-molecule level. Experimentally, the redox chemistry of the molecule manifests itself as features in the observed Coulomb blockade diamonds. We present a simple theoretical method for explicit construction of the Coulomb blockade diamonds of a molecule. A combined quantum mechanical/molecular mechanical method is invoked to calculate redox energies and polarizabilities of the molecules, including the screening effect of the metal leads. This direct approach circumvents the need for explicit modelling of the gate electrode. From the calculated parameters the Coulomb blockade diamonds are constructed using simple theory. We offer a theoretical tool for assignment of Coulomb blockade diamonds to specific redox states in particular, and a study of chemical details in the diamonds in general. With the ongoing experimental developments in molecular transistor experiments, our tool could find use in molecular electronics, electrochemistry, and electrocatalysis.
A simple formula for estimating Stark widths of neutral lines. [of stellar atmospheres
NASA Technical Reports Server (NTRS)
Freudenstein, S. A.; Cooper, J.
1978-01-01
A simple formula for the prediction of Stark widths of neutral lines similar to the semiempirical method of Griem (1968) for ion lines is presented. This formula is a simplification of the quantum-mechanical classical path impact theory and can be used for complicated atoms for which detailed calculations are not readily available, provided that the effective position of the closest interacting level is known. The expression does not require the use of a computer. The formula has been applied to a limited number of neutral lines of interest, and the width obtained is compared with the much more complete calculations of Bennett and Griem (1971). The agreement generally is well within 50% of the published value for the lines investigated. Comparisons with other formulas are also made. In addition, a simple estimate for the ion-broadening parameter is given.
Comments on the variational modified-hypernetted-chain theory for simple fluids
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1986-02-01
The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft, as two stages in a systematic approach which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure which is formally identical to the VMHNC is described, featuring the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential and for inverting structure factor data in order to obtain the potential and the thermodynamic functions, follows from our analysis.
Calculation of the bending stresses in helicopter rotor blades
NASA Technical Reports Server (NTRS)
De Guillenchmidt, P
1951-01-01
A comparatively rapid method is presented for determining theoretically the bending stresses of helicopter rotor blades in forward flight. The method is based on the analysis of the properties of a vibrating beam, and its uniqueness lies in the simple solution of the differential equation which governs the motion of the bent blades.
Spin-1 Heisenberg ferromagnet using pair approximation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mert, Murat; Mert, Gülistan; Kılıç, Ahmet
2016-06-08
Thermodynamic properties of the spin-1 Heisenberg ferromagnet on the simple cubic lattice have been calculated using the pair approximation method. We introduce the single-ion anisotropy and the next-nearest-neighbor exchange interaction. We found that for a negative single-ion anisotropy parameter, the internal energy is positive and the heat capacity has two peaks.
A Simple Method to Control Positive Baseline Trend within Data Nonoverlap
ERIC Educational Resources Information Center
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.
2014-01-01
Nonoverlap is widely used as a statistical summary of data; however, these analyses rarely correct unwanted positive baseline trend. This article presents and validates the graph rotation for overlap and trend (GROT) technique, a hand calculation method for controlling positive baseline trend within an analysis of data nonoverlap. GROT is…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cave, Robert J., E-mail: Robert-Cave@hmc.edu; Stanton, John F., E-mail: JFStanton@gmail.com
We present a simple quasi-diabatization scheme applicable to spectroscopic studies that can be applied using any wavefunction for which one-electron properties and transition properties can be calculated. The method is based on rotation of a pair (or set) of adiabatic states to minimize the difference between the given transition property at a reference geometry of high symmetry (where the quasi-diabatic states and adiabatic states coincide) and points of lower symmetry where quasi-diabatic quantities are desired. Compared to other quasi-diabatization techniques, the method requires no special coding, facilitates direct comparison between quasi-diabatic quantities calculated using different types of wavefunctions, and is free of any selection of configurations in the definition of the quasi-diabatic states. On the other hand, the method appears to be sensitive to multi-state issues, unlike recent methods we have developed that use a configurational definition of quasi-diabatic states. Results are presented and compared with two other recently developed quasi-diabatization techniques.
NASA Astrophysics Data System (ADS)
Dias, L. G.; Shimizu, K.; Farah, J. P. S.; Chaimovich, H.
2002-09-01
We propose and demonstrate the usefulness of a method, defined as the generalized Born electronegativity equalization method (GBEEM), to estimate solvent-induced charge redistribution. The charges obtained by GBEEM for a representative series of small organic molecules were compared to PM3-CM1 charges in vacuum and in water. Linear regressions with appropriate correlation coefficients and standard deviations between the GBEEM and PM3-CM1 methods were obtained (R = 0.94, SD = 0.15, F-test = 234, N = 32 in vacuum; R = 0.94, SD = 0.16, F-test = 218, N = 29 in water). In order to test the GBEEM response when intermolecular interactions are involved, we calculated a water dimer in dielectric water using both GBEEM and PM3-CM1, and the results were similar. Hence, the method developed here is comparable to established calculation methods.
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
Huang, Huafeng; Colabello, Diane M.; Sklute, Elizabeth C.; ...
2017-04-23
The absolute absorption coefficient, α(E), is a critical design parameter for devices using semiconductors for light harvesting associated with renewable energy production, both for classic technologies such as photovoltaics and for emerging technologies such as direct solar fuel production. While α(E) is well known for many classic simple semiconductors used in photovoltaic applications, the absolute values of α(E) are typically unknown for the complex semiconductors being explored for solar fuel production due to the absence of the single crystals or crystalline epitaxial films that are needed for conventional methods of determining α(E). In this work, a simple self-referenced method for estimating both the refractive indices, n(E), and absolute absorption coefficients, α(E), for loose powder samples using diffuse reflectance data is demonstrated. In this method, the sample refractive index can be deduced by refining n to maximize the agreement between the relative absorption spectrum calculated from bidirectional reflectance data (via a Hapke transform, which depends on n) and integrating-sphere diffuse reflectance data (via a Kubelka-Munk transform, which does not depend on n). This new method can be quickly used to screen the suitability of emerging semiconductor systems for light-harvesting applications. The effectiveness of this approach is tested using the simple classic semiconductors Ge and Fe2O3 as well as the complex semiconductors La2MoO5 and La4Mo2O11. The method is shown to work well for powders with a narrow size distribution (exemplified by Fe2O3) and to be ineffective for semiconductors with a broad size distribution (exemplified by Ge). As such, it provides a means for rapidly estimating the absolute optical properties of complex solids which are only available as loose powders.
Cross Check of NOvA Oscillation Probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parke, Stephen J.; Messier, Mark D.
2018-01-12
In this note we perform a cross check of the programs used by NOvA to calculate the 3-flavor oscillation probabilities against an independent program using a different method. The comparison is performed at 6 significant figures and the agreement, $|\Delta P|/P$, is better than $10^{-5}$, as good as can be expected with 6 significant figures. In addition, a simple and accurate alternative method to calculate the oscillation probabilities is outlined and compared in the L/E range and matter density relevant for the NOvA experiment.
NASA Technical Reports Server (NTRS)
Mikulas, Martin M., Jr.; Nemeth, Michael P.; Oremont, Leonard; Jegley, Dawn C.
2011-01-01
Buckling loads for long isotropic and laminated cylinders are calculated based on Euler, Fluegge and Donnell's equations. Results from these methods are presented using simple parameters useful for fundamental design work. Buckling loads for two types of simply supported boundary conditions are calculated using finite element methods for comparison to select cases of the closed form solution. Results indicate that relying on Donnell theory can result in an over-prediction of buckling loads by as much as 40% in isotropic materials.
ERIC Educational Resources Information Center
Linton, J. Oliver
2017-01-01
There are five unique points in a star/planet system where a satellite can be placed whose orbital period is equal to that of the planet. Simple methods for calculating the positions of these points, or at least justifying their existence, are developed.
Determining Planck's Constant Using a Light-emitting Diode.
ERIC Educational Resources Information Center
Sievers, Dennis; Wilson, Alan
1989-01-01
Describes a method for making a simple, inexpensive apparatus which can be used to determine Planck's constant. Provides illustrations of a circuit diagram using one or more light-emitting diodes and a BASIC computer program for simplifying calculations. (RT)
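The usual classroom analysis equates the LED's threshold energy eV to the photon energy hc/λ; a minimal sketch of that arithmetic (the example voltage and wavelength are illustrative, not from the article):

```python
E_CHARGE = 1.602e-19   # elementary charge, C
C_LIGHT = 2.998e8      # speed of light, m/s

def planck_from_led(threshold_voltage, wavelength):
    """Estimate h by assuming the LED turn-on energy e*V equals the
    emitted photon energy h*c/lambda."""
    return E_CHARGE * threshold_voltage * wavelength / C_LIGHT

# Example: red LED, ~1.9 V threshold, 650 nm emission.
print(planck_from_led(1.9, 650e-9))  # ~6.6e-34 J s
```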
New method: calculation of magnification factor from an intracardiac marker.
Cha, S D; Incarvito, J; Maranhao, V
1983-01-01
In order to calculate a magnification factor (MF), an intracardiac marker (a pigtail catheter with markers) was evaluated using a new formula and correlated with the conventional grid method. By applying the Pythagorean theorem and trigonometry, a new formula was developed, which is (formula; see text). In an experimental study, the MF by the intracardiac markers was 0.71 +/- 0.15 (M +/- SD) and that by the grid method was 0.72 +/- 0.15, with a correlation coefficient of 0.96. In the patient study, the MF by the intracardiac markers was 0.77 +/- 0.06 and that by the grid method was 0.77 +/- 0.05. We conclude that this new method is simple and that the results were comparable to the conventional grid method at mid-chest level.
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega-Carrillo, Hector Rene; Manzanares-Acuna, Eduardo; Hernandez-Davila, Victor Martin
131I is widely used in the diagnosis and treatment of patients. If the patient is pregnant, the 131I present in the thyroid becomes a source of constant exposure for other organs and the fetus. In this study the absorbed dose in the uterus of a woman 3 months pregnant with 131I in her thyroid gland has been calculated. The dose was determined using Monte Carlo methods, for which a detailed model of the woman was developed. The dose was also calculated using a simple procedure that was then refined to include photon attenuation in the woman's organs and body. To verify these results an experiment was carried out using a neck phantom with 131I. Comparing the results, it was found that the simple calculation tends to overestimate the absorbed dose; after applying the corrections for body and organ photon attenuation, the dose is 0.14 times the Monte Carlo estimate.
The reduced transition probabilities for excited states of rare-earth and actinide even-even nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghumman, S. S.
The theoretical B(E2) ratios have been calculated using the DF, DR, and Krutov models. A simple method based on the work of Arima and Iachello is used to calculate the reduced transition probabilities within the SU(3) limit of the IBA-I framework. The reduced E2 transition probabilities from second excited states of rare-earth and actinide even-even nuclei, calculated from experimental energies and intensities from recent data, have been found to compare better with those calculated on the Krutov model and the SU(3) limit of IBA than with the DR and DF models.
Wang, Yufang; Wu, Yanzhao; Feng, Min; Wang, Hui; Jin, Qinghua; Ding, Datong; Cao, Xuewei
2008-12-01
Using a simple method, the reduced matrix method, we simplified the calculation of the phonon vibrational frequencies according to the SWNT structure and its phonon symmetry properties, and obtained the dispersion properties at the Gamma point of the Brillouin zone for all SWNTs whose diameters lie between 0.6 and 2.5 nm. The calculation time is reduced by about 2-4 orders of magnitude. A series of relationships between the diameters of SWNTs and the frequencies of the Raman- and IR-active modes is given. Several fine structures, including "glazed tile" structures, are found in the omega versus d figures, which might predict a certain macro-quantum phenomenon of the phonons in SWNTs.
NASA Astrophysics Data System (ADS)
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
Calculating the Degradation Rate of Individual Proteins Using Xenopus Extract Systems.
McDowell, Gary S; Philpott, Anna
2018-05-16
The Xenopus extract system has been used extensively as a simple, quick, and robust method for assessing the stability of proteins against proteasomal degradation. In this protocol, methods are provided for assessing the half-life of in vitro translated radiolabeled proteins using Xenopus egg or embryo extracts. © 2019 Cold Spring Harbor Laboratory Press.
Sensitivity of Lumped Constraints Using the Adjoint Method
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.
1999-01-01
Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
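For reference, the discrete form of the KS envelope (the paper uses a continuum version) can be computed in a numerically safe shifted form; the aggregation parameter ρ = 50 below is an arbitrary illustrative choice:

```python
import numpy as np

def ks_lump(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraints g_i <= 0:
    KS(g) = max(g) + log(sum(exp(rho*(g - max(g))))) / rho.
    Larger rho hugs max(g) more tightly; the shift avoids overflow."""
    gmax = np.max(g)
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

print(ks_lump(np.array([-0.2, -0.05, -0.1])))  # slightly above max(g) = -0.05
```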
NASA Astrophysics Data System (ADS)
Sánchez Úbeda, Juan Pedro; Calvache Quesada, María Luisa; Duque Calvache, Carlos; López Chicano, Manuel; Martín Rosales, Wenceslao
2013-04-01
The hydraulic properties of coastal aquifers are essential for any estimation of groundwater flow, whether by simple calculations or by modelling techniques. Slug tests or tracer tests are usually the techniques selected for resolving the uncertainties. Other methods are based on the information associated with the changes induced by tidal fluctuations in coastal zones. The Tidal Response Method is a simple technique based on two different factors: the tidal efficiency factor and the time lag of the tidal oscillation with respect to the hydraulic head oscillation it causes in the aquifer. This method was described for a homogeneous and isotropic confined aquifer; however, it is applicable to unconfined aquifers when the ratio of the maximum water table fluctuation to the saturated aquifer thickness is less than 0.02. Moreover, the tidal equations assume that the tidal signal follows a sinusoidal wave when, in reality, the tidal wave is a set of simple harmonic components. For this reason, other methods based on Fourier series have been applied in earlier studies to describe the tidal wave. Nevertheless, the Tidal Response Method represents an acceptable and useful technique in the Motril-Salobreña coastal aquifer. Transmissivity values have been calculated from recent hydraulic head data sets at the discharge zone of the Motril-Salobreña aquifer using different methods based on the tidal fluctuations and their effects on the hydraulic head. The effects of the tidal oscillation are detected in two boreholes, 132 m and 38 m deep, located 300 m from the coastline. The main difficulties in applying the method were the assumption of a confined aquifer and the variation of the effect with depth (which is not included in the tidal equations), but these problems were solved. On the one hand, the storage coefficient (S) of this unconfined aquifer was assumed to be close to confined-aquifer values because of the hydrogeological conditions at great depth and the absence of saturation changes. On the other hand, hydraulic head fluctuations due to tidal oscillations were monitored in several shallow boreholes close to the shoreline and compared with the deep ones. The transmissivity values calculated with the tidal efficiency factor in the deep boreholes are about one order of magnitude lower than the results obtained with the time lag method. Nevertheless, the application of these calculation methods based on tidal response in unconfined aquifers provides knowledge about the characteristics of the discharge zone and groundwater flow patterns, and it may be an easy and profitable alternative to traditional pumping tests.
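The abstract does not reproduce the tidal equations; for orientation, here is a minimal sketch of the standard Ferris (1951) confined-aquifer relations that connect tidal efficiency and time lag to transmissivity T and storage coefficient S (all numbers below are illustrative, not the Motril-Salobreña data):

```python
import numpy as np

def transmissivity_from_efficiency(x, s, t0, efficiency):
    """Ferris-type amplitude relation: efficiency = exp(-x*sqrt(pi*S/(t0*T))),
    solved for T. x: distance to shore (m), t0: tidal period (s)."""
    return np.pi * s * x**2 / (t0 * np.log(efficiency) ** 2)

def transmissivity_from_lag(x, s, t0, time_lag):
    """Ferris-type phase relation: time_lag = x*sqrt(t0*S/(4*pi*T)),
    solved for T. time_lag in seconds."""
    return t0 * s * x**2 / (4.0 * np.pi * time_lag**2)

# Illustrative numbers only: borehole 300 m from the shore, S = 1e-3,
# semidiurnal tide t0 = 12.42 h = 44712 s.
print(transmissivity_from_efficiency(300.0, 1e-3, 44712.0, 0.1))
print(transmissivity_from_lag(300.0, 1e-3, 44712.0, 3600.0))
```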
NASA Astrophysics Data System (ADS)
Alfianto, E.; Rusydi, F.; Aisyah, N. D.; Fadilla, R. N.; Dipojono, H. K.; Martoprawiro, M. A.
2017-05-01
This study implemented the DFT method in the C++ programming language following object-oriented programming rules (expressive software). The use of expressive software results in a simple programming structure that closely mirrors the mathematical formulas. This will make it easier for the scientific community to develop the software. We validate our software by calculating the energy band structures of silicon, carbon, and germanium in the FCC structure using the Projector Augmented Wave (PAW) method and then comparing the results with Quantum ESPRESSO calculations. This study shows that the accuracy of the software is 85% relative to Quantum ESPRESSO.
Calculation of two dimensional vortex/surface interference using panel methods
NASA Technical Reports Server (NTRS)
Maskew, B.
1980-01-01
The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-05-01
We have developed a simple method for solving the radiation transport equation, permitting us to rapidly calculate (with accuracy acceptable in practice) the diffuse reflection coefficient for a broad class of biological tissues in the spectral region of strong and weak absorption of light, and also the light flux distribution over the depth of the tissue. We show that it is feasible to use the proposed method for quantitative estimates of tissue parameters from its diffuse reflectance spectrum and also for selecting the irradiation dose which is optimal for a specific patient in laser therapy for various diseases.
Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J
2016-03-01
Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethods reliability between the two techniques were calculated using interclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (Standard Deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. II.
Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J
2016-01-01
Background: Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, "simple" and "sophisticated" normative adherence.
Methods to calculate normative adherence: Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person's characteristics. For simple normative adherence, the denominator is determined by the person's Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person's Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level.
Three illustrative cases: Case A is an example of inhaled therapy under-prescription based on Pseudomonas status, resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history, resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence.
Conclusion: Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness. The value of these indices can be tested empirically in clinical trials in which there is careful definition of treatment regimens related to key patient characteristics, alongside accurate measurement of health outcomes.
PMID:27284242
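A sketch of how the two adjustments could be computed; the dose counts and the 3-doses-per-day normative minimum below are invented for illustration and do not come from the paper:

```python
def normative_adherence(daily_doses_taken, prescribed_daily, minimum_daily):
    """Percentage adherence with the two adjustments described above: the
    numerator is capped at 100% per day so overuse cannot inflate the score,
    and the denominator is at least the normative minimum implied by the
    patient's characteristics (e.g. Pseudomonas status)."""
    denominator = max(prescribed_daily, minimum_daily)
    capped = [min(taken, denominator) for taken in daily_doses_taken]
    return 100.0 * sum(capped) / (denominator * len(daily_doses_taken))

# A week of nebulizer use: 2 doses/day prescribed, but an assumed normative
# minimum of 3/day for a chronically infected patient; one day of overuse.
print(normative_adherence([2, 2, 5, 1, 0, 2, 2],
                          prescribed_daily=2, minimum_daily=3))  # ~57.1%
```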
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Daily simple interest formula. (1) To calculate daily simple interest the following formula may be used... a payment is due on April 1 and the payment is not made until April 11, a simple interest... equation calculates simple interest on any additional days beyond a monthly increment. (3) For example, if...
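The regulation text above is elided, so the snippet below is just the generic daily simple interest formula I = P × r × d/365, consistent with the April 1/April 11 example, and not a quotation of the CFR rule:

```python
def daily_simple_interest(principal, annual_rate, days_late):
    """Generic daily simple interest on a late payment."""
    return principal * annual_rate * days_late / 365.0

# Example: a $1,000 payment due April 1 and made April 11 (10 days late)
# at a 6% annual rate.
print(f"${daily_simple_interest(1000.0, 0.06, 10):.2f}")  # $1.64
```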
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we work on the research and development of high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for solving the SCF problem, more advanced than the FILE and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and on the modified grid-free method for evaluating the pure-XC term. Using the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be obtained by simple linear-algebraic procedures in the SCF loop. We can therefore expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of a large molecule, one must not only overcome the expensive computational cost but also supply a good initial guess for safe SCF convergence. In order to prepare a precise initial guess for the macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in the CMO calculations of proteins by using the QCLO method. To simplify and semi-automate QCLO calculations, we have also developed a Python-based program, QCLObot.
Learning investment indicators through data extension
NASA Astrophysics Data System (ADS)
Dvořák, Marek
2017-07-01
Stock prices in the form of time series were analysed using univariate and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series to a multivariate representation. This method makes use of sliding windows to calculate several dozen new variables using simple statistical tools, like first and second moments, as well as more complicated statistics, like autoregression coefficients and residual analysis, followed by an optional quadratic transformation that was further used for data extension. These were used as explanatory variables in an L1-regularized (LASSO) logistic regression that tried to estimate a Buy-Sell Index (BSI) from real stock market data.
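As an illustration of the pipeline, a minimal sketch with toy random data standing in for the preprocessed prices and a simplified next-step label in place of the BSI; the window width, feature set, and regularization strength are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(log_returns, width=20):
    """Sliding-window features in the spirit described above: first and
    second moments plus a lag-1 autocorrelation coefficient per window."""
    feats, ends = [], range(width, len(log_returns))
    for t in ends:
        w = log_returns[t - width:t]
        ar1 = np.corrcoef(w[:-1], w[1:])[0, 1]
        feats.append([w.mean(), w.std(), ar1])
    return np.array(feats), np.array(list(ends))

# Toy data standing in for log-differenced stock prices.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 500)
X, idx = window_features(r)
y = (r[idx] > 0).astype(int)   # toy buy/sell label in place of the BSI

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(model.coef_)
```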
NASA Astrophysics Data System (ADS)
Donahue, William; Newhauser, Wayne D.; Ziegler, James F.
2016-09-01
Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
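The abstract does not give the model's functional form or its six parameters, so as an illustration in the same spirit here is the classic Bragg-Kleeman power-law range fit for protons in water; the fit constants are textbook approximations, not values from this paper:

```python
def range_bragg_kleeman(energy_mev, alpha=0.0022, p=1.77):
    """Approximate CSDA range (cm of water) of a proton: R = alpha * E**p.
    alpha and p are illustrative fit constants for protons in water."""
    return alpha * energy_mev ** p

print(round(range_bragg_kleeman(150.0), 1))  # ~15.6 cm, a typical clinical range
```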
Approaches to reducing photon dose calculation errors near metal implants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Jessie Y.; Followill, David S.; Howell, Reb
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.
A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs
NASA Astrophysics Data System (ADS)
Li, Lei; Zhou, Wanting; Liu, Huihua
2012-12-01
This paper presents a physics-based engineering approach to estimate the heavy-ion-induced upset cross section for 6T SRAM cells from layout and technology parameters. The approach calculates the effects of radiation with a junction photocurrent derived from device physics, and handles the problem using simple SPICE simulations. First, a standard SPICE program on a typical PC is used to predict the SPICE-simulated curve of the collected charge versus distance from the drain-body junction, using the derived junction photocurrent. The SPICE-simulated curve is then used to calculate the heavy-ion-induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is related more to a “radius of influence” around a heavy-ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The upset cross section calculated with this method is in good agreement with test results for 6T SRAM cells fabricated in a 90 nm process technology.
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
Bentzen, S M R; Knudsen, V K; Christiensen, T; Ewers, B
2016-01-01
Background: Diet has an important role in the management of diabetes. However, little is known about dietary intake in Danish diabetes patients. A food frequency questionnaire (FFQ) focusing on the nutrients most relevant in diabetes, including carbohydrates, dietary fibres and simple sugars, was developed and validated. Objectives: To examine the relative validity of nutrients calculated by a web-based food frequency questionnaire for patients with diabetes. Design: The FFQ was validated against a 4-day pre-coded food diary (FD). Intakes of nutrients were calculated. Means of intake were compared and cross-classifications of individuals according to intake were performed. To assess the agreement between the two methods, Pearson and Spearman's correlation coefficients and weighted kappa coefficients were calculated. Subjects: Ninety patients (64 with type 1 diabetes and 26 with type 2 diabetes) agreed to participate in the study. Twenty-six were excluded from the final study population. Setting: 64 volunteer diabetes patients at the Steno Diabetes Center. Results: Intakes of carbohydrates, simple sugars, dietary fibres and total energy were higher according to the FFQ than the FD. However, intakes of nutrients were classified into the same or adjacent quartiles for an average of 82% of the selected nutrients when comparing the two methods. In general, moderate agreement between the two methods was found. Conclusion: The FFQ was validated for assessment of a range of nutrients. Comparing the intakes of selected nutrients (carbohydrates, dietary fibres and simple sugars), patients were classified correctly according to low and high intakes. The FFQ is a reliable dietary assessment tool for use in research and in the evaluation of patient education for patients with diabetes. PMID:27669176
NASA Technical Reports Server (NTRS)
Evleth, E. M.
1972-01-01
Stabilities of nitrogen containing heterocyclic radicals were studied to detect radicals of the type R-N-R, and to theoretically rationalize their electronic structure. The computation of simple potential energy surfaces for ground and excited states is discussed along with the photophysical properties of indolizine. Methods of calculation and problems associated with the calculations are presented. Results, tables, diagrams, discussions, and references are included.
NASA Astrophysics Data System (ADS)
Zayed, Elsayed M. E.; Al-Nowehy, Abdul-Ghani; El-Ganaini, Shoukry; Shohib, Reham M. A.
2018-06-01
This note concerns the questionable Khater method used in the above two papers. We show by a simple calculation that the Khater method is not valid, and that the solutions of the proposed nonlinear equations in the above two papers are therefore also incorrect.
Are artificial opals non-close-packed fcc structures?
NASA Astrophysics Data System (ADS)
García-Santamaría, F.; Braun, P. V.
2007-06-01
The authors report a simple experimental method to accurately measure the volume fraction of artificial opals. The results are modeled using several methods, and some of the most common methods are found to yield very inaccurate results. Both finite-size and substrate effects play an important role in calculations of the volume fraction. The experimental results show that the interstitial pore volume is 4%-15% larger than expected for close-packed structures. Consequently, calculations performed in previous work relating the amount of material synthesized in the opal interstices to the optical properties may need revision, especially in the case of high-refractive-index materials.
Modeling the surface evapotranspiration over the southern Great Plains
NASA Technical Reports Server (NTRS)
Liljegren, J. C.; Doran, J. C.; Hubbe, J. M.; Shaw, W. J.; Zhong, S.; Collatz, G. J.; Cook, D. R.; Hart, R. L.
1996-01-01
We have developed a method to apply the Simple Biosphere Model of Sellers et al. to calculate the surface fluxes of sensible heat and water vapor at high spatial resolution over the domain of the US DOE's Cloud and Radiation Testbed (CART) in Kansas and Oklahoma. The CART, which is within the GCIP area of interest for the Mississippi River Basin, is an extensively instrumented facility operated as part of the DOE's Atmospheric Radiation Measurement (ARM) program. Flux values calculated with our method will be used to provide lower boundary conditions for numerical models to study the atmosphere over the CART domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chauhan, Chetna, E-mail: chetna.chauhan@nirmauni.ac.in; Jotania, Rajshree, E-mail: rbjotania@gmail.com
2016-05-06
The W-type barium hexaferrite was prepared using a simple heat treatment method. The precursor was calcined at 650°C for 3 hours and then slowly cooled to room temperature in order to obtain barium cobalt hexaferrite powder. The prepared powder was characterised by different experimental techniques, namely XRD, FTIR and SEM. The X-ray diffractogram of the sample shows W- and M-phases. The particle size was calculated by the Debye-Scherrer formula. The FTIR spectrum of the sample was taken at room temperature using the KBr pellet method, which confirms the formation of the hexaferrite phase. The morphological study of the hexaferrite powder was carried out by SEM analysis.
X-ray peak profile analysis of zinc oxide nanoparticles formed by simple precipitation method
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick Joaquin; Magdaluyo, Eduardo
2017-12-01
Zinc oxide (ZnO) nanoparticles were successfully synthesized by a simple precipitation method using zinc acetate and tetramethylammonium hydroxide. The synthesized ZnO nanoparticles were characterized by X-ray diffraction (XRD) analysis and transmission electron microscopy (TEM). The XRD result revealed a hexagonal wurtzite structure for the ZnO nanoparticles. The TEM image showed spherical nanoparticles with an average crystallite size of 6.70 nm. For X-ray peak profile analysis, the Williamson-Hall (W-H) and size-strain plot (SSP) methods were applied to examine the effects of crystallite size and lattice strain on the peak broadening of the ZnO nanoparticles. The crystallite sizes and lattice strains estimated by the two methods are in good agreement with each other.
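A short sketch of the two standard relations behind such an analysis; the Cu K-alpha wavelength and shape factor K = 0.9 are assumptions here, since the abstract does not give the instrument parameters:

```python
import numpy as np

WAVELENGTH = 0.15406   # nm, Cu K-alpha radiation (assumed)
K = 0.9                # Scherrer shape factor (assumed)

def scherrer_size_nm(fwhm_rad, theta_rad):
    """Crystallite size from a single peak's FWHM (instrument-corrected)."""
    return K * WAVELENGTH / (fwhm_rad * np.cos(theta_rad))

def williamson_hall(two_theta_deg, fwhm_rad):
    """W-H analysis: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
    A linear fit separates size (intercept) from strain (slope)."""
    theta = np.radians(np.asarray(two_theta_deg)) / 2.0
    y = np.asarray(fwhm_rad) * np.cos(theta)
    x = 4.0 * np.sin(theta)
    strain, intercept = np.polyfit(x, y, 1)
    return K * WAVELENGTH / intercept, strain   # (size D in nm, lattice strain)
```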
Ground-state energies of simple metals
NASA Technical Reports Server (NTRS)
Hammerberg, J.; Ashcroft, N. W.
1974-01-01
A structural expansion for the static ground-state energy of a simple metal is derived. Two methods are presented, one an approach based on single-particle band structure which treats the electron gas as a nonlinear dielectric, the other a more general many-particle analysis using finite-temperature perturbation theory. The two methods are compared, and it is shown in detail how band-structure effects, Fermi-surface distortions, and chemical-potential shifts affect the total energy. These are of special interest in corrections to the total energy beyond third order in the electron-ion interaction and hence to systems where differences in energies for various crystal structures are exceptionally small. Preliminary calculations using these methods for the zero-temperature thermodynamic functions of atomic hydrogen are reported.
Dynamic modeling of parallel robots for computed-torque control implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Codourey, A.
1998-12-01
In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
Mutual influence of molecular diffusion in gas and surface phases
NASA Astrophysics Data System (ADS)
Hori, Takuma; Kamino, Takafumi; Yoshimoto, Yuta; Takagi, Shu; Kinefuchi, Ikuya
2018-01-01
We develop molecular transport simulation methods that simultaneously deal with gas- and surface-phase diffusions to determine the effect of surface diffusion on the overall diffusion coefficients. The phenomenon of surface diffusion is incorporated into the test particle method and the mean square displacement method, which are typically employed only for gas-phase transport. It is found that for a simple cylindrical pore, the diffusion coefficients in the presence of surface diffusion calculated by these two methods show good agreement. We also confirm that both methods reproduce the analytical solution. Then, the diffusion coefficients for ink-bottle-shaped pores are calculated using the developed method. Our results show that surface diffusion assists molecular transport in the gas phase. Moreover, the surface tortuosity factor, which is known to be uniquely determined by physical structure, is influenced by the presence of gas-phase diffusion. This mutual influence of gas-phase diffusion and surface diffusion indicates that their simultaneous calculation is necessary for an accurate evaluation of the diffusion coefficients.
NASA Astrophysics Data System (ADS)
Lu, Benzhuo; Cheng, Xiaolin; Hou, Tingjun; McCammon, J. Andrew
2005-08-01
The electrostatic interaction among molecules solvated in ionic solution is governed by the Poisson-Boltzmann equation (PBE). Here the hypersingular integral technique is used in a boundary element method (BEM) for the three-dimensional (3D) linear PBE to calculate the Maxwell stress tensor on the solvated molecular surface, and then the PB forces and torques can be obtained from the stress tensor. Compared with the variational method (also in a BEM frame) that we proposed recently, this method provides an even more efficient way to calculate the full intermolecular electrostatic interaction force, especially for macromolecular systems. Thus, it may be more suitable for the application of Brownian dynamics methods to study the dynamics of protein/protein docking as well as the assembly of large 3D architectures involving many diffusing subunits. The method has been tested on two simple cases to demonstrate its reliability and efficiency, and also compared with our previous variational method used in BEM.
New method for solving inductive electric fields in the non-uniformly conducting ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.; Amm, O.; Viljanen, A.
2006-10-01
We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.
A simple formula for predicting claw volume of cattle.
Scott, T D; Naylor, J M; Greenough, P R
1999-11-01
The object of this study was to develop a simple method for accurately calculating the volume of bovine claws under field conditions. The digits of 30 slaughterhouse beef cattle were examined and the following four linear measurements taken from each pair of claws: (1) the length of the dorsal surface of the claw (Toe); (2) the length of the coronary band (CorBand); (3) the length of the bearing surface (Base); and (4) the height of the claw at the abaxial groove (AbaxGr). Measurements of claw volume using a simple hydrometer were highly repeatable (r² = 0.999), and volume could be calculated from the linear measurements using the formula: Claw Volume (cm³) = (17.192 × Base) + (7.467 × AbaxGr) + (45.270 × CorBand) − 798.5. This formula was found to be accurate (r² = 0.88) when compared to volume data derived from a hydrometer displacement procedure. The front claws occupied 54% of the total volume compared to 46% for the hind claws. Copyright 1999 Harcourt Publishers Ltd.
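The published regression translates directly into code; a sketch assuming the linear measurements are in centimetres (the input values in the example are invented for illustration, not data from the study):

```python
def claw_volume_cm3(base, abax_gr, cor_band):
    """Claw volume from the paper's regression; lengths assumed in cm.
    Note the dorsal toe length is measured but does not enter the formula."""
    return 17.192 * base + 7.467 * abax_gr + 45.270 * cor_band - 798.5

# Illustrative inputs: Base 20 cm, AbaxGr 12 cm, CorBand 15 cm.
print(round(claw_volume_cm3(20.0, 12.0, 15.0), 1))  # 314.0 cm^3
```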
Estimating Lake Volume from Limited Data: A Simple GIS Approach
Lake volume provides key information for estimating residence time or modeling pollutants. Methods for calculating lake volume have relied on dated technologies (e.g. planimeters) or used potentially inaccurate assumptions (e.g. volume of a frustum of a cone). Modern GIS provid...
Analytical Tools in School Finance Reform.
ERIC Educational Resources Information Center
Johns, R. L.
This paper discusses the problem of analyzing variations in the educational opportunities provided by different school districts and describes how to assess the impact of school finance alternatives through use of various analytical tools. The author first examines relatively simple analytical methods, including calculation of per-pupil…
Atomic Calculations with a One-Parameter, Single Integral Method.
ERIC Educational Resources Information Center
Baretty, Reinaldo; Garcia, Carmelo
1989-01-01
Presents an energy function E(p) containing a single integral and one variational parameter, alpha. Represents all two-electron integrals within the local density approximation as a single integral. Identifies this as a simple treatment for use in an introductory quantum mechanics course. (MVL)
String and Sticky Tape Experiments: Refractive Index of Liquids.
ERIC Educational Resources Information Center
Edge, R. D., Ed.
1979-01-01
Describes a simple method of measuring the refractive index of a liquid using a paper cup, a liquid, a pencil, and a ruler. Uses the ratio between the actual depth and the apparent depth of the cup to calculate the refractive index. (GA)
Solution of the neutronics code dynamic benchmark by finite element method
NASA Astrophysics Data System (ADS)
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
CALCULATION OF GAMMA SPECTRA IN A PLASTIC SCINTILLATOR FOR ENERGY CALIBRATION AND DOSE COMPUTATION.
Kim, Chankyu; Yoo, Hyunjun; Kim, Yewon; Moon, Myungkook; Kim, Jong Yul; Kang, Dong Uk; Lee, Daehee; Kim, Myung Soo; Cho, Minsik; Lee, Eunjoong; Cho, Gyuseong
2016-09-01
Plastic scintillation detectors have practical advantages in the field of dosimetry. Energy calibration of measured gamma spectra is important for dose computation, but it is not simple in plastic scintillators because of their particular response characteristics and finite resolution. In this study, the gamma spectra in a polystyrene scintillator were calculated for energy calibration and dose computation. Based on the relationship between the energy resolution and the estimated energy broadening effect in the calculated spectra, the gamma spectra were calculated simply, without many iterations. The calculated spectra were in agreement with calculations by an existing method and with measurements. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
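The central ingredient, smearing an ideal spectrum with an energy-dependent Gaussian resolution, can be sketched as follows; the FWHM model and its constants are illustrative assumptions, not the paper's fitted detector response:

```python
import numpy as np

def broaden_spectrum(energy_kev, counts, a=1.0, b=2.0):
    """Convolve an ideal spectrum with an energy-dependent Gaussian,
    FWHM(E) = a + b*sqrt(E) keV (a, b are illustrative constants)."""
    energy_kev = np.asarray(energy_kev, dtype=float)
    out = np.zeros_like(energy_kev)
    for e0, n in zip(energy_kev, counts):
        if n == 0:
            continue
        sigma = (a + b * np.sqrt(e0)) / 2.355     # FWHM -> standard deviation
        g = np.exp(-0.5 * ((energy_kev - e0) / sigma) ** 2)
        out += n * g / g.sum()                    # conserve total counts
    return out
```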
The use of National Weather Service Data to Compute the Dose to the MEOI.
Vickers, Linda
2018-05-01
The Turner method is the "benchmark method" for computing the stability class that is used to compute the X/Q (s m⁻³). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent because all of the data, calculations, and results are visible for verification and validation against the published literature.
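For orientation, the daytime half of the widely tabulated Pasquill-Gifford scheme that Turner's workbook method refines can be written as a simple lookup; this sketch collapses split classes such as A-B to the more unstable letter and omits Turner's night-time net-radiation-index rules:

```python
# Simplified daytime Pasquill-Gifford table; columns are insolation levels.
PG_DAY = {  # (wind lo, wind hi) m/s: (strong, moderate, slight)
    (0.0, 2.0):  ("A", "A", "B"),
    (2.0, 3.0):  ("A", "B", "C"),
    (3.0, 5.0):  ("B", "B", "C"),
    (5.0, 6.0):  ("C", "C", "D"),
    (6.0, 99.0): ("C", "D", "D"),
}

def stability_class(wind_ms, insolation):
    """Daytime stability class from wind speed and insolation strength."""
    col = {"strong": 0, "moderate": 1, "slight": 2}[insolation]
    for (lo, hi), classes in PG_DAY.items():
        if lo <= wind_ms < hi:
            return classes[col]
    raise ValueError("wind speed out of range")

print(stability_class(4.0, "moderate"))  # B
```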
A combined representation method for use in band structure calculations. 1: Method
NASA Technical Reports Server (NTRS)
Friedli, C.; Ashcroft, N. W.
1975-01-01
A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.
An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore
Jackson, J.C.; Ericksent, G.E.
1997-01-01
Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.
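A sketch of the normalization step such a program performs, assuming per-mineral sensitivity factors have been obtained from the precise calibration the abstract requires; the mineral names and numbers below are illustrative:

```python
def semiquant_xrd(intensities, calib):
    """Relative mineral abundances (%) from diagnostic-peak intensities.
    calib holds per-mineral sensitivity factors from pure-phase patterns."""
    raw = {m: i / calib[m] for m, i in intensities.items()}
    total = sum(raw.values())
    return {m: 100.0 * v / total for m, v in raw.items()}

print(semiquant_xrd({"halite": 820.0, "nitratine": 410.0},
                    {"halite": 1.9, "nitratine": 1.2}))
```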
Gao, Yi Qin
2008-04-07
Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
Caffrey, Emily A; Johansen, Mathew P; Higley, Kathryn A
2015-10-01
Radiological dosimetry for nonhuman biota typically relies on calculations that utilize the Monte Carlo simulations of simple, ellipsoidal geometries with internal radioactivity distributed homogeneously throughout. In this manner it is quick and easy to estimate whole-body dose rates to biota. Voxel models are detailed anatomical phantoms that were first used for calculating radiation dose to humans, which are now being extended to nonhuman biota dose calculations. However, if simple ellipsoidal models provide conservative dose-rate estimates, then the additional labor involved in creating voxel models may be unnecessary for most scenarios. Here we show that the ellipsoidal method provides conservative estimates of organ dose rates to small mammals. Organ dose rates were calculated for environmental source terms from Maralinga, the Nevada Test Site, Hanford and Fukushima using both the ellipsoidal and voxel techniques, and in all cases the ellipsoidal method yielded more conservative dose rates by factors of 1.2-1.4 for photons and 5.3 for beta particles. Dose rates for alpha-emitting radionuclides are identical for each method as full energy absorption in source tissue is assumed. The voxel procedure includes contributions to dose from organ-to-organ irradiation (shown here to comprise 2-50% of total dose from photons and 0-93% of total dose from beta particles) that is not specifically quantified in the ellipsoidal approach. Overall, the voxel models provide robust dosimetry for the nonhuman mammals considered in this study, and though the level of detail is likely extraneous to demonstrating regulatory compliance today, voxel models may nevertheless be advantageous in resolving ongoing questions regarding the effects of ionizing radiation on wildlife.
Levelized Cost of Energy Calculator | Energy Analysis | NREL
The levelized cost of energy (LCOE) calculator provides a simple calculator for both utility-scale and distributed generation technologies; further cost details would need to be included for a thorough analysis. To estimate a simple cost of energy, slider controls set the input parameters.
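A minimal LCOE sketch in the same spirit, annuitizing capital with a capital recovery factor; the plant numbers in the example are hypothetical, not NREL data:

```python
def lcoe(capital_cost, fixed_om, annual_mwh, rate, years, variable_om=0.0):
    """Simple LCOE in $/MWh: annuitized capital plus O&M over annual output."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capital_cost * crf + fixed_om) / annual_mwh + variable_om

# Hypothetical 100 MW plant: $150M capital, $3M/yr fixed O&M,
# 35% capacity factor -> 306,600 MWh/yr, 7% discount rate, 25 years.
print(round(lcoe(150e6, 3e6, 306_600, 0.07, 25), 1))  # ~51.8 $/MWh
```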
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
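As a concrete instance of the t-test case, the normal-approximation formula for two independent groups can be coded directly; the ejection-fraction numbers are illustrative, not from a cited study:

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Two-sample, two-sided comparison of means (normal approximation):
    n = 2 * sd^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2 per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sd * z / delta) ** 2)

# Detecting a 5-point difference in ejection fraction (SD 10 points):
print(n_per_group(delta=5.0, sd=10.0))  # 63 patients per group
```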
Infinitely Dilute Partial Molar Properties of Proteins from Computer Simulation
2015-01-01
A detailed understanding of temperature and pressure effects on an infinitely dilute protein’s conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method’s feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages. PMID:25325571
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide
2008-07-08
For seismic design, ductility-related force modification factors are named the R factor in the Uniform Building Code of the US, the q factor in Eurocode 8, and the Ds factor (inverse of R) in the Japanese Building Code. These ductility-related force modification factors appear in those codes for each type of shear element. Some constructions use various types of shear walls that have different ductility, especially for retrofit or re-strengthening. In these cases, engineers struggle to decide the force modification factors of the construction. To solve this problem, a new method to calculate the lateral strengths of stories for simple shear wall systems is proposed, named the 'Stiffness-Potential Energy Addition Method' in this paper. This method uses two design lateral strengths for each type of shear wall, in the damage limit state and the safety limit state. The two lateral strengths of stories in both limit states are calculated from these two design lateral strengths for each type of shear wall. The calculated strengths have the same quality as values obtained by the strength addition method using many steps of load-deformation data of shear walls. A new method to calculate ductility factors, based on the new method to calculate lateral strengths of stories, is also proposed in this paper. This method can solve the problem of obtaining ductility factors of stories with shear walls of different ductility.
Exciton Absorption Spectra by Linear Response Methods:Application to Conjugated Polymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosquera, Martin A.; Jackson, Nicholas E.; Fauvell, Thomas J.
The theoretical description of the time evolution of excitons requires, as an initial step, the calculation of their spectra, which has been inaccessible to most users due to the high computational scaling of conventional algorithms and accuracy issues caused by common density functionals. Previously (J. Chem. Phys. 2016, 144, 204105), we developed a simple method that resolves these issues. Our scheme is based on a two-step calculation in which a linear-response TDDFT calculation is used to generate orbitals perturbed by the excitonic state, and then a second linear-response TDDFT calculation is used to determine the spectrum of excitations relative to the excitonic state. Herein, we apply this theory to study near-infrared absorption spectra of excitons in oligomers of the ubiquitous conjugated polymers poly(3-hexylthiophene) (P3HT), poly(2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV), and poly(benzodithiophene-thieno[3,4-b]thiophene) (PTB7). For P3HT and MEH-PPV oligomers, the calculated intense absorption bands converge at the longest wavelengths for 10 monomer units, and show strong consistency with experimental measurements. The calculations confirm that the exciton spectral features in MEH-PPV overlap with those of bipolaron formation. In addition, our calculations identify the exciton absorption bands in transient absorption spectra measured by our group for oligomers (1, 2, and 3 units) of PTB7. For all of the cases studied, we report the dominant orbital excitations contributing to the optically active excited state-excited state transitions, and suggest a simple rule to identify absorption peaks at the longest wavelengths. We suggest our methodology could be considered for further developments in theoretical transient spectroscopy to include nonadiabatic effects and coherences, and to describe the formation of species such as charge-transfer states and polaron pairs.
New approach to analyzing soil-building systems
Safak, E.
1998-01-01
A new method of analyzing seismic response of soil-building systems is introduced. The method is based on the discrete-time formulation of wave propagation in layered media for vertically propagating plane shear waves. Buildings are modeled as an extension of the layered soil media by assuming that each story in the building is another layer. The seismic response is expressed in terms of wave travel times between the layers, and the wave reflection and transmission coefficients at layer interfaces. The calculation of the response is reduced to a pair of simple finite-difference equations for each layer, which are solved recursively starting from the bedrock. Compared with the commonly used vibration formulation, the wave propagation formulation provides several advantages, including the ability to incorporate soil layers, simplicity of the calculations, improved accuracy in modeling the mass and damping, and better tools for system identification and damage detection.
Drell-Yan Lepton pair production at NNLO QCD with parton showers
Hoeche, Stefan; Li, Ye; Prestel, Stefan
2015-04-13
We present a simple approach to combine NNLO QCD calculations and parton showers, based on the UNLOPS technique. We apply the method to the computation of Drell-Yan lepton-pair production at the Large Hadron Collider. We comment on possible improvements and intrinsic uncertainties.
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
Analysis of Franck-Condon factors for CO+ molecule using the Fourier Grid Hamiltonian method
NASA Astrophysics Data System (ADS)
Syiemiong, Arnestar; Swer, Shailes; Jha, Ashok Kumar; Saxena, Atul
2018-04-01
Franck-Condon factors (FCFs) are important parameters that play a central role in determining the intensities of the vibrational bands in electronic transitions. In this paper, we illustrate the Fourier Grid Hamiltonian (FGH) method, a relatively simple method to calculate the FCFs. The FGH method is used for calculating the vibrational eigenvalues and eigenfunctions of bound electronic states of diatomic molecules. The vibrational wave functions obtained for the ground and excited states are used to calculate the vibrational overlap integrals and then the FCFs. In this computation, we used the Morse potential and a bi-exponential potential model for constructing and diagonalizing the molecular Hamiltonians. The effects of a change in the equilibrium internuclear distance (xe), the dissociation energy (De), and the nature of the excited-state electronic energy curve on the FCFs have been determined. Here we present our work on the qualitative analysis of Franck-Condon factors using the Fourier Grid Hamiltonian method.
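A compact FGH sketch in atomic units: the kinetic operator is applied spectrally with FFTs, the grid eigenvectors give vibrational wave functions, and FCFs follow from squared overlaps. The Morse parameters and reduced mass below are illustrative placeholders, not CO⁺ spectroscopic constants:

```python
import numpy as np

def fgh_levels(x, v, mass, nstates=4):
    """Fourier Grid Hamiltonian (atomic units): diagonalize H = T + V on a
    uniform grid, with T built column-by-column via FFTs."""
    n, dx = x.size, x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    T = np.fft.ifft(np.fft.fft(np.eye(n), axis=0)
                    * (k**2 / (2.0 * mass))[:, None], axis=0).real
    E, C = np.linalg.eigh(T + np.diag(v))
    return E[:nstates], C[:, :nstates] / np.sqrt(dx)   # grid-normalized psi

x = np.linspace(1.2, 6.0, 512)                          # bohr
morse = lambda De, a, xe: De * (1.0 - np.exp(-a * (x - xe)))**2
mu = 12500.0                                            # illustrative reduced mass
E0, psi0 = fgh_levels(x, morse(0.30, 1.2, 2.0), mu)     # lower-state curve
E1, psi1 = fgh_levels(x, morse(0.25, 1.1, 2.2), mu)     # upper-state curve
fcf = (psi0.T @ psi1 * (x[1] - x[0]))**2                # |<v''|v'>|^2 matrix
```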
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
A multispectral imaging approach for diagnostics of skin pathologies
NASA Astrophysics Data System (ADS)
Lihacova, Ilze; Derjabo, Aleksandrs; Spigulis, Janis
2013-06-01
A noninvasive multispectral imaging method was applied to the diagnostics of different skin pathologies such as nevi, basal cell carcinoma, and melanoma. A melanoma diagnostic parameter using three spectral bands (540 nm, 650 nm and 950 nm) was developed and calculated for nevi, melanoma and basal cell carcinoma. A simple multispectral diagnostic device was built and applied to skin assessment. The development and application of the multispectral diagnostic method are described further in this article.
Production of bismuth-204 for medical use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinsley, M.T.; Lebowitz, E.; Baranosky, J.
1973-11-01
A method is described for producing practical quantities of high-purity ²⁰⁴Bi by the ²⁰⁶Pb(p,3n)²⁰⁴Bi reaction. A simple electrolytic separation method with good yield has been developed. The cross section for the above reaction was calculated for 32-MeV protons. Decay data for ²⁰⁴Bi-²⁰⁴ᵐPb equilibrium samples are also reported. (auth)
SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, owing to the lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in the case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: (1) calculate D_model using CCCS; (2) calculate D_ΔDRT using ΔDRT; (3) combine: D = D_model + D_ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement of the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavignet, A.A.; Wick, C.J.
In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
A simplified design of the staggered herringbone micromixer for practical applications
Du, Yan; Zhang, Zhiyi; Yim, ChaeHo; Lin, Min; Cao, Xudong
2010-01-01
We demonstrated a simple method for the device design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration versus channel length profiles, and used mixing completion length Lm as the only parameter to evaluate the performance of device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application. Good agreement was found between the optimization and the experimental data. Since Lm is straightforward, easily defined and calculated parameter for characterization of mixing performance, this method for designing micromixers is simple and effective for practical applications. PMID:20697584
Visualization of atomic-scale phenomena in superconductors: application to FeSe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choubey, Peayush; Berlijn, Tom; Kreisel, Andreas
Here we propose a simple method of calculating inhomogeneous, atomic-scale phenomena in superconductors which makes use of the wave function information traditionally discarded in the construction of tight-binding models used in the Bogoliubov-de Gennes equations. The method uses symmetry-based first-principles Wannier functions to visualize the effects of superconducting pairing on the distribution of electronic states over atoms within a crystal unit cell. Local symmetries lower than the global lattice symmetry can thus be exhibited as well, rendering theoretical comparisons with scanning tunneling spectroscopy data much more useful. As a simple example, we discuss the geometric dimer states observed near defects in superconducting FeSe.
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
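One of the listed corrections, the leading-order image-charge term for a charged defect in a cubic supercell (the Makov-Payne form), is simple enough to sketch directly; this is a generic illustration, not necessarily the exact scheme the package implements:

```python
def makov_payne_correction_ev(q, madelung, eps, L):
    """Leading-order image-charge correction for a charged defect in a
    cubic supercell: E_corr = q^2 * alpha_M / (2 * eps * L) in hartree
    (atomic units, L in bohr), converted to eV."""
    return q**2 * madelung / (2.0 * eps * L) * 27.2114

# A q=+2 defect in a ~10-angstrom (18.897-bohr) cubic cell with eps = 10;
# 2.8373 is the Madelung constant of a simple-cubic point-charge array.
print(round(makov_payne_correction_ev(2, 2.8373, 10.0, 18.897), 2))  # ~0.82 eV
```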
Icing Branch Current Research Activities in Icing Physics
NASA Technical Reports Server (NTRS)
Vargas, Mario
2009-01-01
Current development: A grid-block transformation scheme, which allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities, has been developed. A simple ice-crystal and sand-particle bouncing scheme has been included, and an SLD splashing model based on that developed by William Wright for the LEWICE 3.2.2 software has been added. A new area-based collection efficiency algorithm will be incorporated that calculates trajectories from inflow block boundaries to outflow block boundaries. This method will be used for calculating and passing collection-efficiency data between blade rows for turbomachinery calculations.
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
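The simplest approach the review describes, inflating an individually randomized sample size by the design effect, is a one-line calculation; the example numbers are illustrative:

```python
import math

def crt_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually-randomized sample size per arm by the design
    effect DE = 1 + (m - 1) * ICC, then convert to clusters per arm."""
    deff = 1 + (cluster_size - 1) * icc
    n_total = math.ceil(n_individual * deff)
    return n_total, math.ceil(n_total / cluster_size)

# 126 patients per arm under individual randomization, 20 per cluster,
# ICC = 0.05 -> design effect 1.95:
print(crt_sample_size(126, 20, 0.05))  # (246, 13) -> 246 patients, 13 clusters
```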
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines needs a large number of reference substances to identify chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between the measured retention time (tR) and the predicted tR in some cases. It is therefore useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated for two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, but more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
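The two-point prediction step reduces to a line through the retention times of the two reference substances; a sketch in which the retention times are invented for illustration:

```python
def lctrs_predict(tr_std, ref_std, ref_local):
    """Predict a compound's retention time on a local column from its
    standard-column value, via a line through two reference substances.

    ref_std:   (tR_ref1, tR_ref2) on the standard column
    ref_local: (tR_ref1, tR_ref2) measured on the local column
    """
    slope = (ref_local[1] - ref_local[0]) / (ref_std[1] - ref_std[0])
    return ref_local[0] + slope * (tr_std - ref_std[0])

# References eluting at 8.2 and 21.5 min (standard) vs 7.9 and 20.6 min
# (local) place a 15.0-min standard peak at about 14.4 min locally:
print(round(lctrs_predict(15.0, (8.2, 21.5), (7.9, 20.6)), 1))
```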
The time-resolved photoelectron spectrum of toluene using a perturbation theory approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richings, Gareth W.; Worth, Graham A., E-mail: g.a.worth@bham.ac.uk
A theoretical study of the intra-molecular vibrational-energy redistribution of toluene, using time-resolved photo-electron spectra calculated with nuclear quantum dynamics and a simple two-mode model, is presented. Calculations have been carried out with the multi-configuration time-dependent Hartree method at three levels of approximation for the calculation of the spectra. The first is a full quantum dynamics simulation with a discretisation of the continuum wavefunction of the ejected electron, whilst the second uses first-order perturbation theory to calculate the wavefunction of the ion. Both methods rely on the explicit inclusion of both the pump and probe laser pulses. The third method includes only the pump pulse and generates the photo-electron spectrum by projection of the pumped wavepacket onto the ion potential energy surface, followed by evaluation of the Fourier transform of the autocorrelation function of the subsequently propagated wavepacket. The calculations have been used to study the periodic population flow between the 6a and 10b16b modes in the S1 excited state and are compared with recent experimental data. We obtain results in excellent agreement with experiment and note the efficiency of the perturbation method.
NASA Astrophysics Data System (ADS)
Arce, Julio Cesar
1992-01-01
This work focuses on time-dependent quantum theory and methods for the study of the spectra and dynamics of atomic and molecular systems. Specifically, we have addressed the following two problems: (i) Development of a time-dependent spectral method for the construction of spectra of simple quantum systems. This includes the calculation of eigenenergies, the construction of bound and continuum eigenfunctions, and the calculation of photo cross-sections. Computational applications include the quadrupole photoabsorption spectra and dissociation cross-sections of molecular hydrogen from various vibrational states in its ground electronic potential-energy curve. This method is seen to provide an advantageous alternative, both from the computational and the conceptual points of view, to existing standard methods. (ii) Explicit time-dependent formulation of photoabsorption processes. Analytical solutions of the time-dependent Schrödinger equation are constructed and employed for the calculation of probability densities, momentum distributions, fluxes, transition rates, expectation values and correlation functions. These quantities are seen to establish the link between the dynamics and the calculated, or measured, spectra and cross-sections, and to clarify the dynamical nature of the excitation, transition and ejection processes. Numerical calculations on atomic and molecular hydrogen corroborate and complement the previous results, allowing the identification of different regimes during the photoabsorption process.
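The time-dependent spectral idea can be illustrated with a minimal 1D sketch (atomic units, harmonic test potential, split-operator propagation): propagate a wavepacket, record the autocorrelation function C(t) = <psi(0)|psi(t)>, and Fourier transform it so that peaks appear at the eigenenergies. This is a generic textbook construction under those stated assumptions, not the author's code.

```python
import numpy as np

n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2                                 # harmonic test potential, hbar = m = 1
psi0 = np.exp(-(x - 1.0) ** 2)                 # displaced Gaussian: several states excited
psi0 /= np.sqrt(np.trapz(np.abs(psi0) ** 2, x))

dt, nsteps = 0.02, 4096
expV = np.exp(-0.5j * V * dt)                  # half-step potential propagator
expT = np.exp(-0.5j * k**2 * dt)               # full kinetic step in momentum space
psi, corr = psi0.astype(complex), []
for _ in range(nsteps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    corr.append(np.trapz(psi0.conj() * psi, x))    # autocorrelation C(t)

spectrum = np.abs(np.fft.ifft(np.array(corr)))     # peaks at eigenenergies E_n = n + 1/2
omega = 2 * np.pi * np.fft.fftfreq(nsteps, d=dt)
print(omega[np.argmax(spectrum[: nsteps // 2])])   # dominant peak, expected near 0.5
```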
Inelastic transport theory from first principles: Methodology and application to nanoscale devices
NASA Astrophysics Data System (ADS)
Frederiksen, Thomas; Paulsson, Magnus; Brandbyge, Mads; Jauho, Antti-Pekka
2007-05-01
We describe a first-principles method for calculating electronic structure, vibrational modes and frequencies, electron-phonon couplings, and inelastic electron transport properties of an atomic-scale device bridging two metallic contacts under nonequilibrium conditions. The method extends the density-functional codes SIESTA and TRANSIESTA that use atomic basis sets. The inelastic conductance characteristics are calculated using the nonequilibrium Green’s function formalism, and the electron-phonon interaction is addressed with perturbation theory up to the level of the self-consistent Born approximation. While these calculations often are computationally demanding, we show how they can be approximated by a simple and efficient lowest order expansion. Our method also addresses effects of energy dissipation and local heating of the junction via detailed calculations of the power flow. We demonstrate the developed procedures by considering inelastic transport through atomic gold wires of various lengths, thereby extending the results presented in Frederiksen [Phys. Rev. Lett. 93, 256601 (2004)]. To illustrate that the method applies more generally to molecular devices, we also calculate the inelastic current through different hydrocarbon molecules between gold electrodes. Both for the wires and the molecules our theory is in quantitative agreement with experiments, and characterizes the system-specific mode selectivity and local heating.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Daily, Michael D.; Baker, Nathan A.
2015-12-01
We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to an acetylcholinesterase monomer. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) boundary condition, is considered on the reactive boundaries. This new boundary condition treatment allows for the analysis of enzymes with "imperfect" reaction rates. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.
Simplified method for the calculation of irregular waves in the coastal zone
NASA Astrophysics Data System (ADS)
Leont'ev, I. O.
2011-04-01
A method is suggested for estimating the wave parameters along a given bottom profile. It takes into account the principal processes influencing waves in the coastal zone: transformation, refraction, bottom friction, and breaking. A constant mean value of the friction coefficient can be used under sandy-shore conditions. Wave breaking is interpreted in terms of the concept of a limiting wave height at a given depth. The mean and root-mean-square wave heights are determined from the height distribution function, which is transformed by the effect of breaking. Verification of the method against field data shows that the calculated results reproduce the observed variations of wave height over a wide range of conditions, including profiles with underwater bars. Deviations from the calculated values mostly do not exceed 25%, and the mean square error is 11%. The method requires no preliminary tuning and can be implemented as a relatively simple calculator accessible even to an inexperienced user.
Performance of quantum Monte Carlo for calculating molecular bond lengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au
2016-03-28
This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
Barringer, J.L.; Johnsson, P.A.
1996-01-01
Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
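The Gran linearization itself is simple enough to sketch: past the equivalence point of a strong-acid titration, the Gran function F = (V0 + V)·10^(−pH) is linear in titrant volume, and its x-intercept is the equivalence volume. The data below are synthetic and the function name is hypothetical; Gran (1952) and this report give the full treatment, including the acidity case.

```python
import numpy as np

def gran_alkalinity(v0, v_acid, ph, c_acid):
    """Estimate alkalinity (eq/L) from post-equivalence strong-acid titration points."""
    F = (v0 + v_acid) * 10.0 ** (-ph)            # Gran function, linear in v past v_eq
    slope, intercept = np.polyfit(v_acid, F, 1)  # fit the linear branch
    v_eq = -intercept / slope                    # x-intercept = equivalence volume
    return v_eq * c_acid / v0

# synthetic post-equivalence points: 50 mL sample, 0.02 M HCl titrant (made-up data)
v = np.array([4.0, 4.5, 5.0, 5.5])
ph = np.array([3.43, 3.26, 3.14, 3.05])
print(gran_alkalinity(50.0, v, ph, 0.02))        # -> about 1.2e-3 eq/L
```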
Introductory Linear Regression Programs in Undergraduate Chemistry.
ERIC Educational Resources Information Center
Gale, Robert J.
1982-01-01
Presented are simple programs in BASIC and FORTRAN to apply the method of least squares. They calculate gradients and intercepts and express errors as standard deviations. An introduction of undergraduate students to such programs in a chemistry class is reviewed, and issues instructors should be aware of are noted. (MP)
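A modern equivalent of the BASIC/FORTRAN programs described, sketched here in Python: gradient, intercept, and their standard deviations from the usual closed-form least-squares expressions (this is the standard textbook treatment, not the article's listing).

```python
import math

def linear_least_squares(x, y):
    """Gradient, intercept and their standard deviations for y = a*x + b."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    d = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / d               # gradient
    b = (sxx * sy - sx * sxy) / d             # intercept
    resid = [yi - (a * xi + b) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    return a, b, math.sqrt(n * s2 / d), math.sqrt(sxx * s2 / d)

print(linear_least_squares([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1]))
```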
Using a Simple Optical Rangefinder To Teach Similar Triangles.
ERIC Educational Resources Information Center
Cuicchi, Paul M.; Hutchison, Paul S.
2003-01-01
Describes how the concept of similar triangles was taught using an optical method of estimating large distances as a corresponding activity. Includes the derivation of a formula to calculate one source of measurement error and is a nice exercise in the use of the properties of similar triangles. (Author/NB)
Multiple Contact Dates and SARS Incubation Periods
2004-01-01
Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
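The spreadsheet logic is simple enough to restate as a short sketch (with hypothetical dates): given several possible contact dates, the incubation period is bounded below by onset minus the last contact and above by onset minus the first contact.

```python
from datetime import date

def incubation_bounds(contacts, onset):
    """Possible incubation-period range (days) given multiple contact dates."""
    shortest = (onset - max(contacts)).days   # exposure occurred at the last contact
    longest = (onset - min(contacts)).days    # exposure occurred at the first contact
    return shortest, longest

contacts = [date(2003, 3, 1), date(2003, 3, 4), date(2003, 3, 6)]
print(incubation_bounds(contacts, onset=date(2003, 3, 12)))  # -> (6, 11)
```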
Simple computer method provides contours for radiological images
NASA Technical Reports Server (NTRS)
Newell, J. D.; Keller, R. A.; Baily, N. A.
1975-01-01
The computer is provided with information concerning boundaries in the total image. The gradient of each point in the digitized image is calculated with the aid of a threshold technique; a set of algorithms is then invoked to reduce the number of gradient elements, retaining only the major ones for definition of the contour.
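A minimal sketch of the gradient-threshold step described, using NumPy. The threshold value and test image are made up, and the original algorithms for thinning the retained gradient elements are not reproduced.

```python
import numpy as np

def contour_candidates(image, threshold):
    """Keep only the strongest gradient elements as contour candidates."""
    gy, gx = np.gradient(image.astype(float))   # row- and column-direction gradients
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold                # boolean mask of major gradient elements

img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0                       # bright square on a dark background
print(contour_candidates(img, threshold=20.0).sum())  # points along the square's edge
```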
DOE Office of Scientific and Technical Information (OSTI.GOV)
Traino, A. C.; Xhafa, B.; Sezione di Fisica Medica, U.O. Fisica Sanitaria, Azienda Ospedaliero-Universitaria Pisana, via Roma n. 67, Pisa 56125
2009-04-15
One of the major challenges to more widespread use of individualized, dosimetry-based radioiodine treatment of Graves' disease is the development of a reasonably fast, simple, and cost-effective method to measure thyroidal ¹³¹I kinetics in patients. Even though the fixed-activity administration method does not optimize the therapy, often giving too high or too low a dose to the gland, it provides effective treatment for almost 80% of patients without consuming excessive time and resources. In this article two simple methods for evaluating the kinetics of ¹³¹I in the thyroid gland are presented and discussed. The first is based on two measurements 4 and 24 h after a diagnostic ¹³¹I administration, and the second on one measurement 4 h after such an administration together with a linear correlation between this measurement and the maximum uptake in the thyroid. The thyroid absorbed dose calculated by each of the two methods is compared to that calculated by a more complete ¹³¹I kinetics evaluation, based on seven thyroid uptake measurements for 35 patients at various times after the therapy administration. There are differences between the thyroid absorbed doses derived by each of the two simpler methods and the "reference" value (derived from the more complete uptake measurements following the therapeutic ¹³¹I administration): 20% median and 40% 90th-percentile differences for the first method (based on two thyroid uptake measurements at 4 and 24 h after ¹³¹I administration), and 25% median and 45% 90th-percentile differences for the second method (based on one measurement at 4 h post-administration). Predictably, although relatively fast and convenient, neither of these simpler methods appears to be as accurate as thyroid dose estimates based on more complete kinetic data.
Numerical noise prediction in fluid machinery
NASA Astrophysics Data System (ADS)
Pantle, Iris; Magagnato, Franco; Gabi, Martin
2005-09-01
Numerical methods have become increasingly important in the design and optimization of fluid machinery. However, where noise emission is concerned, standardized prediction methods combining flow and acoustic optimization are hard to find. Several numerical field methods for sound calculation have been developed. Owing to the complexity of the flows considered, approaches must be chosen that avoid exhaustive computation. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent large eddy simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties that arise in proceeding from open to closed rotors and from gas to liquid are discussed.
Nakagawa, Yoshiaki; Takemura, Tadamasa; Yoshihara, Hiroyuki; Nakagawa, Yoshinobu
2011-04-01
A hospital director must estimate the revenues and expenses not only of the hospital as a whole but also of each clinical division in order to determine the proper management strategy. The new prospective payment system based on the Diagnosis Procedure Combination (DPC/PPS), introduced in 2003, has made the attribution of revenues and expenses to each clinical department very complicated because of the intricate interplay between the overall (blanket) component and fee-for-service (FFS) payments. Few reports have so far presented a programmatic method for calculating medical costs and financial balance. A simple method, based on personnel cost, has been devised for calculating medical costs and financial balance. Using this method, one individual was able to complete the calculations for a hospital with 535 beds and 16 clinical departments without using the central hospital computer system.
Computation of the dipole moments of proteins.
Antosiewicz, J
1995-10-01
A simple and computationally feasible procedure for the calculation of net charges and dipole moments of proteins at arbitrary pH and salt conditions is described. The method is intended to provide data that may be compared to the results of transient electric dichroism experiments on protein solutions. The procedure consists of three major steps: (i) calculation of self energies and interaction energies for ionizable groups in the protein by using the finite-difference Poisson-Boltzmann method, (ii) determination of the position of the center of diffusion (to which the calculated dipole moment refers) and the extinction coefficient tensor for the protein, and (iii) generation of the equilibrium distribution of protonation states of the protein by a Monte Carlo procedure, from which mean and root-mean-square dipole moments and optical anisotropies are calculated. The procedure is applied to 12 proteins. It is shown that it gives hydrodynamic and electrical parameters for proteins in good agreement with experimental data.
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the solvent contributions determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
Boundary condition computational procedures for inviscid, supersonic steady flow field calculations
NASA Technical Reports Server (NTRS)
Abbett, M. J.
1971-01-01
Results are given of a comparative study of numerical procedures for computing solid-wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with the complications that arise for three-dimensional and time-dependent flows.
Gifford, Katherine A; Phillips, Jeffrey S; Samuels, Lauren R; Lane, Elizabeth M; Bell, Susan P; Liu, Dandan; Hohman, Timothy J; Romano, Raymond R; Fritzsche, Laura R; Lu, Zengqi; Jefferson, Angela L
2015-07-01
A symptom of mild cognitive impairment (MCI) and Alzheimer's disease (AD) is a flat learning profile. Learning slope calculation methods vary, and the optimal method for capturing neuroanatomical changes associated with MCI and early AD pathology is unclear. This study cross-sectionally compared four different learning slope measures from the Rey Auditory Verbal Learning Test (simple slope, regression-based slope, two-slope method, peak slope) to structural neuroimaging markers of early AD neurodegeneration (hippocampal volume; cortical thickness in parahippocampal gyrus, precuneus, and lateral prefrontal cortex) across the cognitive aging spectrum [normal controls (NC; n=198, age=76±5), MCI (n=370, age=75±7), and AD (n=171, age=76±7)] in ADNI. Within each diagnostic group, general linear models related the slope methods individually to the neuroimaging variables, adjusting for age, sex, education, and APOE4 status. Among MCI, better learning performance on the simple slope, regression-based slope, and late slope (Trials 2-5) from the two-slope method was related to larger parahippocampal thickness (all p-values<.01) and hippocampal volume (p<.01). Better regression-based slope (p<.01) and late slope (p<.01) were also related to larger ventrolateral prefrontal cortex in MCI. No significant associations emerged between any slope and the neuroimaging variables for NC (p-values ≥.05) or AD (p-values ≥.02). Better learning performance was thus related to larger medial temporal lobe (hippocampal volume, parahippocampal gyrus thickness) and ventrolateral prefrontal cortex in MCI only. Regression-based and late slopes were most highly correlated with the neuroimaging markers and explained variance above and beyond other common memory indices, such as total learning. Simple slope may offer an acceptable alternative given its ease of calculation.
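For illustration, here is one plausible reading of the four slope measures, sketched on a hypothetical five-trial recall series; the exact definitions used in the study (particularly for the two-slope and peak-slope methods) may differ.

```python
import numpy as np

def learning_slopes(trials):
    """trials: words recalled on RAVLT Trials 1-5 (hypothetical example)."""
    t = np.arange(1, 6)
    simple = (trials[-1] - trials[0]) / (len(trials) - 1)   # (Trial5 - Trial1) / 4
    regression = np.polyfit(t, trials, 1)[0]                # least-squares slope, all trials
    late = np.polyfit(t[1:], trials[1:], 1)[0]              # Trials 2-5 ("late" slope)
    peak = max(np.diff(trials))                             # largest single-trial gain
    return simple, regression, late, peak

print(learning_slopes(np.array([5, 8, 10, 11, 13])))
```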
1986-06-01
... D'INTENSITE ... APPLIQUEE A LA PROPAGATION "ANORMALE" [... of intensity ... applied to "anomalous" propagation], by D. Dion, DEFENCE RESEARCH ESTABLISHMENT / CENTRE DE RECHERCHES POUR LA DEFENSE VALCARTIER. ... how they are related to atmospheric conditions. The most important phenomena to note are ducts and "radio holes". Indeed, ... these being very frequent at sea, it is of interest to the navy to seek simple methods of characterizing them. Equations of int...
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on a general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Using experimentally available curves of radionuclide concentration versus time for each animal compartment (organ), flow parameters were estimated by a least-squares procedure whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and with experimental data.
Theoretical calculation of heat of formation and heat of combustion for several flammable gases.
Kondo, Shigeo; Takahashi, Akifumi; Tokuhashi, Kazuaki
2002-09-02
Heats of formation have been calculated by the Gaussian-2 (G2) and/or G2MP2 method for a number of flammable gases. As a result, it has been found that the calculated heat of formation for compounds containing such atoms as fluorine and chlorine tends to deviate from the observed values more than calculations for other molecules do. A simple atom additivity correction (AAC) has been found effective in improving the quality of heat-of-formation calculations from the G2 and G2MP2 theories for these molecules. The values of heat of formation thus obtained have been used to calculate the heat of combustion and related constants for evaluating the combustion hazard of flammable gases.
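A sketch of the atom-additivity idea: fit one correction per element to the residuals between experimental and calculated heats of formation over a training set, then apply the corrections by atom count. The molecule set and residual values below are made up for illustration and are not the paper's fitted corrections.

```python
import numpy as np

elements = ["C", "H", "F", "Cl"]
counts = np.array([[1, 4, 0, 0],     # CH4
                   [1, 3, 1, 0],     # CH3F
                   [1, 3, 0, 1],     # CH3Cl
                   [1, 2, 0, 2],     # CH2Cl2
                   [2, 6, 0, 0]])    # C2H6   (atom counts per molecule)
residuals = np.array([0.3, 2.1, 1.8, 3.9, 0.5])   # dHf(exp) - dHf(G2), kJ/mol (made up)

# least-squares per-atom corrections c so that counts @ c approximates the residuals
c, *_ = np.linalg.lstsq(counts.astype(float), residuals, rcond=None)
print(dict(zip(elements, c.round(2))))
print(residuals - counts @ c)        # remaining error after the AAC
```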
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.
Optimized emission in nanorod arrays through quasi-aperiodic inverse design.
Anderson, P Duke; Povinelli, Michelle L
2015-06-01
We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.
Lithium cluster anions: photoelectron spectroscopy and ab initio calculations.
Alexandrova, Anastassia N; Boldyrev, Alexander I; Li, Xiang; Sarkas, Harry W; Hendricks, Jay H; Arnold, Susan T; Bowen, Kit H
2011-01-28
Structural and energetic properties of small, deceptively simple anionic clusters of lithium, Li(n)(-), n = 3-7, were determined using a combination of anion photoelectron spectroscopy and ab initio calculations. The most stable isomers of each of these anions, the ones most likely to contribute to the photoelectron spectra, were found using the gradient embedded genetic algorithm program. Subsequently, state-of-the-art ab initio techniques, including time-dependent density functional theory, coupled cluster, and multireference configurational interactions methods, were employed to interpret the experimental spectra.
Power and sample size for multivariate logistic modeling of unmatched case-control studies.
Gail, Mitchell H; Haneuse, Sebastien
2017-01-01
Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
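Since the paper presents both analytical and simulation methods, here is a hedged sketch of the simulation route using statsmodels: generate data under an assumed multivariate logistic model, refit repeatedly, and estimate power as the rejection rate for the exposure coefficient. All model parameters below are hypothetical, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(beta_exposure, n, n_sims=500, alpha=0.05, seed=1):
    """Monte Carlo power for the exposure effect in a two-covariate logistic model."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        exposure = rng.normal(size=n)
        confounder = 0.5 * exposure + rng.normal(size=n)   # correlated covariate
        logit = -1.0 + beta_exposure * exposure + 0.3 * confounder
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
        X = sm.add_constant(np.column_stack([exposure, confounder]))
        fit = sm.Logit(y, X).fit(disp=False)
        rejections += fit.pvalues[1] < alpha               # test on the exposure term
    return rejections / n_sims

print(simulated_power(beta_exposure=0.4, n=300))
```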
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
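FDPS itself is a C++ template library; as a language-neutral illustration of the O(N²) direct-sum interaction loop that a user supplies and FDPS parallelizes, here is a sketch in Python (the softening length and particle data are arbitrary).

```python
import numpy as np

def gravitational_accelerations(pos, mass, eps=1e-2):
    """Direct-sum O(N^2) gravitational accelerations (G = 1, Plummer softening)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                           # vectors to every other particle
        r2 = (d * d).sum(axis=1) + eps * eps       # softened squared distances
        r2[i] = np.inf                             # exclude self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
pos = rng.normal(size=(1000, 3))
mass = np.full(1000, 1.0 / 1000)
print(gravitational_accelerations(pos, mass)[0])
```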
75 FR 57719 - Federal Acquisition Regulation; TINA Interest Calculations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-22
... the term "simple interest" as the requirement for calculating interest for TINA cost impacts with.... Revising the date of the clause; and b. Removing from paragraph (e)(1) "Simple interest" and adding...) "Simple interest" and adding "Interest compounded daily, as required by 26 U.S.C. 6622," in its place...
Performance of some numerical Laplace inversion methods on American put option formula
NASA Astrophysics Data System (ADS)
Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.
2018-03-01
Numerical inversion of the Laplace transform is used to obtain semianalytic solutions. Mathematical inversion methods such as those of Durbin-Crump, Widder, and Papoulis can be used to price American put options through the optimal exercise price in Laplace space. The methods are first compared on some simple functions to establish their accuracy and the parameters to be used in the American put option calculations. The result is a performance profile of each method in terms of accuracy and computational speed: the Durbin-Crump method has an average relative error of 2.006e-004 with a computation time of 0.04871 seconds, the Widder method an average relative error of 0.0048 with a computation time of 3.100181 seconds, and the Papoulis method an average relative error of 9.8558e-004 with a computation time of 0.020793 seconds.
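For readers who want to experiment, here is a sketch of a trapezoidal Fourier-series inversion in the Abate-Whitt form, which is closely related to the Durbin-Crump approach (the paper's exact parameterization may differ). It is tested on F(s) = 1/(s+1), whose known inverse is e^(-t).

```python
import numpy as np

def fourier_series_inversion(F, t, A=18.4, N=2000):
    """Laplace inversion by the trapezoidal/Fourier-series rule (Abate-Whitt form).

    A controls the discretization error (roughly exp(-A)); N truncates the series.
    """
    s0 = A / (2 * t)
    k = np.arange(1, N + 1)
    s = (A + 2j * np.pi * k) / (2 * t)
    total = 0.5 * np.real(F(s0)) + np.sum((-1.0) ** k * np.real(F(s)))
    return np.exp(A / 2) / t * total

F = lambda s: 1.0 / (s + 1.0)            # transform of exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, fourier_series_inversion(F, t), np.exp(-t))
```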
Pan, Wenxiao; Daily, Michael; Baker, Nathan A.
2015-05-07
Background: The calculation of diffusion-controlled ligand binding rates is important for understanding enzyme mechanisms as well as designing enzyme inhibitors. Methods: We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) BC, is considered on the reactive boundaries. This new BC treatment allows for the analysis of enzymes with "imperfect" reaction rates. Results: The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to a mouse acetylcholinesterase (mAChE) monomer. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Conclusions: Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.
A NEW METHOD FOR ENVIRONMENTAL FLOW ASSESSMENT BASED ON BASIN GEOLOGY. APPLICATION TO EBRO BASIN.
2018-02-01
The determination of environmental flows is one of the commonest practical actions implemented on European rivers to promote their good ecological status. In Mediterranean rivers, groundwater inflows are a decisive factor in streamflow maintenance. This work examines the relationship between the lithological composition of the Ebro basin (Spain) and dry season flows in order to establish a model that can assist in the calculation of environmental flow rates. Due to the lack of information on the hydrogeological characteristics of the studied basin, the variable representing groundwater inflows has been estimated in a very simple way. The explanatory variable used in the proposed model is easy to calculate and is sufficiently powerful to take into account all the required characteristics. The model has a high coefficient of determination, indicating that it is accurate for the intended purpose. The advantage of this method compared to other methods is that it requires very little data and provides a simple estimate of environmental flow. It is also independent of the basin area and the river section order. The results of this research also contribute to knowledge of the variables that influence low flow periods and low flow rates on rivers in the Ebro basin.
Self-learning kinetic Monte Carlo simulations of Al diffusion in Mg
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandipati, Giridhar; Govind, Niranjan; Andersen, Amity
2016-03-16
An atomistic on-lattice self-learning kinetic Monte Carlo (SLKMC) method was used to examine the vacancy-mediated diffusion of an Al atom in pure hcp Mg. Local-atomic-environment-dependent activation barriers for vacancy-atom exchange processes were calculated on the fly using the climbing-image nudged elastic band (CI-NEB) method with a Mg-Al binary modified embedded-atom method (MEAM) interatomic potential. Diffusivities of the vacancy and of the Al atom in pure Mg were obtained from the SLKMC simulations and are compared with values available in the literature obtained from experiments and first-principles calculations. Al diffusivities obtained from the SLKMC simulations are lower than those available in the literature, due to larger activation barriers and lower diffusivity prefactors, but have the same order of magnitude. We present all vacancy-Mg and vacancy-Al atom exchange processes and their activation barriers that were identified in the SLKMC simulations. We describe a simple mapping scheme to map an hcp lattice onto a simple cubic lattice, which enables hcp lattices to be simulated in an on-lattice KMC framework, and we also present the pattern recognition scheme used in the SLKMC simulations.
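A generic rejection-free KMC step of the kind underlying such simulations (not the SLKMC code itself): convert barriers to Arrhenius rates, select an event with probability proportional to its rate, and advance the clock by an exponentially distributed residence time. The attempt frequency and barriers below are illustrative.

```python
import math, random

def kmc_step(barriers_eV, T=300.0, nu0=1e13):
    """One rejection-free KMC step over a list of activation barriers (eV)."""
    kB = 8.617e-5                                  # Boltzmann constant, eV/K
    rates = [nu0 * math.exp(-Eb / (kB * T)) for Eb in barriers_eV]
    total = sum(rates)
    r, acc, chosen = random.random() * total, 0.0, 0
    for i, rate in enumerate(rates):               # pick event ~ its rate
        acc += rate
        if r <= acc:
            chosen = i
            break
    dt = -math.log(random.random()) / total        # exponential time increment
    return chosen, dt

print(kmc_step([0.40, 0.45, 0.52, 0.60], T=600.0))
```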
Correlation between polar values and vector analysis.
Naeser, K; Behrens, J K
1997-01-01
To evaluate the possible correlation between polar value and vector analysis assessments of surgically induced astigmatism. Department of Ophthalmology, Aalborg Sygehus Syd, Denmark. The correlation between polar values and vector analysis was evaluated by simple mathematical and optical methods using accepted principles of trigonometry and first-order optics. Vector analysis and polar values report different aspects of surgically induced astigmatism. Vector analysis describes the total astigmatic change, characterized by both astigmatic magnitude and direction, while the polar value method produces a single, reduced figure that reports flattening or steepening in preselected directions, usually the plane of the surgical meridian. There is a simple Pythagorean correlation between vector analysis and two polar values separated by an arc of 45 degrees. The polar value calculated in the surgical meridian indicates the power or efficacy of the surgical procedure. The polar value calculated in a plane inclined 45 degrees to the surgical meridian indicates the degree of cylinder rotation induced by surgery. These two polar values can be used to obtain other relevant data such as the magnitude, direction, and sphere of an induced cylinder. Consistent use of these methods will enable surgeons to control and in many cases reduce preoperative astigmatism.
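The Pythagorean relation can be verified numerically. Taking the polar value of a cylinder M at axis alpha, evaluated in meridian phi, as M·cos(2(alpha − phi)) (sign conventions vary), the two polar values 45 degrees apart recover both the magnitude and the axis of the induced cylinder.

```python
import math

def polar_value(M, axis_deg, meridian_deg):
    """Flattening/steepening of a cylinder M @ axis, evaluated in a given meridian."""
    return M * math.cos(2 * math.radians(axis_deg - meridian_deg))

M, axis = 1.5, 20.0                    # induced cylinder: 1.5 D at 20 degrees
kp0 = polar_value(M, axis, 0.0)        # polar value in the surgical meridian
kp45 = polar_value(M, axis, 45.0)      # polar value 45 degrees away (rotation term)
magnitude = math.hypot(kp0, kp45)      # Pythagorean relation recovers M
direction = 0.5 * math.degrees(math.atan2(kp45, kp0))
print(magnitude, direction)            # -> 1.5, 20.0
```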
New method for calculating the coupling coefficient in graded index optical fibers
NASA Astrophysics Data System (ADS)
Savović, Svetislav; Djordjevich, Alexandar
2018-05-01
A simple method is proposed for determining the mode coupling coefficient D in graded-index multimode optical fibers. It requires only observation of the output modal power distribution P(m, z) for one fiber length z as the Gaussian launch modal power distribution changes, with the Gaussian input light distribution centered on the graded-index fiber axis (θ0 = 0) and without radial offset (r0 = 0). It is similar to a method we previously proposed for calculating the coupling coefficient D in step-index multimode optical fibers, which requires knowledge of the output angular power distribution P(θ, z) for one fiber length z with the Gaussian input light distribution launched centrally along the step-index fiber axis (θ0 = 0).
Genetic Algorithms and Their Application to the Protein Folding Problem
1993-12-01
...and symbolic methods, random methods such as Monte Carlo simulation and simulated annealing, distance geometry, and molecular dynamics. Many of these... calculated energies with those obtained using the molecular simulation software package called CHARMm. ... 9) Test both the simple and parallel simple genetic... homology-based, and simplification techniques. 3.21 Molecular Dynamics. Perhaps the most natural approach is to actually simulate the folding process. This...
Better Than Counting: Density Profiles from Force Sampling
NASA Astrophysics Data System (ADS)
de las Heras, Daniel; Schmidt, Matthias
2018-05-01
Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting of events of particle occurrences at (histogram-resolved) space points. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, reducing therefore the computation time.
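A minimal 1D sketch of the force-sampling idea, assuming an ideal gas in a harmonic potential at kT = 1 so the exact density is Gaussian: histogram the per-particle forces to estimate the force density, then integrate the sum rule kT·dρ/dx = F(x) to recover the profile. Per the paper, the force route typically carries less statistical noise than plain counting; all parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
x = rng.normal(size=200_000)             # equilibrium samples for V(x) = x^2/2, kT = 1
force = -x                               # per-particle external force, -dV/dx

bins = np.linspace(-4, 4, 81)
dx = bins[1] - bins[0]
mid = 0.5 * (bins[1:] + bins[:-1])

counting = np.histogram(x, bins)[0] / (x.size * dx)               # standard estimator
fdens = np.histogram(x, bins, weights=force)[0] / (x.size * dx)   # force density
rho = np.cumsum(fdens) * dx / kT          # integrate the sum rule kT * drho/dx = F(x)
rho += (1.0 - rho.sum() * dx) / (bins[-1] - bins[0])   # fix the integration constant

exact = np.exp(-mid**2 / 2) / np.sqrt(2 * np.pi)
print(np.abs(counting - exact).mean(), np.abs(rho - exact).mean())  # force route: often smaller
```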
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
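A hedged sketch of the kind of Monte Carlo uncertainty propagation advocated here, with made-up input distributions loosely patterned on the foodborne-illness example: sample each uncertain input, push the samples through the calculation, and report an interval rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative (invented) inputs: cases = reported * underreporting * food attribution
reported = rng.normal(50_000, 5_000, n)            # surveillance count (uncertain)
underreport = rng.lognormal(np.log(20), 0.3, n)    # true-to-reported multiplier
attribution = rng.uniform(0.3, 0.7, n)             # fraction attributable to food

total = reported * underreport * attribution
print(np.percentile(total, [2.5, 50, 97.5]))       # interval, not a falsely precise point
```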
Yuan, Ying; He, Xiao-Song; Xi, Bei-Dou; Wei, Zi-Min; Tan, Wen-Bing; Gao, Ru-Tai
2016-11-01
Vulnerability assessment of simple landfills was conducted using the multimedia, multipathway and multireceptor risk assessment (3MRA) model for the first time in China. The minimum safe thresholds of six contaminants (benzene, arsenic (As), cadmium (Cd), hexavalent chromium [Cr(VI)], divalent mercury [Hg(II)] and divalent nickel [Ni(II)]) in landfill and waste pile models were calculated with the 3MRA model. Furthermore, the vulnerability indexes of the six contaminants were predicted based on the model calculation. The results showed that the order of the health risk vulnerability index was As > Hg(II) > Cr(VI) > benzene > Cd > Ni(II) in the landfill model, whereas the ecological risk vulnerability index was in the order As > Hg(II) > Cr(VI) > Cd > benzene > Ni(II). In the waste pile model, the order of the health risk vulnerability index was benzene > Hg(II) > Cr(VI) > As > Cd and Ni(II), whereas the ecological risk vulnerability index was in the order Hg(II) > Cd > Cr(VI) > As > benzene > Ni(II). These results indicate that As, Hg(II) and Cr(VI) are the high-risk contaminants for the case of a simple landfill in China; their concentrations in soil and groundwater around simple landfills should be strictly monitored, and proper remediation is recommended for simple landfills with high concentrations of contaminants. © The Author(s) 2016.
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough that the open-circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
The band gap properties of the three-component semi-infinite plate-like LRPC by using PWE/FE method
NASA Astrophysics Data System (ADS)
Qian, Denghui; Wang, Jianchun
2018-06-01
This paper applies a coupled plane wave expansion and finite element (PWE/FE) method to calculate the band structure of the proposed three-component semi-infinite plate-like locally resonant phononic crystal (LRPC). To verify the accuracy of the result, the band structure calculated by the PWE/FE method is compared to that calculated by the traditional finite element (FE) method, and the frequency range of the band gap in the band structure is compared to that of the attenuation in the transmission power spectrum. Numerical results and further analysis demonstrate that a band gap is opened by the coupling between the dominant vibrations of the rubber layer and the matrix modes. In addition, the influences of the geometry parameters on the band gap are studied and understood with the help of the simple "base-spring-mass" model, and the influence of the viscidity of the rubber layer on the band gap is also investigated.
Calculating p-values and their significances with the Energy Test for large datasets
NASA Astrophysics Data System (ADS)
Barter, W.; Burr, C.; Parkes, C.
2018-04-01
The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
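A sketch of the brute-force baseline that the paper's scaling method accelerates: the energy-test T statistic with a Gaussian distance weighting (one common convention; normalizations vary between papers) and a p-value obtained by permuting the pooled sample.

```python
import numpy as np

def energy_T(a, b, sigma=1.0):
    """Energy-test T statistic with Gaussian weighting (one common convention)."""
    def psi(u, v):
        d2 = ((u[:, None, :] - v[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    na, nb = len(a), len(b)
    return (np.triu(psi(a, a), 1).sum() / (na * (na - 1))
            + np.triu(psi(b, b), 1).sum() / (nb * (nb - 1))
            - psi(a, b).sum() / (na * nb))

def permutation_pvalue(a, b, n_perm=200, seed=0):
    """p-value from the permutation null: shuffle the pooled sample, recompute T."""
    rng = np.random.default_rng(seed)
    t_obs, pooled = energy_T(a, b), np.vstack([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += energy_T(pooled[:len(a)], pooled[len(a):]) >= t_obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a, b = rng.normal(0, 1, (200, 2)), rng.normal(0.2, 1, (200, 2))
print(permutation_pvalue(a, b))
```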
A study of the limitations of linear theory methods as applied to sonic boom calculations
NASA Technical Reports Server (NTRS)
Darden, Christine M.
1990-01-01
Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibility of the methods. Flexibility is important because the designer must meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes that were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures at Mach numbers above 3. The importance of impulse in sonic boom disturbance, and the importance of three-dimensional effects that could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II, and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
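In the spirit of that closing exercise, here is a minimal one-speed Monte Carlo transport routine: particles stream through a 1D slab (distances in mean free paths), either scattering isotropically or being absorbed, and the transmitted fraction is tallied. This is a student-style sketch, not an MCNP algorithm; with scatter_prob = 0 the result should approach e^(-thickness).

```python
import math, random

def slab_transmission(thickness_mfp, scatter_prob, n=100_000, seed=7):
    """One-speed MC: isotropic scattering vs. absorption in a 1D slab."""
    random.seed(seed)
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                             # birth at left face, moving right
        while True:
            x += mu * (-math.log(random.random()))   # sample flight distance (exp. law)
            if x < 0.0:                              # escaped backward (reflected)
                break
            if x > thickness_mfp:                    # escaped forward
                transmitted += 1
                break
            if random.random() > scatter_prob:       # collision: absorbed
                break
            mu = 2.0 * random.random() - 1.0         # isotropic scattering direction
    return transmitted / n

print(slab_transmission(2.0, scatter_prob=0.5))
```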
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low-order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed, and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined, and calculations are compared with higher-order solutions for a number of cases. It is demonstrated that for a comparable density of control points where the boundary conditions are satisfied, the low-order method gives accuracy comparable to that of the higher-order solutions. It is also shown that problems associated with some earlier low-order panel methods, e.g., leakage in internal flows and junctions and poor trailing-edge solutions, do not appear for the present method. Further, the application of the Kutta condition is extremely simple; no extra equation or trailing-edge velocity point is required. The method has very low computing costs, which has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
NASA Astrophysics Data System (ADS)
Tang, Hong; Lin, Jian-Zhong
2013-01-01
An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method with ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. The visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is then surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on principal component analysis (PCA) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction with more significant features can be selected as the input data, while that with fewer features is removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
2013-01-01
Background In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and explore how these perform relative to more advanced GIS-based methods. Methods Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. Results The official method used by policy makers in Belgium (calculating PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. Conclusions The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is its aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g. census tracts), takes interaction between physicians into account, and considers distance decay. While at present in health care research methodological differences and modifiable areal unit problems have remained largely overlooked, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used. PMID:23964751
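A sketch of the basic two-step floating catchment area calculation underlying the E2SFCA method discussed above (without the enhanced method's distance-decay weights); the distance matrix, supplies, and populations are invented.

```python
import numpy as np

def two_step_fca(dist, supply, demand, d0):
    """Basic 2SFCA: dist[i, j] = travel cost from population site i to physician j."""
    within = dist <= d0                                   # catchment membership
    catchment_pop = (demand[:, None] * within).sum(axis=0)
    # Step 1: physician-to-population ratio within each physician's catchment
    ratio = np.divide(supply, catchment_pop,
                      out=np.zeros_like(supply), where=catchment_pop > 0)
    # Step 2: accessibility of each population site = sum of reachable ratios
    return (within * ratio[None, :]).sum(axis=1)

dist = np.array([[5.0, 20.0], [10.0, 8.0], [30.0, 6.0]])  # 3 tracts x 2 practices (km)
supply = np.array([2.0, 1.0])                             # physicians per practice
demand = np.array([1000.0, 2000.0, 1500.0])               # tract populations
print(two_step_fca(dist, supply, demand, d0=15.0))        # per-capita access scores
```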
Dose calculation of dynamic trajectory radiotherapy using Monte Carlo.
Manser, P; Frauchiger, D; Frei, D; Volken, W; Terribilini, D; Fix, M K
2018-04-06
With the volumetric modulated arc therapy (VMAT) delivery technique, the gantry position, multi-leaf collimator (MLC) configuration and dose rate change dynamically during the application. However, additional components can be dynamically altered throughout the dose delivery, such as the collimator or the couch. Thus, the degrees of freedom increase, allowing almost arbitrary dynamic trajectories for the beam. While the delivery of such dynamic trajectories on linear accelerators is technically possible, there is currently no dose calculation and validation tool available. Thus, the aim of this work is to develop a dose calculation and verification tool for dynamic trajectories using Monte Carlo (MC) methods. The dose calculation for dynamic trajectories is implemented in the previously developed Swiss Monte Carlo Plan (SMCP). SMCP interfaces the treatment planning system Eclipse with a MC dose calculation algorithm and is already able to handle dynamic MLC movements and gantry rotations. Hence, the additional dynamic components, namely the collimator and the couch, are described similarly to the dynamic MLC by defining data pairs of positions of the dynamic component and the corresponding MU-fractions. For validation purposes, measurements are performed with the Delta4 phantom and with film, using the developer mode on a TrueBeam linear accelerator. These measured dose distributions are then compared with the corresponding calculations using SMCP. First, simple academic cases applying one-dimensional movements are investigated; second, more complex dynamic trajectories with several simultaneously moving components are compared, considering academic cases as well as a clinically motivated prostate case. The dose calculation for dynamic trajectories is successfully implemented into SMCP. The comparisons between the measured and calculated dose distributions for the simple as well as the more complex situations show agreement generally within 3% of the maximum dose or 3 mm. The required computation time for the dose calculation remains the same when the additional dynamically moving components are included. The results obtained for the dose comparisons for simple and complex situations suggest that the extended SMCP is an accurate dose calculation and efficient verification tool for dynamic trajectory radiotherapy. This work was supported by Varian Medical Systems. Copyright © 2018. Published by Elsevier GmbH.
Calculations of proton-binding thermodynamics in proteins.
Beroza, P; Case, D A
1998-01-01
Computational models of proton binding can range from the chemically complex and statistically simple (as in the quantum calculations) to the chemically simple and statistically complex. Much progress has been made in the multiple-site titration problem. Calculations have improved with the inclusion of more flexibility in regard to both the geometry of the proton binding and the larger scale protein motions associated with titration. This article concentrated on the principles of current calculations, but did not attempt to survey their quantitative performance. This is (1) because such comparisons are given in the cited papers and (2) because continued developments in understanding conformational flexibility and interaction energies will be needed to develop robust methods with strong predictive power. Nevertheless, the advances achieved over the past few years should not be underestimated: serious calculations of protonation behavior and its coupling to conformational change can now be confidently pursued against a backdrop of increasing understanding of the strengths and limitations of such models. It is hoped that such theoretical advances will also spur renewed experimental interest in measuring both overall titration curves and individual pKa values or pKa shifts. Exploration of the shapes of individual titration curves (as measured by Hill coefficients and other parameters) would also be useful in assessing the accuracy of computations and in drawing connections to functional behavior.
Evaluation of methods for managing censored results when calculating the geometric mean.
Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M
2018-01-01
Currently, there are conflicting views on the best statistical methods for managing censored environmental data. The method commonly applied by environmental science researchers and professionals is to substitute half the limit of reporting for derivation of summary statistics. This approach has been criticised by some researchers, raising questions around the interpretation of historical scientific data. This study evaluated four complete soil datasets, at three levels of simulated censorship, to test the accuracy of a range of censored data management methods for calculation of the geometric mean. The methods assessed included removal of censored results, substitution of a fixed value (near zero, half the limit of reporting and the limit of reporting), substitution by nearest neighbour imputation, maximum likelihood estimation, regression on order statistics and Kaplan-Meier/survival analysis. This is the first time such a comprehensive range of censored data management methods has been applied to assess the accuracy of calculation of the geometric mean. The results of this study show that, for describing the geometric mean, the simple method of substituting half the limit of reporting is comparable to, or more accurate than, alternative censored data management methods, including nearest neighbour imputation methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
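A minimal sketch of the substitution approach evaluated above: censored results are replaced by half the limit of reporting before the geometric mean is taken. The data values are made up for illustration.

```python
import numpy as np

# Measured soil concentrations; None marks a censored result (< LOR).
results = [12.0, 5.1, None, 8.4, None, 3.2, 19.7]
lor = 2.0  # limit of reporting

# Substitute half the LOR for censored values, then take the geometric mean.
values = np.array([x if x is not None else 0.5 * lor for x in results])
geo_mean = np.exp(np.mean(np.log(values)))
print(f"geometric mean: {geo_mean:.2f}")
```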
Engineering topological edge states in two dimensional magnetic photonic crystal
NASA Astrophysics Data System (ADS)
Yang, Bing; Wu, Tong; Zhang, Xiangdong
2017-01-01
Based on a perturbative approach, we propose a simple and efficient method to engineer the topological edge states in two dimensional magnetic photonic crystals. The topological edge states in the microstructures can be constructed and varied by altering the parameters of the microstructure according to the field-energy distributions of the Bloch states at the related Bloch wave vectors. The validity of the proposed method has been demonstrated by exact numerical calculations through three concrete examples. Our method makes the topological edge states "designable."
Dynamics of a parametrically excited simple pendulum
NASA Astrophysics Data System (ADS)
Depetri, Gabriela I.; Pereira, Felipe A. C.; Marin, Boris; Baptista, Murilo S.; Sartorelli, J. C.
2018-03-01
The dynamics of a parametric simple pendulum submitted to an arbitrary angle of excitation ϕ was investigated experimentally, by simulations, and analytically. Analytical calculations for the loci of saddle-node bifurcations corresponding to the creation of resonant orbits were performed by applying Melnikov's method. However, this powerful perturbative method cannot be used to predict the existence of odd resonances for a vertical excitation within first order corrections. Yet, we showed that period-3 resonances indeed exist in such a configuration. Two degenerate attractors of different phases, associated with the same loci of saddle-node bifurcations in parameter space, are reported. For tilted excitation, the degeneracy is broken due to an extra torque, which was confirmed by the calculation of two distinct loci of saddle-node bifurcations for each attractor. This behavior persists up to ϕ≈7π/180, and for inclinations larger than this, only one attractor is observed. Bifurcation diagrams were constructed experimentally for ϕ=π/8 to demonstrate the existence of self-excited resonances (periods smaller than three) and hidden oscillations (for periods greater than three).
Dynamics of a parametrically excited simple pendulum.
Depetri, Gabriela I; Pereira, Felipe A C; Marin, Boris; Baptista, Murilo S; Sartorelli, J C
2018-03-01
The dynamics of a parametric simple pendulum submitted to an arbitrary angle of excitation ϕ was investigated experimentally, by simulations, and analytically. Analytical calculations for the loci of saddle-node bifurcations corresponding to the creation of resonant orbits were performed by applying Melnikov's method. However, this powerful perturbative method cannot be used to predict the existence of odd resonances for a vertical excitation within first order corrections. Yet, we showed that period-3 resonances indeed exist in such a configuration. Two degenerate attractors of different phases, associated with the same loci of saddle-node bifurcations in parameter space, are reported. For tilted excitation, the degeneracy is broken due to an extra torque, which was confirmed by the calculation of two distinct loci of saddle-node bifurcations for each attractor. This behavior persists up to ϕ≈7π/180, and for inclinations larger than this, only one attractor is observed. Bifurcation diagrams were constructed experimentally for ϕ=π/8 to demonstrate the existence of self-excited resonances (periods smaller than three) and hidden oscillations (for periods greater than three).
Dewulf, Bart; Neutens, Tijs; De Weerdt, Yves; Van de Weghe, Nico
2013-08-22
In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and explore how these perform relative to more advanced GIS-based methods. Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. The official method used by policy makers in Belgium (calculating PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is their aggregated approach, which masks subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g., census tracts), takes interaction between physicians into account, and considers distance decay. While methodological differences and modifiable areal unit problems have so far remained largely overlooked in health care research, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used.
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If further fast methods like the fast multipole method are used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
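A minimal sketch of the exponential-window idea for a simple convolution integral: damping both sequences with e^(-ηt) before the FFT shifts the evaluation to complex frequencies s = η + iω and suppresses wrap-around error. The kernel and excitation below are made-up test functions, not from the paper.

```python
import numpy as np

n, dt = 512, 0.02
t = np.arange(n) * dt
eta = 4.0 / (n * dt)              # exponential-window damping parameter

k = np.exp(-t)                    # made-up kernel
f = np.sin(2 * np.pi * t)         # made-up excitation

# Damp both sequences, zero-pad against circular wrap-around, convolve
# via FFT, then undo the damping.  This evaluates the convolution at the
# complex frequencies s_j = eta + i*omega_j of the windowed transform.
w = np.exp(-eta * t)
Y = np.fft.fft(k * w, 2 * n) * np.fft.fft(f * w, 2 * n)
y = np.fft.ifft(Y)[:n].real * dt / w

y_ref = np.convolve(k, f)[:n] * dt        # reference: direct quadrature
print(f"max deviation: {np.max(np.abs(y - y_ref)):.2e}")
```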
General formulation of characteristic time for persistent chemicals in a multimedia environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.
1999-02-01
A simple yet representative method for determining the characteristic time a persistent organic pollutant remains in a multimedia environment is presented. The characteristic time is an important attribute for assessing long-term health and ecological impacts of a chemical. Calculating the characteristic time requires information on decay rates in multiple environmental media as well as the proportion of mass in each environmental medium. The authors explore the premise that using a steady-state distribution of the mass in the environment provides a means to calculate a representative estimate of the characteristic time while maintaining a simple formulation. Calculating the steady-state mass distribution incorporates the effect of advective transport and nonequilibrium effects resulting from the source terms. Using several chemicals, they calculate and compare the characteristic time in a representative multimedia environment for dynamic, steady-state, and equilibrium multimedia models, and also for a single medium model. They demonstrate that formulating the characteristic time based on the steady-state mass distribution in the environment closely approximates the dynamic characteristic time for a range of chemicals and thus can be used in decisions regarding chemical use in the environment.
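A sketch of the steady-state formulation described above, under the simplifying assumption of first-order losses in each medium: the characteristic time is the steady-state inventory divided by the total loss rate. Compartment names and numbers are invented for illustration.

```python
# Steady-state mass (kg) and first-order loss rate constants (1/day)
# for a hypothetical chemical in a three-compartment environment.
media = {
    "air":   {"mass": 120.0,  "k_loss": 0.30},
    "water": {"mass": 800.0,  "k_loss": 0.02},
    "soil":  {"mass": 2500.0, "k_loss": 0.004},
}

total_mass = sum(m["mass"] for m in media.values())
total_loss = sum(m["mass"] * m["k_loss"] for m in media.values())

tau = total_mass / total_loss  # characteristic time, days
print(f"characteristic time: {tau:.1f} days")
```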
A simple method used to evaluate phase-change materials based on focused-ion beam technique
NASA Astrophysics Data System (ADS)
Peng, Cheng; Wu, Liangcai; Rao, Feng; Song, Zhitang; Lv, Shilong; Zhou, Xilin; Du, Xiaofeng; Cheng, Yan; Yang, Pingxiong; Chu, Junhao
2013-05-01
A nanoscale phase-change line cell based on the focused-ion beam (FIB) technique has been proposed to evaluate the electrical properties of phase-change materials. Thanks to the FIB-deposited SiO2 hardmask, only one etching step is needed during the fabrication process of the cell. Reversible phase-change behaviors are observed in the line cells based on Al-Sb-Te and Ge-Sb-Te films. The low power consumption of the Al-Sb-Te based cell has been explained by theoretical calculation accompanied by thermal simulation. This line cell is considered a simple and reliable means of evaluating the application prospects of a given phase-change material.
Structural expansions for the ground state energy of a simple metal
NASA Technical Reports Server (NTRS)
Hammerberg, J.; Ashcroft, N. W.
1973-01-01
A structural expansion for the static ground state energy of a simple metal is derived. An approach based on single particle band structure which treats the electron gas as a non-linear dielectric is presented, along with a more general many particle analysis using finite temperature perturbation theory. The two methods are compared, and it is shown in detail how band-structure effects, Fermi surface distortions, and chemical potential shifts affect the total energy. These are of special interest in corrections to the total energy beyond third order in the electron-ion interaction, and hence to systems where differences in energies for various crystal structures are exceptionally small. Preliminary calculations using these methods for the zero temperature thermodynamic functions of atomic hydrogen are reported.
The SAMPL4 host-guest blind prediction challenge: an overview.
Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K
2014-04-01
Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allows higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each other's studies, and to systematically explore parameter options.
Hemodynamics model of fluid–solid interaction in internal carotid artery aneurysms
Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju
2010-01-01
The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in workbench software. The results of simulation could be visualized in CFX-post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography. PMID:20812022
Hemodynamics model of fluid-solid interaction in internal carotid artery aneurysms.
Bai-Nan, Xu; Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju
2011-01-01
The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in workbench software. The results of simulation could be visualized in CFX-post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography.
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human perception. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Image Metric (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not take human visual system (HVS) characteristics into account, so their evaluation results are not ideal. SSIM correlates well with subjective assessment and is simple to calculate, because it incorporates human visual effects into the evaluation; however, SSIM rests on an underlying hypothesis, which limits its results. The FSIM method can be used for both grayscale and color images, and gives better results. Experimental results show that the image quality evaluation algorithm based on FSIM is more accurate.
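The two simplest metrics above reduce to one-line formulas; a minimal sketch for 8-bit images follows, with made-up arrays standing in for the reference and distorted images.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)   # reference image
dist = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)      # distorted copy

mse = np.mean((ref - dist) ** 2)                # Mean Squared Error
psnr = 10 * np.log10(255.0 ** 2 / mse)          # Peak Signal-to-Noise Ratio (dB)
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```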
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A square experimental field of 50 m × 50 m was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatially stratified sampling with altitude as the stratum variable is an efficient approach for the snail survey, offering lower cost and higher precision.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
NASA Astrophysics Data System (ADS)
Melin, Junia; Ortiz, J. V.; Martín, I.; Velasco, A. M.; Lavín, C.
2005-06-01
Vertical excitation energies of the Rydberg radical H3O are inferred from ab initio electron propagator calculations on the electron affinities of H3O+. The adiabatic ionization energy of H3O is evaluated with coupled-cluster calculations. These predictions provide optimal parameters for the molecular-adapted quantum defect orbital method, which is used to determine oscillator strengths. Given that the experimental spectrum of H3O does not seem to be available, comparisons with previous calculations are discussed. A simple model Hamiltonian, suitable for the study of bound states with arbitrarily high energies is generated by these means.
Eccentricity and misalignment effects on the performance of high-pressure annular seals
NASA Technical Reports Server (NTRS)
Chen, W. C.; Jackson, E. D.
1985-01-01
Annular pressure seals act as powerful hydrostatic bearings and influence the dynamic characteristics of rotating machinery. This work, using the existing concentric seal theories, provides a simple approximate method for calculation of both seal leakage and the dynamic coefficients for short seals with large eccentricity and/or misalignment of the shaft. Rotation and surface roughness effects are included for leakage and dynamic force calculation. The leakage calculations for both laminar and turbulent flow are compared with experimental results. The dynamic coefficients are compared with analytical results. Excellent agreement between the present work and published results has been observed up to an eccentricity ratio of 0.8.
Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
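A common way to build such a panel model from datasheet values is the single-diode equation, solved pointwise for current. The sketch below shows the idea; the parameter values are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical single-diode parameters for one panel (not from the paper).
I_ph, I_0 = 5.2, 3e-10          # photocurrent, diode saturation current (A)
R_s, R_sh = 0.4, 300.0          # series and shunt resistances (ohm)
a = 1.1 * 60 * 0.0257           # ideality factor * cells in series * thermal voltage (V)

def current(v):
    """Solve I = I_ph - I_0*(exp((V + I*R_s)/a) - 1) - (V + I*R_s)/R_sh for I."""
    def g(i):
        return (I_ph - I_0 * (np.exp((v + i * R_s) / a) - 1)
                - (v + i * R_s) / R_sh - i)
    return brentq(g, -1.0, I_ph + 1.0)

# Sweep the I-V curve and locate the maximum power point.
v = np.linspace(0.0, 40.0, 201)
p = np.array([vi * current(vi) for vi in v])
print(f"max power ~ {p.max():.0f} W at {v[p.argmax()]:.1f} V")
```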
NASA Astrophysics Data System (ADS)
Iwasaki, M.; Otani, R.; Ito, M.; Kamimura, M.
2016-05-01
We formulate the method of the absorbing boundary condition (ABC) in the coupled-rearrangement-channels variational method (CRCVM) for the three-body problem. In the present study, we handle the simple three-boson system, and the absorbing potential is introduced in the Jacobi coordinate in the individual rearrangement channels. The resonance parameters and the strength of the monopole breakup are compared with the complex scaling method (CSM). We have found that the CRCVM + ABC method works nicely in the three-body problem with rearrangement channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokhrel, D; Badkul, R; Jiang, H
2014-06-01
Purpose: Lung SBRT uses hypo-fractionated doses in small non-IMRT fields with tissue-heterogeneity corrected plans. An independent MU verification is mandatory for safe and effective delivery of the treatment plan. This report compares planned MUs obtained from the iPlan XVMC algorithm against a spreadsheet-based hand calculation using the most commonly used simple TMR-based method. Methods: Treatment plans of 15 patients who underwent MC-based lung SBRT to 50 Gy in 5 fractions for PTV V100% = 95% were studied. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1 to 106.5 cc (average = 48.6 cc). MC-SBRT plans were generated using a combination of non-coplanar conformal arcs/beams with the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX equipped with micro-MLCs and a 6 MV-SRS (1000 MU/min) beam. These plans were re-computed using the heterogeneity-corrected Pencil-Beam (PB-hete) algorithm without changing any beam parameters, such as MLCs/MUs. The dose ratio PB-hete/MC gave beam-by-beam inhomogeneity correction factors (ICFs): Individual Correction. For an independent second check, MC MUs were verified using TMR-based hand calculation, from which an average ICF was obtained: Average Correction; the TMR-based hand calculation systematically underestimated MC MUs by ∼5%. Also, the first 10 MC plans were verified with an ion-chamber measurement using a homogeneous phantom. Results: For both beams/arcs, the mean PB-hete dose was systematically overestimated by 5.5±2.6% and the mean hand-calculated MUs were systematically underestimated by 5.5±2.5% compared to XVMC. With individual correction, mean hand-calculated MUs matched XVMC within −0.3±1.4%/0.4±1.4% for beams/arcs, respectively. After the average 5% correction, hand-calculated MUs matched XVMC within 0.5±2.5%/0.6±2.0% for beams/arcs, respectively. A small dependence on tumor volume (TV)/field size (FS) was also observed. The ion-chamber measurement was within ±3.0%. Conclusion: PB-hete overestimates dose to lung tumor relative to XVMC. The XVMC algorithm is much more complex and accurate with tissue heterogeneities. Measurement at the machine is time consuming and needs extra resources; also, direct measurement of dose for heterogeneous treatment plans is not yet clinically practiced. This simple correction-based method was very helpful for the independent second check of MC lung-SBRT plans and is routinely used in our clinic. A look-up table can be generated to include TV/FS dependence in the ICFs.
Considerations on methodological challenges for water footprint calculations.
Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R
2012-01-01
We have investigated how different approaches for water footprint (WF) calculations lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, results obtained from any WF calculation are reflective of the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed inadequate estimation of irrigation water when a widely used simple approach was used. The method resulted in an overestimation of blue water and an underestimation of green water usage. Depending on the chosen (available) water quality standard, the final grey WF can differ by up to a factor of 10 or more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving water quality standards.
NASA Astrophysics Data System (ADS)
VandeVondele, Joost; Rothlisberger, Ursula
2000-09-01
We present a method for calculating multidimensional free energy surfaces within the limited time scale of a first-principles molecular dynamics scheme. The sampling efficiency is enhanced using selected terms of a classical force field as a bias potential. This simple procedure yields a very substantial increase in sampling accuracy while retaining the high quality of the underlying ab initio potential surface and can thus be used for a parameter free calculation of free energy surfaces. The success of the method is demonstrated by the applications to two gas phase molecules, ethane and peroxynitrous acid, as test case systems. A statistical analysis of the results shows that the entire free energy landscape is well converged within a 40 ps simulation at 500 K, even for a system with barriers as high as 15 kcal/mol.
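The reweighting step that recovers unbiased free energies from such a biased trajectory has a standard form: if a bias potential V_b(x) was added to the ab initio potential, each frame is weighted by exp(+V_b/kT) before histogramming. The sketch below illustrates this with made-up stand-ins for trajectory output; the torsion bias and numbers are assumptions, not the paper's force-field terms.

```python
import numpy as np

kT = 0.008314 * 500        # kJ/mol at 500 K
rng = np.random.default_rng(1)

# Made-up trajectory data: a dihedral angle s_t (degrees) and the value
# of the classical bias potential V_b(x_t) (kJ/mol) at each saved frame.
s = rng.uniform(-180, 180, 20000)
v_bias = 5.0 * (1 + np.cos(np.radians(3 * s)))   # hypothetical torsion bias

# Reweight each frame by exp(+V_b/kT) to undo the bias, then histogram.
w = np.exp(v_bias / kT)
hist, edges = np.histogram(s, bins=72, weights=w, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

F = -kT * np.log(hist)     # free energy profile, up to an additive constant
F -= F.min()
print(f"apparent barrier: {F.max():.1f} kJ/mol")
```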
NASA Technical Reports Server (NTRS)
Fay, John F.
1990-01-01
A calculation is made of the stability of various relaxation schemes for the numerical solution of partial differential equations. A multigrid acceleration method is introduced, and its effects on stability are explored. A detailed stability analysis of a simple case is carried out and verified by numerical experiment. It is shown that the use of multigrids can speed convergence by several orders of magnitude without adversely affecting stability.
NASA Technical Reports Server (NTRS)
Walton, William C., Jr.
1960-01-01
This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved in the progress of the applications and which facilitate use of the method in conjunction with high-speed computing equipment.
Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.
2009-01-01
In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ, an approach with applications to investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
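A sketch of how the two decay models might be fit to pendulum amplitude data: linear amplitude decay for Coulomb-type friction (the Stanton model) versus exponential decay for viscous damping. The sample data are made up, and the fitting setup is an illustration rather than the authors' protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up peak amplitudes (degrees) of successive pendulum oscillations.
n = np.arange(20, dtype=float)                  # cycle index
rng = np.random.default_rng(2)
theta = 10.0 * np.exp(-0.08 * n) + rng.normal(0, 0.05, n.size)

def linear_decay(n, theta0, a):                 # Stanton / Coulomb-type model
    return theta0 - a * n

def exp_decay(n, theta0, b):                    # viscous-damping model
    return theta0 * np.exp(-b * n)

p_lin, _ = curve_fit(linear_decay, n, theta)
p_exp, _ = curve_fit(exp_decay, n, theta, p0=[10.0, 0.1])

rmse = lambda model, p: np.sqrt(np.mean((model(n, *p) - theta) ** 2))
print(f"linear decay RMSE:      {rmse(linear_decay, p_lin):.3f} deg")
print(f"exponential decay RMSE: {rmse(exp_decay, p_exp):.3f} deg")
```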
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1972-01-01
A relatively simple method is presented for including the effect of variable entropy at the boundary-layer edge in a heat transfer method developed previously. For each inviscid surface streamline an approximate shock-wave shape is calculated using a modified form of Maslen's method for inviscid axisymmetric flows. The entropy for the streamline at the edge of the boundary layer is determined by equating the mass flux through the shock wave to that inside the boundary layer. Approximations used in this technique allow the heating rates along each inviscid surface streamline to be calculated independently of the other streamlines. The shock standoff distances computed by the present method are found to compare well with those computed by Maslen's asymmetric method. Heating rates are presented for blunted circular and elliptical cones and a typical space shuttle orbiter at angles of attack. Variable-entropy effects are found to increase heating rates downstream of the nose to values significantly higher than those computed using normal-shock entropy, with turbulent heating rates increasing more than laminar rates. Effects of Reynolds number and angle of attack are also shown.
Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibanez, E.; Milligan, M.
2014-04-01
Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed according to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall thresholds play an important role in flash flood warning. A simple and easy method, using the Rational Equation to calculate the rainfall threshold, was proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three aspects of the rainfall losses, i.e. depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall was the sum of the net rainfall and the rainfall losses. The rainfall threshold was estimated from the critical rainfall after considering the watershed soil moisture. To demonstrate this method, Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method approximated those obtained from CFFSE and were in accordance with the observed rainfall during flash flood events. Thus the calculated results are reasonable and the method is effective. This study provides a quick and convenient way to calculate rainfall thresholds for flash flood warning for grass-roots staff and offers technical support for estimating rainfall thresholds.
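A sketch of the Rational-Equation step described above, using the common SI form Q = i·A/3.6 (Q in m³/s, net rainfall intensity i in mm/h, area A in km²). The catchment numbers and loss terms are invented for illustration, and the soil-moisture adjustment is left out.

```python
# Hypothetical small watershed upstream of the warning point.
A = 12.0            # drainage area, km^2
Q_crit = 45.0       # critical flow from channel geometry / Manning, m^3/s
duration_h = 1.0    # design rainfall duration, hours

# Rational Equation in SI units: Q = i_net * A / 3.6
# (i_net in mm/h, A in km^2, Q in m^3/s), so the net rainfall
# intensity that just produces the critical flow is:
i_net = 3.6 * Q_crit / A                 # mm/h
net_rainfall = i_net * duration_h        # mm

# Add the rainfall losses back to obtain the critical rainfall; the
# warning threshold then follows after a soil-moisture adjustment.
losses = 2.0 + 1.5 + 8.0   # depression storage + interception + infiltration, mm
critical_rainfall = net_rainfall + losses
print(f"1-h critical rainfall: {critical_rainfall:.1f} mm")
```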
Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-01-01
Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physical-based method to calculate the equivalent square field size was used as the basis of this study. The table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables by BJR and Venselaar, et al. with the average relative error percentage of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares for Siemens Primus Plus linear accelerator for both energies, 6 and 18MV. The mean relative differences of PDDs measurement for these fields and their equivalent square was approximately 1% or less. As a result, this method can be employed to calculate equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists Published by American Association of Medical Dosimetrists All rights reserved.
Shear, principal, and equivalent strains in equal-channel angular deformation
NASA Astrophysics Data System (ADS)
Xia, K.; Wang, J.
2001-10-01
The shear and principal strains involved in equal channel angular deformation (ECAD) were analyzed using a variety of methods. A general expression for the total shear strain calculated by integrating infinitesimal strain increments gave the same result as that from simple geometric considerations. The magnitude and direction of the accumulated principal strains were calculated based on a geometric and a matrix algebra method, respectively. For an intersecting angle of π/2, the maximum normal strain is 0.881 in the direction at π/8 (22.5 deg) from the longitudinal direction of the material in the exit channel. The direction of the maximum principal strain should be used as the direction of grain elongation. Since the principal direction of strain rotates during ECAD, the total shear strain and principal strains so calculated do not have the same meaning as those in a strain tensor. Consequently, the “equivalent” strain based on the second invariant of a strain tensor is no longer an invariant. Indeed, the equivalent strains calculated using the total shear strain and that using the total principal strains differed as the intensity of deformation increased. The method based on matrix algebra is potentially useful in mathematical analysis and computer calculation of ECAD.
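The π/2 numbers quoted in the abstract two entries below (maximum principal strain 0.881 at 22.5 deg) can be reproduced with a few lines of linear algebra: for a simple shear of γ = 2·cot(Φ/2) = 2, the largest principal logarithmic strain and its orientation follow from the eigendecomposition of F·Fᵀ. This sketch illustrates the geometry and is not the authors' own derivation.

```python
import numpy as np

phi = np.pi / 2                       # channel intersection angle
gamma = 2 / np.tan(phi / 2)           # total shear strain = 2

F = np.array([[1.0, gamma],           # deformation gradient of simple shear
              [0.0, 1.0]])
B = F @ F.T                           # left Cauchy-Green tensor

lam2, vecs = np.linalg.eigh(B)        # eigenvalues are squared stretches
i = np.argmax(lam2)
max_strain = 0.5 * np.log(lam2[i])    # principal logarithmic strain
angle = np.degrees(np.arctan2(vecs[1, i], vecs[0, i])) % 180.0

print(f"max principal strain: {max_strain:.3f}")   # ~0.881
print(f"direction: {angle:.1f} deg")               # ~22.5 deg from exit axis
```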
Instanton rate constant calculations close to and above the crossover temperature.
McConnell, Sean; Kästner, Johannes
2017-11-15
Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of the reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that a rate constant calculated purely from solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path is not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate the known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H2 + OH → H + H2O. © 2017 Wiley Periodicals, Inc.
Evanescent field characteristics of eccentric core optical fiber for distributed sensing.
Liu, Jianxia; Yuan, Libo
2014-03-01
Fundamental core-mode cutoff and evanescent field are considered for an eccentric core optical fiber (ECOF). A method has been proposed to calculate the core-mode cutoff by solving the eigenvalue equations of an ECOF. Using conformal mapping, the asymmetric geometrical structure can be transformed into a simple, easily solved axisymmetric optical fiber with three layers. The variation of the fundamental core-mode cut-off frequency (V(c)) is also calculated with different eccentric distances, wavelengths, core radii, and coating refractive indices. The fractional power of evanescent fields for ECOF is also calculated with the eccentric distances and coating refractive indices. These calculations are necessary to design the structural parameters of an ECOF for long-distance, single-mode distributed evanescent field absorption sensors.
Wang, L; Lovelock, M; Chui, C S
1999-12-01
To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneities range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, and dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. The result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Learning molecular energies using localized graph kernels
NASA Astrophysics Data System (ADS)
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
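A sketch of a basic random-walk graph kernel of the kind described, using the direct-product construction: walks on the product graph correspond to simultaneous walks on the two input graphs, and summing a geometric series over walk lengths gives the closed form K = 1ᵀ(I − λA_×)⁻¹1. The adjacency matrices below are toy examples, and GRAPE's actual kernel may differ in details such as edge weighting.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk kernel between two adjacency matrices."""
    Ax = np.kron(A1, A2)                  # direct-product graph
    n = Ax.shape[0]
    # K = sum_k lam^k * 1^T Ax^k 1 = 1^T (I - lam*Ax)^(-1) 1,
    # convergent for lam < 1 / spectral_radius(Ax).
    ones = np.ones(n)
    return ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones)

# Toy local atomic environments: weighted adjacency of three atoms each.
A1 = np.array([[0.0, 1.0, 0.5], [1.0, 0.0, 0.8], [0.5, 0.8, 0.0]])
A2 = np.array([[0.0, 0.9, 0.6], [0.9, 0.0, 0.7], [0.6, 0.7, 0.0]])
print(random_walk_kernel(A1, A1), random_walk_kernel(A1, A2))
```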
Tight-binding study of stacking fault energies and the Rice criterion of ductility in the fcc metals
NASA Astrophysics Data System (ADS)
Mehl, Michael J.; Papaconstantopoulos, Dimitrios A.; Kioussis, Nicholas; Herbranson, M.
2000-02-01
We have used the Naval Research Laboratory (NRL) tight-binding (TB) method to calculate the generalized stacking fault energy and the Rice ductility criterion in the fcc metals Al, Cu, Rh, Pd, Ag, Ir, Pt, Au, and Pb. The method works well for all classes of metals, i.e., simple metals, noble metals, and transition metals. We compared our results with full potential linear-muffin-tin orbital and embedded atom method (EAM) calculations, as well as experiment, and found good agreement. This is impressive, since the NRL-TB approach only fits to first-principles full-potential linearized augmented plane-wave equations of state and band structures for cubic systems. Comparable accuracy with EAM potentials can be achieved only by fitting to the stacking fault energy.
Bayesian Analysis of Evolutionary Divergence with Genomic Data under Diverse Demographic Models.
Chung, Yujin; Hey, Jody
2017-06-01
We present a new Bayesian method for estimating demographic and phylogenetic history using population genomic data. Several key innovations are introduced that allow the study of diverse models within an Isolation-with-Migration framework. The new method implements a 2-step analysis, with an initial Markov chain Monte Carlo (MCMC) phase that samples simple coalescent trees, followed by the calculation of the joint posterior density for the parameters of a demographic model. In step 1, the MCMC sampling phase, the method uses a reduced state space, consisting of coalescent trees without migration paths, and a simple importance sampling distribution without the demography of interest. Once obtained, a single sample of trees can be used in step 2 to calculate the joint posterior density for model parameters under multiple diverse demographic models, without having to repeat MCMC runs. Because migration paths are not included in the state space of the MCMC phase, but rather are handled by analytic integration in step 2 of the analysis, the method is scalable to a large number of loci with excellent MCMC mixing properties. With an implementation of the new method in the computer program MIST, we demonstrate the method's accuracy, scalability, and other advantages using simulated data and DNA sequences of two common chimpanzee subspecies: Pan troglodytes (P. t.) troglodytes and P. t. verus. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2005-01-01
A simple modification of the zeroth-order regular approximation (ZORA) in relativistic theory is suggested to suppress its erroneous gauge dependence to a high level of approximation. The method, coined gauge-independent ZORA (ZORA-GI), can be easily installed in any existing nonrelativistic quantum chemical package by programming simple one-electron matrix elements for the quasirelativistic Hamiltonian. Results of benchmark calculations obtained with ZORA-GI at the Hartree-Fock (HF) and second-order Møller-Plesset perturbation theory (MP2) level for dihalogens X2 (X=F,Cl,Br,I,At) are in good agreement with the results of four-component relativistic calculations (HF level) and experimental data (MP2 level). ZORA-GI calculations based on MP2 or coupled-cluster theory with single and double excitations and a perturbative inclusion of triple excitations [CCSD(T)] lead to accurate atomization energies and molecular geometries for the tetroxides of group VIII elements. With ZORA-GI/CCSD(T), an improved estimate for the atomization energy of hassium (Z=108) tetroxide is obtained.
Ashbaugh, H S; Garde, S; Hummer, G; Kaler, E W; Paulaitis, M E
1999-01-01
Conformational free energies of butane, pentane, and hexane in water are calculated from molecular simulations with explicit waters and from a simple molecular theory in which the local hydration structure is estimated based on a proximity approximation. This proximity approximation uses only the two nearest carbon atoms on the alkane to predict the local water density at a given point in space. Conformational free energies of hydration are subsequently calculated using a free energy perturbation method. Quantitative agreement is found between the free energies obtained from simulations and theory. Moreover, free energy calculations using this proximity approximation are approximately four orders of magnitude faster than those based on explicit water simulations. Our results demonstrate the accuracy and utility of the proximity approximation for predicting water structure as the basis for a quantitative description of n-alkane conformational equilibria in water. In addition, the proximity approximation provides a molecular foundation for extending predictions of water structure and hydration thermodynamic properties of simple hydrophobic solutes to larger clusters or assemblies of hydrophobic solutes. PMID:10423414
Identification of site frequencies from building records
Celebi, M.
2003-01-01
A simple procedure to identify site frequencies using earthquake response records from roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data such as free-field records in close proximity to the buildings and site characterization data are also used to estimate site frequencies and thereby to provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the site frequencies identified from the records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates of site frequencies compared with other methods. Therefore the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate site frequencies of those sites with similar geological structures.
Mathematical modelling of risk reduction in reinsurance
NASA Astrophysics Data System (ADS)
Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.
2017-01-01
The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of the yield falling below a certain level. The uncertainty in the return values stems from the use of expert evaluations and preliminary calculations, which yield expected return values and the corresponding risk levels. The proposed method allows for computationally simple schemes and algorithms for numerical calculation of the structure of efficient portfolios of reinsurance contracts for a given insurance company.
Nonlinear analysis of NPP safety against the aircraft attack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Králik, Juraj, E-mail: juraj.kralik@stuba.sk; Králik, Juraj, E-mail: kralik@fa.stuba.sk
The paper presents the nonlinear probabilistic analysis of the reinforced concrete buildings of a nuclear power plant under aircraft attack. The dynamic load is defined in time on the basis of airplane impact simulations considering the real stiffness, masses, direction and velocity of the flight. The dynamic response is calculated in the ANSYS system using the transient nonlinear analysis solution method. The damage to the concrete wall is evaluated in accordance with the NDRC standard, considering spalling, scabbing and perforation effects. Simple and detailed calculations of the wall damage are compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Gu, X; Lu, W
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on the majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shifts. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a certain PTV expansion voxel distance, lower MADs than those of OS were observed at closer distances (1 to 8 voxels). The DE results showed that the segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 were decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase the segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS has shown improved dosimetric accuracy over OS. The WS will provide a dosimetrically important label selection strategy in multi-atlas segmentation. CPRIT grant RP150485.
Traas, T P; Luttik, R; Jongbloed, R H
1996-08-01
In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty can be ascribed to uncertainty in species sensitivity as expressed by the NOECs; only a very small proportion is contributed by the BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of MPC5 (the fifth percentile of the MPC distribution), but the total prediction uncertainty of the MPC is quite large. To avoid unethical toxicity testing on mammalian or avian predators, the use of this uncertainty in the proposed method for calculating MPC distributions cannot be avoided. The fifth percentile of the MPC is suggested as a safe value for top predators.
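A minimal sketch of the Monte Carlo scheme described above; the log-logistic shape and scale parameters below are purely illustrative placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_loglogistic(scale, shape, size, rng):
    """Inverse-transform sampling from F(x) = 1 / (1 + (x/scale)**(-shape))."""
    u = rng.uniform(size=size)
    return scale * (u / (1.0 - u)) ** (1.0 / shape)

n = 100_000
# Illustrative parameters only (e.g., mg/kg food for NOEC, dimensionless BAF).
noec = sample_loglogistic(scale=5.0, shape=2.5, size=n, rng=rng)
baf = sample_loglogistic(scale=2.0, shape=3.0, size=n, rng=rng)

mpc = noec / baf                 # MPC = NOEC / BAF for each Monte Carlo draw
mpc5 = np.percentile(mpc, 5)     # fifth percentile as the suggested safe value
print(f"median MPC = {np.median(mpc):.3f}, MPC5 = {mpc5:.3f}")
```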
Lott, B.; Escande, L.; Larsson, S.; ...
2012-07-19
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. The method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step; the mean flux and spectral index (assuming the spectrum is a power-law distribution) in each interval are then calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
Moments of Inertia of Disks and Spheres without Integration
ERIC Educational Resources Information Center
Hong, Seok-Cheol; Hong, Seok-In
2013-01-01
Calculation of moments of inertia is often challenging for introductory-level physics students due to the use of integration, especially in non-Cartesian coordinates. Methods that do not employ calculus have been described for finding the rotational inertia of thin rods and other simple bodies. In this paper we use the parallel axis theorem and…
NASA Technical Reports Server (NTRS)
Tanimoto, T.
1983-01-01
A simple modification of Gilbert's formula to account for slight lateral heterogeneity of the Earth leads to a convenient formula to calculate synthetic long period seismograms. Partial derivatives are easily calculated, thus the formula is suitable for direct inversion of seismograms for lateral heterogeneity of the Earth.
NASA Astrophysics Data System (ADS)
Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.
2016-07-01
This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.
Orthogonal polynomial projectors for the Projector Augmented Wave (PAW) formalism.
NASA Astrophysics Data System (ADS)
Holzwarth, N. A. W.; Matthews, G. E.; Tackett, A. R.; Dunning, R. B.
1998-03-01
The PAW method for density functional electronic structure calculations developed by Blöchl (Phys. Rev. B 50, 17953 (1994)) and also used by our group (Phys. Rev. B 55, 2005 (1997)) has the numerical advantages of a pseudopotential technique while retaining the physics of an all-electron formalism. We describe a new method for generating the necessary set of atom-centered projector and basis functions, based on choosing the projector functions from a set of orthogonal polynomials multiplied by a localizing weight factor. Numerical benefits of the new scheme result from having direct control of the shape of the projector functions and from the use of a simple repulsive local potential term to eliminate "ghost state" problems, which can haunt calculations of this kind. We demonstrate the method by calculating the cohesive energies of CaF2 and Mo and the density of states of CaMoO4, which shows detailed agreement with LAPW results over a 66 eV range of energy including upper core, valence, and conduction band states.
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
Extending the capability of GYRE to calculate tidally forced stellar oscillations
NASA Astrophysics Data System (ADS)
Guo, Zhao; Gies, Douglas R.
2016-01-01
Tidally forced oscillations have been observed in many eccentric binary systems, such as KOI-54 and many other 'heartbeat stars'. The tidal response of the star can be calculated by solving a revised set of stellar oscillation equations. The open-source stellar oscillation code GYRE (Townsend & Teitler 2013) can be used to solve the free stellar oscillation equations in both adiabatic and non-adiabatic cases. It uses a novel matrix exponential method which avoids many difficulties of the classical shooting and relaxation methods. The new version also includes the effect of rotation in the traditional approximation. After describing the code flow of GYRE, we revise its subroutines and extend its capability to calculate tidally forced oscillations in both adiabatic and non-adiabatic cases, following the procedure in the CAFein code (Valsecchi et al. 2013). Finally, we compare the tidal eigenfunctions with those calculated by CAFein. More details of the revision, and a simple MATLAB version of the code, can be obtained upon request.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Newhauser, Wayne D.
2009-03-01
In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, WET calculations have required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No. 398, 2000, IAEA, Vienna) and a stopping-power-ratio approximation. These comparisons revealed that most methods were accurate for cases involving thin or low-Z targets; however, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
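As a rough illustration of the thin-target limit mentioned above (a generic stopping-power-ratio scaling, not the authors' derived formulas; all numbers below are placeholders):

```python
def wet_thin_target(thickness_cm, rho_material, rho_water,
                    mass_stopping_material, mass_stopping_water):
    """Thin-target water equivalent thickness (cm): physical thickness
    scaled by the density ratio and the mass-stopping-power ratio
    evaluated at the beam energy."""
    return (thickness_cm * (rho_material / rho_water)
            * (mass_stopping_material / mass_stopping_water))

# Placeholder values for a hypothetical 1 cm plastic slab in a proton beam.
print(wet_thin_target(1.0, rho_material=1.19, rho_water=1.0,
                      mass_stopping_material=4.9, mass_stopping_water=5.0))
```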
Extension of moment projection method to the fragmentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro
2017-04-15
The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA), and advantages of MPM are drawn.
NASA Astrophysics Data System (ADS)
Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack
2013-11-01
In this paper, a simple method for detection of multiple edge cracks in Euler-Bernoulli beams having two different types of cracks is presented based on energy equations. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among natural frequencies, crack locations and stiffness of equivalent springs is demonstrated. In the procedure, for detection of m cracks in a beam, 3m equations and natural frequencies of healthy and cracked beam in two different directions are needed as input to the algorithm. The main accomplishment of the presented algorithm is the capability to detect the location, severity and type of each crack in a multi-cracked beam. Concise and simple calculations along with accuracy are other advantages of this method. A number of numerical examples for cantilever beams including one and two cracks are presented to validate the method.
Interpretation of magnetotelluric resistivity and phase soundings over horizontal layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patella, D.
1976-02-01
The present paper deals with a new inverse method for quantitatively interpreting magnetotelluric apparent resistivity and phase-lag sounding curves over horizontally stratified earth sections. The recurrent character of the general formula relating the wave impedance of an (n-1)-layered medium to that of an n-layered medium suggests the use of the method of reduction to a lower boundary plane, as originally termed by Koefoed in the case of dc resistivity soundings. The layering parameters are thus directly derived by a simple iterative procedure. The method is applicable for any number of layers, but only when both apparent resistivity and phase-lag sounding curves are jointly available. Moreover, no sophisticated algorithm is required: a simple desk electronic calculator together with a sheet of two-layer apparent resistivity and phase-lag master curves is sufficient to reproduce earth sections which, in the range of equivalence, are all consistent with field data.
Michelitsch, Astrid; Rittmannsberger, Anna
2003-01-01
A reliable and simple differential pulse polarographic method is described for the determination of thymoquinone in black seed oil. The polarographic behaviour of thymoquinone was examined in various buffer systems over the pH range 5.0-10.0. Thymoquinone is reduced in a single, reversible peak at the dropping mercury electrode. The differential pulse polarogram showed a distinct peak in Sörensen buffer:methanol (3:7, v/v; pH 8.5) at a peak potential of -0.095 V (vs. silver/silver chloride electrode), and a plot of peak height against concentration was found to be linear over the range 0.2-15.0 microg/mL (R = 0.9998). The limit of detection was calculated to be 0.054 microg/mL. The polarographic method has been applied to determine thymoquinone in two black seed oil preparations available on the Austrian pharmaceutical market.
Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2), su(3), and g(2)
NASA Astrophysics Data System (ADS)
Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.
2016-08-01
We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a 'landscape' of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic 'girdle' method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.
SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, R; Wang, J; Zhong, H
2016-06-15
Purpose: To investigate whether CBCT and CT can be used simultaneously in radiomics analysis, and to establish a batch correction method for radiomics across two similar image modalities. Methods: Four sites including rectum, bladder, femoral head and lung were considered as regions of interest (ROI) in this study. For each site, 10 treatment-planning CT images were collected, and 10 CBCT images from the same site of the same patient were acquired at the first radiotherapy fraction. 253 radiomics features, selected in our test-retest study of rectal cancer CT (ICC > 0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT data sets were randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency of the clustering result with the site grouping was verified by a chi-square test for each dataset. Results: For simple scaling, 234 of the 253 features have correlation coefficient ρ > 0.8, among which 154 features have ρ > 0.9. For radiomics data after nonlinear correction, 240 of the 253 features have ρ > 0.8, among which 220 features have ρ > 0.9. Cluster analysis of mixed data shows that data from the four sites were almost perfectly separated for simple scaling (p = 1.29 × 10^-7, χ² test) and nonlinear correction (p = 5.98 × 10^-7, χ² test), similar to the cluster result for CT data alone (p = 4.52 × 10^-8, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
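A minimal sketch of the simple-scaling (z-score) step mentioned above, assuming feature matrices arranged as patients (rows) by radiomics features (columns); the array shapes and random data are placeholders:

```python
import numpy as np

def zscore_per_feature(features):
    """Scale each radiomics feature (column) to zero mean, unit variance."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0, ddof=1)
    return (features - mu) / sigma

# Scale CT and CBCT feature matrices separately so features from the two
# modalities become comparable before merging them for cluster analysis.
ct_scaled = zscore_per_feature(np.random.rand(10, 253))    # placeholder data
cbct_scaled = zscore_per_feature(np.random.rand(10, 253))  # placeholder data
merged = np.vstack([ct_scaled, cbct_scaled])
```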
NASA Astrophysics Data System (ADS)
Yahaya, NZ; Ramli, MR; Razak, NNANA; Abbas, Z.
2018-04-01
The finite element method (FEM) has been successfully used to model a simple rectangular microstrip sensor to determine the moisture content of Hevea rubber latex. The FEM simulation of the sensor and samples was implemented using COMSOL Multiphysics software. The simulation includes the calculation of the magnitude and phase of the reflection coefficient, which were compared with an analytical method; the results show good agreement. Field distributions of both the unloaded sensor and the sensor loaded with different percentages of moisture content were visualized using FEM in conjunction with COMSOL. The higher the moisture content of the sample, the more electric field loops were observed.
Petit and grand ensemble Monte Carlo calculations of the thermodynamics of the lattice gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murch, G.E.; Thorn, R.J.
1978-11-01
A direct Monte Carlo method for estimating the chemical potential in the petit canonical ensemble was applied to the simple cubic Ising-like lattice gas. The method is based on a simple relationship between the chemical potential and the potential energy distribution in a lattice gas at equilibrium, as derived independently by Widom, and by Jackson and Klein. Results are presented here for the chemical potential at various compositions and temperatures above and below the zero-field ferromagnetic and antiferromagnetic critical points. The same lattice gas model was reconstructed in the form of a restricted grand canonical ensemble, and results at several temperatures were compared with those from the petit canonical ensemble. The agreement was excellent in these cases.
Solving the MHD equations by the space time conservation element and solution element method
NASA Astrophysics Data System (ADS)
Zhang, Moujin; John Yu, S.-T.; Henry Lin, S.-C.; Chang, Sin-Chung; Blankson, Isaiah
2006-05-01
We apply the space-time conservation element and solution element (CESE) method to solve the ideal MHD equations with special emphasis on satisfying the divergence free constraint of magnetic field, i.e., ∇ · B = 0. In the setting of the CESE method, four approaches are employed: (i) the original CESE method without any additional treatment, (ii) a simple corrector procedure to update the spatial derivatives of magnetic field B after each time marching step to enforce ∇ · B = 0 at all mesh nodes, (iii) a constraint-transport method by using a special staggered mesh to calculate magnetic field B, and (iv) the projection method by solving a Poisson solver after each time marching step. To demonstrate the capabilities of these methods, two benchmark MHD flows are calculated: (i) a rotated one-dimensional MHD shock tube problem and (ii) a MHD vortex problem. The results show no differences between different approaches and all results compare favorably with previously reported data.
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2014-12-04
The aim of this study is to evaluate the ability of transit dosimetry, using a commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with a simple calibration method, to verify beam delivery by detecting large errors in the treatment room. Twenty-four fields of intensity-modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from the TPS and the EPID measurement on the same plane using a gamma index method with a 3% dose and 3 mm distance-to-agreement tolerance limit. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the average passing rate of the transit dose based on the gamma index was high, averaging 94.2%, when there was no error during beam delivery. The passing rate of the transit dose for the 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneity. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously delivered to the patient in the treatment room. This feasibility study suggests that transit dosimetry based on calculation with a commercial TPS and EPID measurement with simple calibration can provide information about large errors in treatment beam delivery.
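A simplified brute-force sketch of the 3%/3 mm global gamma comparison described above (not the clinical software used in the study; grid spacing and dose maps below are placeholders):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global 2D gamma analysis (3%/3 mm by default).

    dose_ref, dose_eval : 2D dose maps on the same grid
    spacing_mm          : grid spacing in mm
    Returns the fraction of reference points with gamma <= 1.
    """
    ny, nx = dose_ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    norm = dose_tol * dose_ref.max()          # global dose normalisation
    gammas = np.empty(dose_ref.shape)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            ddose2 = (dose_eval - dose_ref[iy, ix]) ** 2
            g2 = dist2 / dist_tol_mm ** 2 + ddose2 / norm ** 2
            gammas[iy, ix] = np.sqrt(g2.min())
    return (gammas <= 1.0).mean()

# Synthetic check: a uniform 2% difference passes a 3% criterion everywhere.
ref = np.ones((50, 50)); ev = ref * 1.02
print(gamma_pass_rate(ref, ev, spacing_mm=2.0))   # -> 1.0
```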
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, K; Wong, M; Ng, Y
Purpose: Interventional cardiac procedures utilize frequent fluoroscopy and cineangiography, which impose considerable radiation risk to patients, especially pediatric patients. Accurate calculation of effective dose is important in order to estimate cancer risk over the rest of their lifetime. This study evaluates the difference between effective doses calculated by Monte Carlo simulation and those estimated by locally derived conversion factors (CF-local) and by commonly quoted conversion factors from Karambatsakidou et al (CF-K). Methods: Effective doses (E) of 12 pediatric patients, aged between 2.5 and 19 years, who had undergone interventional cardiac procedures, were calculated using the PCXMC-2.0 software. Tube spectrum, irradiation geometry, exposure parameters and dose-area product (DAP) of each projection were included in the software calculation. Effective doses for each patient were also estimated by two methods: 1) CF-local: a conversion factor derived locally by generalizing the results of the 12 patients, multiplied by the DAP of each patient, gives E-local. 2) CF-K: the relevant factor from the above-mentioned literature, multiplied by the DAP of each patient, gives E-K. Results: The means of E, E-local and E-K were 16.01 mSv, 16.80 mSv and 22.25 mSv, respectively. A deviation of -29.35% to +34.85% between E and E-local, and a greater deviation of -28.96% to +60.86% between E and E-K, were observed. E-K overestimated the effective dose for patients aged 7.5-19. Conclusion: Estimating effective dose with conversion factors is a simple and quick way to assess the radiation risk of pediatric patients. This study showed that estimation by CF-local may bear an error of 35% when compared with Monte Carlo calculation. Using conversion factors derived in other studies may result in an even greater error, of up to 60%, due to factors that are not catered for in the estimation, including patient size, projection angles, exposure parameters, tube filtration, etc. Users must be aware of these potential inaccuracies when the simple conversion method is employed.
Matrix operator theory of radiative transfer. I - Rayleigh scattering.
NASA Technical Reports Server (NTRS)
Plass, G. N.; Kattawar, G. W.; Catchings, F. E.
1973-01-01
An entirely rigorous method for the solution of the equations for radiative transfer based on the matrix operator theory is reviewed. The advantages of the present method are: (1) all orders of the reflection and transmission matrices are calculated at once; (2) layers of any thickness may be combined, so that a realistic model of the atmosphere can be developed from any arbitrary number of layers, each with different properties and thicknesses; (3) calculations can readily be made for large optical depths and with highly anisotropic phase functions; (4) results are obtained for any desired value of the surface albedo including the value unity and for a large number of polar and azimuthal angles; (5) all fundamental equations can be interpreted immediately in terms of the physical interactions appropriate to the problem; and (6) both upward and downward radiance can be calculated at interior points from relatively simple expressions.
NASA Technical Reports Server (NTRS)
Papazian, Peter B.; Perala, Rodney A.; Curry, John D.; Lankford, Alan B.; Keller, J. David
1988-01-01
Using three different current injection methods and a simple voltage probe, transfer impedances for Solid Rocket Motor (SRM) joints, wire meshes, aluminum foil, Thorstrand and a graphite composite motor case were measured. In all cases, the surface current distribution for the particular current injection device was calculated analytically or by finite difference methods. The results of these calculations were used to generate a geometric factor, the ratio of total injected current to surface current density. The results were validated in several ways. For wire mesh measurements, results showed good agreement with calculated results for a 14 by 18 aluminum screen. SRM joint impedances were independently verified. The filament-wound case measurement results were validated only to the extent that their curve shape agrees with the expected form of transfer impedance for a homogeneous slab excited by a plane wave source.
A simple and accurate method for calculation of the structure factor of interacting charged spheres.
Wu, Chu; Chan, Derek Y C; Tabor, Rico F
2014-07-15
Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented on a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparisons between experimental structure factors with the pioneering theoretical model of Hayter-Penfold that uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q) and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Carlson, H. W.
1978-01-01
Sonic boom overpressures and signature duration may be predicted for the entire affected ground area for a wide variety of supersonic airplane configurations and spacecraft operating at altitudes up to 76 km in level flight or in moderate climbing or descending flight paths. The outlined procedure relies to a great extent on the use of charts to provide generation and propagation factors for use in relatively simple expressions for signature calculation. Computational requirements can be met by hand-held scientific calculators, or even by slide rules. A variety of correlations of predicted and measured sonic-boom data for airplanes and spacecraft serve to demonstrate the applicability of the simplified method.
Christensen, P L; Nielsen, J; Kann, T
1992-10-01
A simple procedure for making calibration mixtures of oxygen and the anesthetic gases isoflurane, enflurane, and halothane is described. One to ten grams of the anesthetic substance is evaporated in a closed, 11,361-cc glass bottle filled with oxygen gas at atmospheric pressure. The carefully mixed gas is used to calibrate anesthetic gas monitors. By comparison of calculated and measured volumetric results it is shown that at atmospheric conditions the volumetric behavior of anesthetic gas mixtures can be described with reasonable accuracy using the ideal gas law. A procedure is described for calculating the deviation from ideal gas behavior in cases in which this is needed.
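A back-of-the-envelope illustration of the ideal-gas estimate described above; the molar mass and ambient conditions below are assumed for illustration, not taken from the paper:

```python
R = 8.314            # J/(mol K)
T = 293.15           # K, assumed room temperature
P = 101325.0         # Pa, atmospheric pressure
V_bottle = 11.361e-3 # m^3 (the 11,361 cc bottle)

m = 5.0              # g of anesthetic evaporated (mid-range of 1-10 g)
M = 184.5            # g/mol, approximate molar mass of isoflurane (assumed)

n = m / M                     # moles of vapour
V_vapour = n * R * T / P      # m^3 occupied at ambient T, P (ideal gas law)
vol_percent = 100.0 * V_vapour / V_bottle
print(f"{vol_percent:.2f} vol% anesthetic in the bottle")   # ~5.7 vol%
```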
Suzuki, Kimichi; Morokuma, Keiji; Maeda, Satoshi
2017-10-05
We propose a multistructural microiteration (MSM) method for geometry optimization and reaction path calculation in large systems. MSM is a simple extension of the geometrical microiteration technique. In conventional microiteration, the structure of the non-reaction-center (surrounding) part is optimized, with the atoms of the reaction-center part fixed, before the reaction-center atoms are displaced. In MSM, the surrounding part is described as the weighted sum of multiple surrounding structures that are independently optimized. Geometric displacements of the reaction-center atoms are then performed in the mean field generated by the weighted sum of the surrounding parts. MSM was combined with the QM/MM-ONIOM method and applied to chemical reactions in aqueous solution or in an enzyme. In all three cases, MSM gave lower reaction energy profiles than the QM/MM-ONIOM-microiteration method over the entire reaction paths, with comparable computational costs. © 2017 Wiley Periodicals, Inc.
A simple method to incorporate water vapor absorption in the 15 microns remote temperature sounding
NASA Technical Reports Server (NTRS)
Dallu, G.; Prabhakara, C.; Conhath, B. J.
1975-01-01
The water vapor absorption in the 15 micron CO2 band, which can affect remotely sensed temperatures near the surface, is estimated with the help of an empirical method. This method is based on the differential absorption properties of water vapor in the 11-13 micron window region and does not require a detailed knowledge of the water vapor profile. With this approach, Nimbus 4 IRIS radiance measurements are inverted to obtain temperature profiles. These calculated profiles agree with radiosonde data to within about 2 C.
Using pyramids to define local thresholds for blob detection.
Shneier, M
1983-03-01
A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
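One simple variant of the pyramid idea sketched below (local thresholds taken from coarse-level block means; this is an illustrative stand-in, not Shneier's exact scheme, which derives thresholds from detected spots):

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging non-overlapping 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pyramid_local_threshold(img, levels=3):
    """Build a low-resolution pyramid, then threshold each region of the
    original image against its own coarse-level mean (the coarse pixel)."""
    coarse = img.astype(float)
    for _ in range(levels):
        coarse = downsample(coarse)
    scale = 2 ** levels
    mask = np.zeros(img.shape, dtype=bool)
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            block = img[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
            mask[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = block > coarse[i, j]
    return mask
```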
Parsons, T.; Blakely, R.J.; Brocher, T.M.
2001-01-01
The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
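A small sketch of the velocity-to-density conversion step; the coefficient and exponent below are the commonly quoted Gardner's-rule values for sedimentary rocks (density in g/cm³, velocity in m/s) and may differ from the choices made in the study:

```python
import numpy as np

def gardner_density(velocity_ms, a=0.31, beta=0.25):
    """Gardner's rule: density (g/cm^3) from P-wave velocity (m/s)."""
    return a * velocity_ms ** beta

# Convert a gridded velocity model to density before forward-calculating
# the gravity anomaly that is compared against observations.
velocity_grid = np.full((4, 4), 3000.0)   # placeholder: 3 km/s everywhere
density_grid = gardner_density(velocity_grid)
print(density_grid[0, 0])                 # ~2.29 g/cm^3
```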
Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels
Jiang, Nan; Ma, Shaochun
2015-01-01
In order to address the performance complexity of the various constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty in the determination of the various elastic constants, this paper presented a detailed structural analysis of the (dense-column) latticed concrete-gypsum composite panel and proposed a feasible technical solution for simplified calculation. In conformity with mechanical rules, a typical panel element was selected and divided into two homogeneous composite sub-elements and a secondary homogeneous element, respectively for solution, thus establishing an equivalence of the composite panel to a simple homogeneous panel and obtaining effective formulas for calculating the various elastic constants. Finally, the calculation results and the experimental results were compared, which revealed that the calculation method was correct and reliable, could meet the calculation needs of practical engineering, and provides a theoretical basis for simplified calculation in studies of composite panel elements and structures as well as a reference for calculations of other panels. PMID: 28793631
Superconducting critical temperature under pressure
NASA Astrophysics Data System (ADS)
González-Pedreros, G. I.; Baquero, R.
2018-05-01
The present record for the critical temperature of a superconductor is held by sulfur hydride (approx. 200 K) under very high pressure (approx. 56 GPa). As a consequence, the dependence of the superconducting critical temperature on pressure has become a subject of great interest, and a large number of papers on different aspects of this subject have since been published in the scientific literature. In this paper, we calculate the superconducting critical temperature as a function of pressure, Tc(P), by a simple method. Our method is based on the functional derivative of the critical temperature with respect to the Eliashberg function, δTc(P)/δα²F(ω). We obtain the needed Coulomb electron-electron repulsion parameter μ*(P) at each pressure in a consistent way by fitting it to the corresponding Tc using the linearized Migdal-Eliashberg equation. This method requires as input the knowledge of Tc at the starting pressure only. It applies to superconductors for which the Migdal-Eliashberg equations hold. We study Al and β-Sn, two weak-coupling low-Tc superconductors, and Nb, the strong-coupling element with the highest critical temperature. For Al, our results for Tc(P) show an excellent agreement with the calculations of Profeta et al., which are known to agree well with experiment. For β-Sn and Nb, we found good agreement with the experimental measurements reported in several works. This method has also been applied successfully to PdH elsewhere. Our method is simple, computationally light and gives very accurate results.
Rare-gas impurities in alkali metals: Relation to optical absorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meltzer, D.E.; Pinski, F.J.; Stocks, G.M.
1988-04-15
An investigation of the nature of rare-gas impurity potentials in alkali metals is performed. Results of calculations based on simple models are presented, which suggest the possibility of resonance phenomena. These could lead to widely varying values for the exponents which describe the shape of the optical-absorption spectrum at threshold in the Mahan-Nozières-de Dominicis theory. Detailed numerical calculations are then performed with the Korringa-Kohn-Rostoker coherent-potential-approximation method. The results of these highly realistic calculations show no evidence for the resonance phenomena, and lead to predictions for the shape of the spectra which are in contradiction to observations. Absorption and emission spectra are calculated for two of the systems studied, and their relation to experimental data is discussed.
Stream-profile analysis and stream-gradient index
Hack, John T.
1973-01-01
The generally regular three-dimensional geometry of drainage networks is the basis for a simple method of terrain analysis providing clues to bedrock conditions and other factors that determine topographic forms. On a reach of any stream, a gradient-index value can be obtained which allows meaningful comparisons of channel slope on streams of different sizes. The index is believed to reflect stream power or competence and is simply the product of the channel slope at a point and the channel length measured along the longest stream above the point where the calculation is made. In an adjusted topography, changes in gradient-index values along a stream generally correspond to differences in bedrock or introduced load. In any landscape the gradient index of a stream is related to total relief and stream regimen. Thus, climate, tectonic events, and geomorphic history must be considered in using the gradient index. Gradient-index values can be obtained quickly by simple measurements on topographic maps, or they can be obtained by more sophisticated photogrammetric measurements that involve simple computer calculations from x, y, z coordinates.
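The bookkeeping is simple enough to sketch in a few lines (hypothetical numbers: a reach dropping 5 m over 1 km of channel, 12 km down the longest stream):

```python
def gradient_index(drop_m, reach_length_m, distance_along_stream_m):
    """Hack's stream-gradient index: channel slope times channel length
    measured along the longest stream above the point of calculation."""
    slope = drop_m / reach_length_m
    return slope * distance_along_stream_m

print(gradient_index(5.0, 1000.0, 12000.0))   # 60.0 gradient-meters
```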
Designing stellarator coils by a modified Newton method using FOCUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
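Outside of FOCUS itself, the basic iteration can be sketched as follows; this is a generic modified Newton step with a simple positive-definiteness safeguard standing in for a full modified Cholesky factorization, not the code's actual implementation:

```python
import numpy as np

def modified_newton_step(grad, hess, tau0=1e-6):
    """Solve (H + tau*I) dx = -g, increasing tau until the shifted Hessian
    admits a Cholesky factorization (i.e., is positive definite)."""
    n = len(grad)
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(hess + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, tau0)
    # Two triangular solves replace an explicit inverse of the Hessian.
    y = np.linalg.solve(L, -grad)
    return np.linalg.solve(L.T, y)

# One Newton iteration on a toy quadratic f(x) = 0.5 x^T A x - b^T x,
# whose exact minimiser is reached in a single step.
A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
x = np.zeros(2)
x = x + modified_newton_step(A @ x - b, A)
print(x)
```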
Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.
Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M
2015-03-06
We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
An Investigation of the Compatibility of Radiation and Convection Heat Flux Measurements
NASA Technical Reports Server (NTRS)
Liebert, Curt H.
1996-01-01
A method for determining time-resolved absorbed surface heat flux and surface temperature in radiation and convection environments is described. The method is useful for verification of aerodynamic, heat transfer and durability models. A practical heat flux gage fabrication procedure and a simple one-dimensional inverse heat conduction model and calculation procedure are incorporated in this method. The model provides an estimate of the temperature and heat flux gradient in the direction of heat transfer through the gage. This paper discusses several successful time-resolved tests of this method in hostile convective heating and cooling environments.
Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron
NASA Astrophysics Data System (ADS)
Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.
2010-07-01
Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not granted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
NASA Astrophysics Data System (ADS)
Kristyán, Sándor
1997-11-01
In the author's previous work (Chem. Phys. Lett. 247 (1995) 101 and Chem. Phys. Lett. 256 (1996) 229) a simple quasi-linear relationship was introduced between the number of electrons, N, participating in any molecular system and the correlation energy: -0.035(N-1) > Ecorr [hartree] > -0.045(N-1). This relationship was developed to estimate correlation energy more accurately and immediately in ab initio calculations by using the partial charges of atoms in the molecule, which are easily obtained after Hartree-Fock self-consistent field (HF-SCF) calculations. The method is compared to the well-known B3LYP, MP2, CCSD and G2M methods. Correlation energy estimates for negatively (-1) charged atomic ions are also reported.
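As a quick sanity check of the quoted bounds (not an example from the paper): for the water molecule, N = 10 electrons, so the relationship predicts a correlation energy between -0.035 × 9 = -0.315 hartree and -0.045 × 9 = -0.405 hartree, a bracket that indeed contains the commonly cited value of roughly -0.37 hartree.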
Zorębski, Edward; Zorębski, Michał
2014-01-01
The so-called Beyer nonlinearity parameter B/A is calculated for 1,2- and 1,3-propanediol, 1,2-, 1,3-, and 1,4-butanediol, as well as 2-methyl-2,4-pentanediol by means of a thermodynamic method. The calculations are made for temperatures from (293.15 to 318.15) K and pressures up to 100 MPa. The decrease in B/A values with the increasing pressure is observed. In the case of 1,3-butanediol, the results are compared with corresponding literature data. The consistency is very satisfactory. A simple relationship between the internal pressure and B/A nonlinearity parameter has also been studied. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhuo, Shuping; Wei, Jichong; Ju, Guanzhi
The intrapair and interpair correlation energies of the MF2 (M = Be, Mg, Ca) set of molecules are calculated and analysed, and the transferability of the inner-core correlation effects of Mδ+ is investigated. A detailed analysis comparing the correlation energies of the neutral atoms with those of the corresponding Mδ+ and Fδ-/2 ions is given in terms of the correlation contribution of each component. The study reveals that the total correlation energy of the MF2 molecules can be obtained by summing the correlation contributions of the Mδ+ and two Fδ-/2 components. This simple estimation method sheds light on the importance of searching for useful means of calculating the electron correlation energy of large biological systems.
Distributed Parameter Analysis of Pressure and Flow Disturbances in Rocket Propellant Feed Systems
NASA Technical Reports Server (NTRS)
Dorsch, Robert G.; Wood, Don J.; Lightner, Charlene
1966-01-01
A digital distributed parameter model for computing the dynamic response of propellant feed systems is formulated. The analytical approach used is an application of the wave-plan method of analyzing unsteady flow. Nonlinear effects are included. The model takes into account locally high compliances at the pump inlet and at the injector dome region. Examples of the calculated transient and steady-state periodic responses of a simple hypothetical propellant feed system to several types of disturbances are presented. Included are flow disturbances originating from longitudinal structural motion, gimbaling, throttling, and combustion-chamber coupling. The analytical method can be employed for analyzing developmental hardware and offers a flexible tool for the calculation of unsteady flow in these systems.
Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.; Esmaeili, S.
2015-12-01
We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; the forward model of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from the GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. In order to do this, the Jacobian matrix and the derivatives of the observation data in respect to the model parameters are computed using a finite differences method. Next, the iterative process of building new models by updating the initial values starts in order to minimize the objective function. Another measure of the goodness of the final acceptable model is the correlation coefficient which is calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
IOL calculation using paraxial matrix optics.
Haigis, Wolfgang
2009-07-01
Matrix methods have a long tradition in paraxial physiological optics. They are especially suited to describe and handle optical systems in a simple and intuitive manner. While these methods are more and more applied to calculate the refractive power(s) of toric intraocular lenses (IOL), they are hardly used in routine IOL power calculations for cataract and refractive surgery, where analytical formulae are commonly utilized. Since these algorithms are also based on paraxial optics, matrix optics can offer rewarding approaches to standard IOL calculation tasks, as will be shown here. Some basic concepts of matrix optics are introduced and the system matrix for the eye is defined, and its application in typical IOL calculation problems is illustrated. Explicit expressions are derived to determine: predicted refraction for a given IOL power; necessary IOL power for a given target refraction; refractive power for a phakic IOL (PIOL); predicted refraction for a thick lens system. Numerical examples with typical clinical values are given for each of these expressions. It is shown that matrix optics can be applied in a straightforward and intuitive way to most problems of modern routine IOL calculation, in thick or thin lens approximation, for aphakic or phakic eyes.
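To illustrate the flavour of the approach (a generic paraxial ray-transfer sketch under thin-lens assumptions, not the formulas derived in the paper): refraction and reduced-distance translation matrices multiply into a system matrix whose lower-left entry is minus the equivalent power, reproducing the Gullstrand equation. All numerical values below are hypothetical:

```python
import numpy as np

def refraction(power_diopters):
    """Thin-element refraction matrix for a surface/lens of given power."""
    return np.array([[1.0, 0.0], [-power_diopters, 1.0]])

def translation(distance_m, n=1.336):
    """Translation over a reduced distance d/n (aqueous index assumed)."""
    return np.array([[1.0, distance_m / n], [0.0, 1.0]])

# Hypothetical pseudophakic eye: 43 D cornea, 21 D IOL, 4.5 mm separation.
K, P_iol, d = 43.0, 21.0, 4.5e-3
system = refraction(P_iol) @ translation(d) @ refraction(K)
P_eq = -system[1, 0]
# Same number from the Gullstrand equation: P1 + P2 - (d/n) P1 P2.
print(P_eq, K + P_iol - (d / 1.336) * K * P_iol)   # ~60.96 D both ways
```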
NASA Astrophysics Data System (ADS)
Pan, Kok-Kwei
We have generalized the linked cluster expansion method to solve more many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums up all connected diagrams to a certain order of the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple tau dependence of the standard basis operators have been used to facilitate the evaluation of the integration procedures in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of thermodynamic quantities of the single-band Hubbard model has been obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three dimensional simple cubic and body-centered cubic systems have been discussed from the qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperature, in both magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums up all perturbation terms to certain order and estimates the result using a well -developed and highly successful extrapolation method (the standard ratio method). The dependence of critical temperature on the crystal-field potential and the magnetization as a function of temperature and crystal-field potential are shown. The critical behaviors at zero temperature are also shown. The range of the crystal-field potential for Ni(2+) compounds is roughly estimated based on this model using known experimental results.
New Method for Solving Inductive Electric Fields in the Ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.
2005-12-01
We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large-scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances; the output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan Benton
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
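The lecture's opening example is easy to reproduce. Below is a minimal sketch of the π-estimation demonstration described in the outline: sample points uniformly in the unit square and count the fraction landing inside the quarter circle. The function name and sample count are illustrative, not taken from the slides.

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that falls inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi / 4
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))
```

By the Law of Large Numbers the estimate converges to π, and by the Central Limit Theorem its error shrinks roughly as 1/√N.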
Stationarity conditions for physicochemical processes in the interior ballistics of a gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipanov, A.M.
1995-09-01
An original method is proposed for ensuring time-invariant (stationary) interior ballistic parameters in the postprojectile space of a gun barrel. Stationarity of the parameters is achieved by endowing the solid-propellant charge with special structures that produce the required pressure conditions and linear growth of the projectile velocity. Simple relations are obtained for calculating the principal characteristics.
Use of moments of momentum to predict the crystal habit in potassium hydrogen phthalate
NASA Technical Reports Server (NTRS)
Barber, Patrick G.; Petty, John T.
1990-01-01
A relatively simple calculation of the moments of momentum predicts the morphological order of crystal faces for potassium hydrogen phthalate. The effects on the habit caused by the addition of monomeric, dimeric, and larger aggregates during crystal growth are considered. The first six of the seven observed crystal faces are predicted with this method.
Calculation and visualization of free energy barriers for several VOCs and TNT in HKUST-1.
Sarkisov, Lev
2012-11-28
A simple protocol based on a lattice representation of the porous space is proposed to locate and characterize the free energy bottlenecks in rigid metal organic frameworks. As an illustration, we apply this method to HKUST-1 to demonstrate that there are impassable free energy barriers for molecules of trinitrotoluene in this structure.
Spatial correlation of probabilistic earthquake ground motion and loss
Wesson, R.L.; Perkins, D.M.
2001-01-01
Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for the calculation of the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses; interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of their strong impact on estimates of earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, and may reduce the amount of calculation required.
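The comparison the abstract describes, a direct matrix-based calculation of portfolio loss variance checked against a simple Monte Carlo simulation, can be sketched in a few lines. The portfolio size, loss standard deviations and correlation matrix below are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical annual loss statistics for a 3-site portfolio
sigma = np.array([1.0, 2.0, 1.5])           # std. dev. of annual loss per site
rho = np.array([[1.0, 0.6, 0.3],            # assumed spatial correlation of losses
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])

# Direct (matrix-based) variance of the portfolio loss L = sum_i L_i:
# Var(L) = sum_ij sigma_i * sigma_j * rho_ij
cov = np.outer(sigma, sigma) * rho
var_direct = cov.sum()

# Monte Carlo check: draw correlated losses and take the sample variance
rng = np.random.default_rng(42)
samples = rng.multivariate_normal(np.zeros(3), cov, size=200_000)
var_mc = samples.sum(axis=1).var()

print(var_direct, var_mc)  # the two estimates should agree closely
```

Consistent with the abstract, increasing the off-diagonal correlations inflates the portfolio variance and makes extreme annual losses more likely.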
Measuring Viscosities of Gases at Atmospheric Pressure
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Hoshang, Chegini
1987-01-01
Variant of general capillary method for measuring viscosities of unknown gases based on use of thermal mass-flowmeter section for direct measurement of pressure drops. In technique, flowmeter serves dual role, providing data for determining volume flow rates and serving as well-characterized capillary-tube section for measurement of differential pressures across it. New method simple, sensitive, and adaptable for absolute or relative viscosity measurements of low-pressure gases. Suited for very complex hydrocarbon mixtures where limitations of classical theory and compositional errors make theoretical calculations less reliable.
Gauge-independent decoherence models for solids in external fields
NASA Astrophysics Data System (ADS)
Wismer, Michael S.; Yakovlev, Vladislav S.
2018-04-01
We demonstrate gauge-invariant modeling of an open system of electrons in a periodic potential interacting with an optical field. For this purpose, we adapt the covariant derivative to the case of mixed states and put forward a decoherence model that has simple analytical forms in the length and velocity gauges. We demonstrate our methods by calculating harmonic spectra in the strong-field regime and numerically verifying the equivalence of the deterministic master equation to the stochastic Monte Carlo wave-function method.
NASA Technical Reports Server (NTRS)
Tanimoto, T.
1984-01-01
A simple modification of Gilbert's formula to account for slight lateral heterogeneity of the earth leads to a convenient formula to calculate synthetic long period seismograms. Partial derivatives are easily calculated, thus the formula is suitable for direct inversion of seismograms for lateral heterogeneity of the earth. Previously announced in STAR as N83-29893
NASA Astrophysics Data System (ADS)
Tchitchekova, Deyana S.; Morthomas, Julien; Ribeiro, Fabienne; Ducher, Roland; Perez, Michel
2014-07-01
A novel method for accurate and efficient evaluation of the change in energy barriers for carbon diffusion in ferrite under heterogeneous stress is introduced. This method, called Linear Combination of Stress States, is based on the knowledge of the effects of simple stresses (uniaxial or shear) on these diffusion barriers. It is then assumed that the change in energy barriers under a complex stress can be expressed as a linear combination of these already known simple stress effects. The modifications of energy barriers by either uniaxial traction/compression or shear stress are determined by means of atomistic simulations with the Climbing Image-Nudged Elastic Band method and are stored as a set of functions. The results of this method are compared to the predictions of anisotropic elasticity theory. It is shown that linear anisotropic elasticity fails to predict the correct energy barrier variation with stress (especially with shear stress), whereas the proposed method provides correct energy barrier variation for stresses up to ˜3 GPa. This study provides a basis for the development of multiscale models of diffusion under non-uniform stress.
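A minimal sketch of the Linear Combination of Stress States idea follows: the barrier changes computed for simple stresses are stored as one-dimensional functions, and the effect of a complex stress is approximated by summing them. The stress grid, placeholder barrier data and function names are assumptions for illustration; real inputs would come from CI-NEB calculations.

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical CI-NEB results: barrier change (eV) vs simple stress (GPa),
# one stored function per stress component, as the method prescribes.
stress_grid = np.linspace(-3.0, 3.0, 13)
dE_uniaxial_x = interp1d(stress_grid, -0.010 * stress_grid)    # placeholder data
dE_uniaxial_y = interp1d(stress_grid, 0.004 * stress_grid)
dE_shear_xy = interp1d(stress_grid, 0.002 * stress_grid ** 2)  # placeholder data

def barrier_change(sigma_xx, sigma_yy, tau_xy):
    """Linear Combination of Stress States: the barrier change under a
    complex stress is assumed to be the sum of the stored simple-stress effects."""
    return float(dE_uniaxial_x(sigma_xx) + dE_uniaxial_y(sigma_yy) + dE_shear_xy(tau_xy))

print(barrier_change(1.2, -0.5, 0.8))  # eV, for a hypothetical stress state
```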
An evolutionary view of chromatography data systems used in bioanalysis.
McDowall, R D
2010-02-01
This is a personal view of how chromatographic peak measurement and analyte quantification for bioanalysis have evolved from the manual methods of 1970 to the electronic working possible in 2010. In four decades there have been major changes from a simple chart recorder output (that was interpreted and quantified manually) through simple automation of peak measurement, calculation of standard curves and quality control values and instrument control to the networked chromatography data systems of today that are capable of interfacing with Laboratory Information Management Systems and other IT applications. The incorporation of electronic signatures to meet regulatory requirements offers a great opportunity for business improvement and electronic working.
Models for Models: An Introduction to Polymer Models Employing Simple Analogies
NASA Astrophysics Data System (ADS)
Tarazona, M. Pilar; Saiz, Enrique
1998-11-01
An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
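The simplest model in that progression, the freely jointed chain, is also the easiest to turn into a short numerical experiment. The sketch below (names and parameters are illustrative) estimates the mean-square end-to-end distance by Monte Carlo and can be checked against the textbook result ⟨R²⟩ = N b².

```python
import numpy as np

def fjc_end_to_end(n_bonds: int, bond_length: float, n_chains: int, seed=0):
    """Monte Carlo estimate of <R^2> for a freely jointed chain:
    each bond is an independent, uniformly oriented vector of fixed length."""
    rng = np.random.default_rng(seed)
    # Uniform directions on the sphere via normalized Gaussian vectors
    v = rng.normal(size=(n_chains, n_bonds, 3))
    v *= bond_length / np.linalg.norm(v, axis=2, keepdims=True)
    r = v.sum(axis=1)                 # end-to-end vector of each chain
    return (r ** 2).sum(axis=1).mean()

n, b = 100, 1.0
print(fjc_end_to_end(n, b, 20_000))   # theory: <R^2> = n * b^2 = 100
```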
Simple technique to measure toric intraocular lens alignment and stability using a smartphone.
Teichman, Joshua C; Baig, Kashif; Ahmed, Iqbal Ike K
2014-12-01
Toric intraocular lenses (IOLs) are commonly implanted to correct corneal astigmatism at the time of cataract surgery. Their use requires preoperative calculation of the axis of implantation and postoperative measurement to determine whether the IOL has been implanted with the proper orientation. Moreover, toric IOL alignment stability over time is important for the patient and for the longitudinal evaluation of toric IOLs. We present a simple, inexpensive, and precise method to measure the toric IOL axis using a camera-enabled cellular phone (iPhone 5S) and computer software (ImageJ). Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Collapse limit states of reinforced earth retaining walls
NASA Astrophysics Data System (ADS)
Bolton, M. D.; Pang, P. L. R.
The use of systems of earth reinforcement or anchorage is gaining in popularity. It therefore becomes important to assess whether the design methods adopted for such constructions represent valid predictions of realistic limit states. Confidence in the effectiveness of limit state criteria can only be gained if a wide variety of representative limit states is observed. Over 80 centrifugal model tests of simple reinforced earth retaining walls were carried out, with the main purpose of clarifying the nature of appropriate collapse criteria. Collapses due to an insufficiency of friction were shown to be repeatable and therefore subject to fairly simple limit state calculations.
Li, Ming; Zhang, Jingjing; Jiang, Jie; Zhang, Jing; Gao, Jing; Qiao, Xiaolin
2014-04-07
In this paper, a novel approach based on paper spray ionization coupled with ion mobility spectrometry (PSI-IMS) was developed for rapid, in situ detection of cocaine residues in liquid samples and on various surfaces (e.g. glass, marble, skin, wood, fingernails), without tedious sample pretreatment. The obvious advantages of PSI are its low cost, easy operation and simple configuration without using nebulizing gas or discharge gas. Compared with mass spectrometry, ion mobility spectrometry (IMS) offers low cost, easy operation, and a simple configuration without requiring a vacuum system. Therefore, IMS is a more congruous detection method for PSI in the case of rapid, in situ analysis. For the analysis of cocaine residues in liquid samples, dynamic responses from 5 μg mL⁻¹ to 200 μg mL⁻¹ with a linear coefficient (R²) of 0.992 were obtained. In this case, the limit of detection (LOD) was calculated to be 2 μg mL⁻¹ at a signal-to-noise ratio (S/N) of 3, with a relative standard deviation (RSD) of 6.5% for 11 measurements (n = 11). Cocaine residues on various surfaces such as metal, glass, marble, wood, skin, and fingernails were also directly analyzed by wiping the surfaces with a piece of paper. The LOD was calculated to be as low as 5 ng (S/N = 3, RSD = 6.3%, n = 11). This demonstrates the capability of the PSI-IMS method for direct detection of cocaine residues at scenes of cocaine administration. Our results show that PSI-IMS is a simple, sensitive, rapid and economical method for in situ detection of this illicit drug, which could help governments to combat drug abuse.
Do simple screening statistical tools help to detect reporting bias?
Pirracchio, Romain; Resche-Rigon, Matthieu; Chevret, Sylvie; Journois, Didier
2013-09-02
As a result of reporting bias, or frauds, false or misunderstood findings may represent the majority of published research claims. This article provides simple methods that might help to appraise the quality of the reporting of randomized controlled trials (RCTs). The evaluation roadmap proposed herein relies on four steps: evaluation of the distribution of the reported variables; evaluation of the distribution of the reported p values; data simulation using a parametric bootstrap; and explicit computation of the p values. Such an approach was illustrated using published data from a retracted RCT comparing a hydroxyethyl starch versus albumin-based priming for cardiopulmonary bypass. Despite obvious non-normal distributions, several variables were presented as if they were normally distributed. The set of 16 p values testing for differences in baseline characteristics across randomized groups did not follow a uniform distribution on [0,1] (p = 0.045). The p values obtained by explicit computation differed from the results reported by the authors for two variables: urine output at 5 hours (calculated p value < 10⁻⁶, reported p ≥ 0.05) and packed red blood cells (PRBC) transfused during surgery (calculated p value = 0.08; reported p < 0.05). Finally, the parametric bootstrap found a p value > 0.05 in only 5 of the 10,000 simulated datasets for urine output 5 hours after surgery; for PRBC transfused during surgery, only 3,920 of the 10,000 simulated datasets gave a p value below 0.05, i.e., the corresponding p value had less than a 50% chance of being below 0.05. Such simple evaluation methods might offer some warning signals. However, it should be emphasized that such methods do not allow one to conclude that error or fraud is present; rather, they should be used to justify requesting access to the raw data.
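The roadmap's simulation step can be sketched as follows. Given only the reported group means, standard deviations and sample sizes (the numbers below are hypothetical, not those of the retracted trial), one can recompute the p value explicitly and then use a parametric bootstrap to see how often data consistent with the summaries would cross the significance threshold.

```python
import numpy as np
from scipy import stats

# Hypothetical reported summary statistics for one variable in two arms
mean_a, sd_a, n_a = 310.0, 160.0, 15   # group A (illustrative values)
mean_b, sd_b, n_b = 480.0, 170.0, 15   # group B (illustrative values)

# Explicit computation of the p value from the reported summaries
t, p = stats.ttest_ind_from_stats(mean_a, sd_a, n_a, mean_b, sd_b, n_b)
print("recomputed p:", p)

# Parametric bootstrap: simulate datasets consistent with the summaries
# and count how often the test crosses the 0.05 threshold.
rng = np.random.default_rng(1)
hits = 0
for _ in range(10_000):
    xa = rng.normal(mean_a, sd_a, n_a)
    xb = rng.normal(mean_b, sd_b, n_b)
    if stats.ttest_ind(xa, xb).pvalue < 0.05:
        hits += 1
print("fraction of simulations with p < 0.05:", hits / 10_000)
```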
Thermophysical properties of simple liquid metals: A brief review of theory
NASA Technical Reports Server (NTRS)
Stroud, David
1993-01-01
In this paper, we review the current theory of the thermophysical properties of simple liquid metals. The emphasis is on thermodynamic properties, but we also briefly discuss the nonequilibrium properties of liquid metals. We begin by defining a 'simple liquid metal' as one in which the valence electrons interact only weakly with the ionic cores, so that the interaction can be treated by perturbation theory. We then write down the equilibrium Hamiltonian of a liquid metal as a sum of five terms: the bare ion-ion interaction, the electron-electron interaction, the bare electron-ion interaction, and the kinetic energies of electrons and ions. Since the electron-ion interaction can be treated by perturbation theory, the electronic part contributes in two ways to the Helmholtz free energy: it gives a density-dependent term which is independent of the arrangement of ions, and it acts to screen the ion-ion interaction, giving rise to effective ion-ion pair potentials which are density-dependent, in general. After sketching the form of a typical pair potential, we briefly enumerate some methods for calculating the ionic distribution function and hence the Helmholtz free energy of the liquid: Monte Carlo simulations, molecular dynamics simulations, and thermodynamic perturbation theory. The final result is a general expression for the Helmholtz free energy of the liquid metal. It can be used to calculate a wide range of thermodynamic properties of simple metal liquids, which we enumerate. They include not only a range of thermodynamic coefficients of both metals and alloys, but also many aspects of the phase diagram, including freezing curves of pure elements and phase diagrams of liquid alloys (including liquidus and solidus curves). We briefly mention some key discoveries resulting from previous applications of this method, and point out that the same methods work for other materials not normally considered to be liquid metals (such as colloidal suspensions, in which the suspended microspheres behave like ions screened by the salt solution in which they are suspended). We conclude with a brief discussion of some non-equilibrium (i.e., transport) properties which can be treated by an extension of these methods. These include electrical resistivity, thermal conductivity, viscosity, atomic self-diffusion coefficients, concentration diffusion coefficients in alloys, surface tension and thermal emissivity. Finally, we briefly mention two methods by which the theory might be extended to non-simple liquid metals: these are empirical techniques (i.e., empirical two- and three-body potentials), and numerical many-body approaches. Both may be potentially applicable to extremely complex systems, such as nonstoichiometric liquid semiconductor alloys.
Adulteration of Ginkgo biloba products and a simple method to improve its detection.
Wohlmuth, Hans; Savage, Kate; Dowell, Ashley; Mouatt, Peter
2014-05-15
Extracts of ginkgo (Ginkgo biloba) leaf are widely available worldwide in herbal medicinal products, dietary supplements, botanicals and complementary medicines, and several pharmacopoeias contain monographs for ginkgo leaf, leaf extract and finished products. Being a high-value botanical commodity, ginkgo extracts may be the subject of economically motivated adulteration. We analysed eight ginkgo leaf retail products purchased in Australia and Denmark and found compelling evidence of adulteration with flavonol aglycones in three of these. The same three products also contained genistein, an isoflavone that does not occur in ginkgo leaf. Although the United States Pharmacopeia - National Formulary (USP-NF) and the British and European Pharmacopoeias stipulate a required range for flavonol glycosides in ginkgo extract, the prescribed assays quantify flavonol aglycones. This means that these pharmacopoeial methods are not capable of detecting adulteration of ginkgo extract with free flavonol aglycones. We propose a simple modification of the USP-NF method that addresses this problem: by assaying for flavonol aglycones pre and post hydrolysis the content of flavonol glycosides can be accurately estimated via a simple calculation. We also recommend a maximum limit be set for free flavonol aglycones in ginkgo extract. Copyright © 2014 Elsevier GmbH. All rights reserved.
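The proposed pre/post-hydrolysis correction reduces to a one-line calculation. The sketch below assumes a mass conversion factor of about 2.51 (the factor commonly cited in the USP ginkgo assay for converting aglycones to glycoside equivalents); the numerical inputs are hypothetical.

```python
def flavonol_glycosides_pct(aglycones_post_pct: float,
                            aglycones_pre_pct: float,
                            conversion_factor: float = 2.51) -> float:
    """Estimate flavonol glycoside content from aglycone assays run before
    and after hydrolysis. Only aglycones released by hydrolysis (post minus
    pre) are counted, so free-aglycone adulteration is excluded. The mass
    conversion factor (~2.51 is commonly cited) converts aglycones to their
    glycoside equivalents."""
    bound_aglycones = aglycones_post_pct - aglycones_pre_pct
    return bound_aglycones * conversion_factor

# Adulterated sample: high free aglycones, little genuine glycoside
print(flavonol_glycosides_pct(aglycones_post_pct=10.0, aglycones_pre_pct=7.5))
```

Because only the hydrolysis-released (bound) aglycones are counted, spiking a sample with free aglycones no longer inflates the apparent glycoside content.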
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benoist, P.
The calculation of diffusion coefficients in a lattice necessitates the knowledge of a correct method of weighting the free paths of the different constituents. An unambiguous definition of this weighting method is given here, based on the calculation of leakages from a zone of a reactor. The formulation obtained, which is both simple and general, reduces the calculation of diffusion coefficients to that of collision probabilities in the different media; it reveals in the expression for the radial coefficient the series of the terms of angular correlation (cross terms) recently shown by several authors. This formulation is then used to calculate the practical case of a classical type of lattice composed of a moderator and a fuel element surrounded by an empty space. Analytical and numerical comparison of the expressions obtained with those inferred from the theory of BEHRENS shows up the importance of several new terms, some of which are linked with the transparency of the fuel element. Cross terms up to the second order are evaluated. A practical formulary is given at the end of the paper. (author)
NASA Astrophysics Data System (ADS)
Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.
2018-07-01
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
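One of the simplest property evaluations such a framework can encode, a lattice-constant scan with local refinement, might look like the sketch below. The toy energy function stands in for a real interatomic potential evaluation; the function names and parameters are illustrative, not part of the published framework.

```python
import numpy as np

def lattice_constant(energy_fn, a_min, a_max, n=41):
    """Scan the cohesive energy over lattice parameter a and refine the
    minimum with a local quadratic fit, one simple strategy a
    high-throughput property calculation might encode."""
    a = np.linspace(a_min, a_max, n)
    e = np.array([energy_fn(ai) for ai in a])
    i = e.argmin()
    # Parabola through the minimum and its neighbours; vertex at -b / (2c)
    c = np.polyfit(a[i - 1:i + 2], e[i - 1:i + 2], 2)
    return -c[1] / (2 * c[0])

# Toy Morse-like energy model standing in for a real potential evaluation
toy_energy = lambda a: (1 - np.exp(-1.5 * (a - 3.2))) ** 2 - 1.0
print(lattice_constant(toy_energy, 2.5, 4.0))  # ~3.2
```

Even this small example shows where method sensitivity can enter: the scan range, grid density and fitting window all influence the reported value.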
O’Brien, Denzil
2016-01-01
Simple Summary This paper examines a number of methods for calculating injury risk for riders in the equestrian sport of eventing, and suggests that the primary locus of risk is the action of the horse jumping, and the jump itself. The paper argues that risk calculation should therefore focus first on this locus. Abstract All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps. PMID:26891334
CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM
NASA Astrophysics Data System (ADS)
Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang
2014-06-01
Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation can be run, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have shortcomings: some are not accurate, or are adapted only to specific CAD formats. To convert complex CAD geometry models into GDML geometry models accurately, a CAD-based modeling method for Geant4 was developed. The essence of the method is translating between a CAD model represented by boundary representation (B-REP) and a GDML model represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that the method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.
Jank, Louise; Martins, Magda Targa; Arsand, Juliana Bazzan; Campos Motta, Tanara Magalhães; Hoff, Rodrigo Barcellos; Barreto, Fabiano; Pizzolato, Tânia Mara
2015-11-01
A fast and simple method for residue analysis of the antibiotic classes of macrolides (erythromycin, azithromycin, tylosin, tilmicosin and spiramycin) and lincosamides (lincomycin and clindamycin) was developed and validated for cattle, swine and chicken muscle and for bovine milk. Sample preparation consists of a liquid-liquid extraction (LLE) with acetonitrile, followed by liquid chromatography-electrospray-tandem mass spectrometry (LC-ESI-MS/MS) analysis, without the need for any additional clean-up steps. Chromatographic separation was achieved using a C18 column and a mobile phase composed of acidified acetonitrile and water. The method was fully validated according to the criteria of Commission Decision 2002/657/EC. Validation parameters such as limit of detection, limit of quantification, linearity, accuracy, repeatability, specificity, reproducibility, decision limit (CCα) and detection capability (CCβ) were evaluated. All calculated values met the established criteria. Reproducibility values, expressed as coefficients of variation, were all lower than 19.1%. Recoveries ranged from 60% to 107%. Limits of detection were from 5 to 25 µg kg⁻¹. The present method can be applied in routine analysis, with adequate analysis time, low cost and a simple sample preparation protocol. Copyright © 2015. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Lorenz-Meyer, W.
1977-01-01
In connection with the question of the applicability of test results obtained from cryogenic wind tunnels to the large-scale model, the similarity parameter is considered. A simple method is given for calculating the similarity parameter. From the numerical values obtained, it can be deduced that nitrogen behaves practically like an ideal gas when it is close to the saturation point and in a pressure range up to 4 bar. The influence of this parameter on the pressure distribution of a supercritical profile confirms this finding.
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical-equilibrium and model-atmosphere calculations under extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K; and log g = 2.9.
Recipes for free energy calculations in biomolecular systems.
Moradi, Mahmoud; Babin, Volodymyr; Sagui, Celeste; Roland, Christopher
2013-01-01
During the last decade, several methods for sampling phase space and calculating various free energies in biomolecular systems have been devised or refined for molecular dynamics (MD) simulations. Thus, state-of-the-art methodology and the ever increasing computer power allow calculations that were forbidden a decade ago. These calculations, however, are not trivial as they require knowledge of the methods, insight into the system under study, and, quite often, an artful combination of different methodologies in order to avoid the various traps inherent in an unknown free energy landscape. In this chapter, we illustrate some of these concepts with two relatively simple systems, a sugar ring and proline oligopeptides, whose free energy landscapes still offer considerable challenges. In order to explore the configurational space of these systems, and to surmount the various free energy barriers, we combine three complementary methods: a nonequilibrium umbrella sampling method (adaptively biased MD, or ABMD), replica-exchange molecular dynamics (REMD), and steered molecular dynamics (SMD). In particular, ABMD is used to compute the free energy surface of a set of collective variables; REMD is used to improve the performance of ABMD, to carry out sampling in space complementary to the collective variables, and to sample equilibrium configurations directly; and SMD is used to study different transition mechanisms.
Exposure assessment in health assessments for hand-arm vibration syndrome.
Mason, H J; Poole, K; Young, C
2011-08-01
Assessing past cumulative vibration exposure is part of assessing the risk of hand-arm vibration syndrome (HAVS) in workers exposed to hand-arm vibration and invariably forms part of a medical assessment of such workers. To investigate the strength of relationships between the presence and severity of HAVS and different cumulative exposure metrics obtained from a self-reporting questionnaire. Cumulative exposure metrics were constructed from a tool-based questionnaire applied in a group of HAVS referrals and workplace field studies. These metrics included simple years of vibration exposure, cumulative total hours of all tool use and differing combinations of acceleration magnitudes for specific tools and their daily use, including the current frequency-weighting method contained in ISO 5349-1:2001. Use of simple years of exposure is a weak predictor of HAVS or its increasing severity. The calculation of cumulative hours across all vibrating tools used is a more powerful predictor. More complex calculations based on involving likely acceleration data for specific classes of tools, either frequency weighted or not, did not offer a clear further advantage in this dataset. This may be due to the uncertainty associated with workers' recall of their past tool usage or the variability between tools in the magnitude of their vibration emission. Assessing years of exposure or 'latency' in a worker should be replaced by cumulative hours of tool use. This can be readily obtained using a tool-pictogram-based self-reporting questionnaire and a simple spreadsheet calculation.
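The two metrics the study contrasts, simple cumulative hours of tool use and an ISO 5349-1 style frequency-weighted exposure, are both simple spreadsheet-level calculations. The sketch below illustrates them with hypothetical questionnaire entries.

```python
import math

def cumulative_hours(tool_records):
    """Total lifetime hours of vibrating-tool use from questionnaire records:
    [(hours_per_day, days_per_year, years), ...]."""
    return sum(h * d * y for h, d, y in tool_records)

def daily_a8(tool_uses, t0_hours=8.0):
    """ISO 5349-1 style daily exposure A(8) from per-tool frequency-weighted
    accelerations a_hv (m/s^2) and daily trigger times (hours):
    A(8) = sqrt(sum_i a_i^2 * t_i / T0)."""
    return math.sqrt(sum(a * a * t for a, t in tool_uses) / t0_hours)

# Hypothetical worker: grinder 2 h/day, 200 days/yr, 10 yr; drill less often
print(cumulative_hours([(2.0, 200, 10), (0.5, 100, 5)]))  # lifetime hours
print(daily_a8([(5.0, 2.0), (12.0, 0.25)]))               # A(8) in m/s^2
```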
NASA Astrophysics Data System (ADS)
Hinton, Courtney; Punjabi, Alkesh; Ali, Halima
2008-11-01
The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were calculated analytically, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used. The mode numbers are (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. For this noise and these field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.
NASA Technical Reports Server (NTRS)
Emrich, Bill
2006-01-01
A simple method of estimating vehicle parameters appropriate for interplanetary travel can provide a useful tool for evaluating the suitability of particular propulsion systems to various space missions. Although detailed mission analyses for interplanetary travel can be quite complex, it is possible to derive fairly simple correlations which will provide reasonable trip time estimates to the planets. In the present work, it is assumed that a constant-thrust propulsion system propels a spacecraft on a round-trip mission having equidistant outbound and inbound legs, in which the spacecraft accelerates during the first portion of each leg of the journey and decelerates during the last portion. Comparisons are made with numerical calculations from low-thrust trajectory codes to estimate the range of applicability of the simplified correlations.
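Under the stated assumptions (constant thrust, equidistant legs, acceleration over the first half of each leg and deceleration over the second), the leg time follows from elementary kinematics: each half takes √(d/a) from rest, so t_leg = 2√(d/a). The sketch below is a toy version of such a correlation; the thrust, mass and distance are illustrative, and propellant mass loss is ignored.

```python
import math

def leg_time(distance_m: float, thrust_n: float, mass_kg: float) -> float:
    """Time for one leg: accelerate at constant thrust over the first half,
    decelerate over the second half, starting and ending at rest.
    Each half-distance d/2 takes sqrt(d/a), so t_leg = 2*sqrt(d/a)."""
    a = thrust_n / mass_kg            # constant mass assumed (no propellant loss)
    return 2.0 * math.sqrt(distance_m / a)

# Hypothetical leg: 0.5 AU each way, 100 kN thrust, 100 t vehicle
d = 0.5 * 1.496e11                    # 0.5 AU in metres
t_round_trip = 2 * leg_time(d, 1.0e5, 1.0e5)  # equidistant outbound/inbound legs
print(t_round_trip / 86400, "days")
```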
A simple method to calculate first-passage time densities with arbitrary initial conditions
NASA Astrophysics Data System (ADS)
Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig
2016-06-01
Numerous applications all the way from biology and physics to economics depend on the density of first crossings over a boundary. Motivated by the lack of general purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete time and non-smooth continuous time processes. We derive a closed form expression for the FPTD in z and Laplace-transform space to a boundary in one dimension. Two classes of problems are analysed in detail: discrete time symmetric random walks (Markovian) and continuous time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
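For the discrete-time symmetric random walk, one of the two classes analysed, the FPTD is easy to estimate empirically, which gives a direct check on the IIA result. The following sketch (walker counts and parameters are arbitrary) records first crossings of an absorbing boundary.

```python
import numpy as np

def fptd_random_walk(x0=5, boundary=0, n_walkers=100_000, t_max=500, seed=0):
    """Empirical first-passage time density of a discrete-time symmetric
    random walk started at x0, with an absorbing boundary at `boundary`."""
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, x0)
    alive = np.ones(n_walkers, dtype=bool)
    density = np.zeros(t_max)
    for t in range(1, t_max):
        x[alive] += 2 * rng.integers(0, 2, size=alive.sum()) - 1  # +/-1 steps
        crossed = alive & (x <= boundary)
        density[t] = crossed.sum() / n_walkers  # fraction first crossing at step t
        alive &= ~crossed
    return density

rho = fptd_random_walk()
print(rho[:10])  # to be compared against the (approximate) analytical FPTD
```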
A SENSITIVE METHOD FOR THE DETERMINATION OF CARBOXYHAEMOGLOBIN IN A FINGER PRICK SAMPLE OF BLOOD
Commins, B. T.; Lawther, P. J.
1965-01-01
About 0·01 ml. of blood taken from a finger prick is dissolved in 10 ml. of 0·04% ammonia solution. The solution is divided into two halves, and oxygen is bubbled through one half to convert any carboxyhaemoglobin into oxyhaemoglobin. The spectra of the two halves are then compared in a spectrophotometer, and the difference between them is used to estimate the carboxyhaemoglobin content of the blood either graphically or by calculation from a simple formula. Calibration is simple and need only be done once. A sample of blood can be analysed in about 20 minutes, which includes the time to collect the sample. The method is sensitive enough to be used for the analysis of solutions of blood containing less than 1% carboxyhaemoglobin. PMID:14278801
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
Wave vector modification of the infinite order sudden approximation
NASA Astrophysics Data System (ADS)
Sachs, Judith Grobe; Bowman, Joel M.
1980-10-01
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities P(ni→nf) for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn = |nf − ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data are not available for comparison.
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
A new exact method for line radiative transfer
NASA Astrophysics Data System (ADS)
Elitzur, Moshe; Asensio Ramos, Andrés
2006-01-01
We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the 'effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the C II 158-μm line but not by the ³P lines of O I.
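The flavour of the underlying algebra can be seen in the classical two-level problem. In the simplest escape-probability balance, collisional excitation is offset by collisional de-excitation plus radiative decays that escape; the CEP method keeps equations of this algebraic form but augments the escape probability with zone-coupling terms. The sketch below uses the common static-slab form β = (1 − e^(−τ))/τ as an assumed illustration, not the paper's exact expression.

```python
import numpy as np

def two_level_populations(A_ul, C_ul, C_lu, tau):
    """Level populations of a two-level atom when trapped line photons are
    handled with an escape probability beta(tau): collisional excitation is
    balanced by collisional de-excitation plus *escaping* radiative decay,
        n_l * C_lu = n_u * (C_ul + beta * A_ul).
    """
    beta = (1.0 - np.exp(-tau)) / tau if tau > 0 else 1.0  # one common choice
    ratio = C_lu / (C_ul + beta * A_ul)                    # n_u / n_l
    n_l = 1.0 / (1.0 + ratio)
    return n_l, 1.0 - n_l

print(two_level_populations(A_ul=1e-6, C_ul=1e-9, C_lu=5e-10, tau=100.0))
```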
Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations
Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán
2016-01-01
Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
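The four combination strategies are all weighted averages differing only in the weights. A minimal sketch of three of them is given below; the echo times, the 'bold_te' weight form, and the synthetic data are illustrative assumptions rather than the study's exact implementation.

```python
import numpy as np

def combine_echoes(echo1, echo2, te1=15.0, te2=35.0, method="average"):
    """Combine two echo time series (arrays of shape [time, voxels]).
    'average' -- simple arithmetic mean (the baseline the study favours)
    'tsnr'    -- weights proportional to each echo's temporal SNR
    'bold_te' -- weights proportional to TE * mean signal, a common proxy
                 for BOLD sensitivity (assumed form, for illustration)."""
    if method == "average":
        w1 = w2 = np.ones(echo1.shape[1])
    elif method == "tsnr":
        w1 = echo1.mean(0) / echo1.std(0)
        w2 = echo2.mean(0) / echo2.std(0)
    elif method == "bold_te":
        w1 = te1 * echo1.mean(0)
        w2 = te2 * echo2.mean(0)
    else:
        raise ValueError(method)
    return (w1 * echo1 + w2 * echo2) / (w1 + w2)

rng = np.random.default_rng(0)
e1, e2 = rng.normal(100, 2, (200, 10)), rng.normal(60, 3, (200, 10))
print(combine_echoes(e1, e2, method="tsnr").shape)
```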
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik
2015-09-15
Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated, for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: first, by assuming the I-value of water was an experimentally based value of 78 eV (value proposed in Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a consistent manner to that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni's or ICRU tables for I-values was up to 1.6%. Experimental data used for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square error: 0.8% versus 1.6%). Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a consistent manner to that of the tissue of interest (rather than using an experimentally derived value); (4) the ICRU Report 37 I-values may provide a better agreement with experiment than Janni's tables.
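Conclusion (1) is straightforward to exercise numerically. A sketch of an SPR evaluation from the Bethe formula without correction terms follows; the relative electron density, I-values and β² are representative placeholders, not values from the paper.

```python
import math

MEC2 = 0.511e6  # electron rest energy, eV

def spr(rho_e_rel: float, i_medium_ev: float, i_water_ev: float,
        beta2: float) -> float:
    """Stopping-power ratio (medium relative to water) from the Bethe
    formula without correction terms: SPR = rho_e * L_medium / L_water,
    with L = ln(2 me c^2 beta^2 / (I (1 - beta^2))) - beta^2."""
    L = lambda i: math.log(2 * MEC2 * beta2 / (i * (1 - beta2))) - beta2
    return rho_e_rel * L(i_medium_ev) / L(i_water_ev)

# ~150 MeV proton: beta^2 ~ 0.25 (rough value, for illustration)
print(spr(rho_e_rel=1.05, i_medium_ev=75.0, i_water_ev=78.0, beta2=0.25))
```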
Extremely simple holographic projection of color images
NASA Astrophysics Data System (ADS)
Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej
2012-03-01
A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane-wave illumination. Light fibers are used as light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; a strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (graphics processing unit).
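The phase-iterated Fourier holograms at the heart of the scheme can be generated with a Gerchberg-Saxton style loop, alternating FFTs between the hologram and image planes while enforcing the phase-only and target-amplitude constraints. The sketch below is a generic single-color version under that assumption; the paper's exact iteration, divergent-wavefront geometry and three-color multiplexing are not reproduced.

```python
import numpy as np

def phase_hologram(target_amplitude, n_iter=50, seed=0):
    """Iterated Fourier-transform (Gerchberg-Saxton style) design of a
    phase-only hologram whose far field approximates the target image."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))                # to image plane
        far = target_amplitude * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                             # back to hologram plane
        phase = np.angle(near)                               # keep phase only (SLM constraint)
    return phase

target = np.zeros((128, 128)); target[40:88, 40:88] = 1.0  # simple square image
holo = phase_hologram(target)
print(holo.shape, holo.min(), holo.max())
```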
NASA Technical Reports Server (NTRS)
Defacio, B.; Vannevel, Alan; Brander, O.
1993-01-01
A formulation is given for a collection of phonons (sound) in a fluid at non-zero temperature which uses the simple harmonic oscillator twice: once to give a stochastic thermal 'noise' process and once to generate a coherent Glauber state of phonons. Simple thermodynamic observables are calculated and the acoustic two-point function, the 'contrast', is presented. The role of 'coherence' in an equilibrium system is clarified by these results, and the simple harmonic oscillator is a key structure in both the formulation and the calculations.
Heat Transfer to Surfaces of Finite Catalytic Activity in Frozen Dissociated Hypersonic Flow
NASA Technical Reports Server (NTRS)
Chung, Paul M.; Anderson, Aemer D.
1961-01-01
The heat transfer due to catalytic recombination of a partially dissociated diatomic gas along the surfaces of two-dimensional and axisymmetric bodies with finite catalytic efficiencies is studied analytically. An integral method is employed, resulting in simple yet relatively complete solutions for the particular configurations considered. A closed-form solution is derived which enables one to calculate the atom mass-fraction distribution, and therefore the catalytic heat transfer distribution, along the surface of a flat plate in frozen compressible flow with and without transpiration. Numerical calculations are made to determine the atom mass-fraction distribution along an axisymmetric conical body with a spherical nose in frozen hypersonic compressible flow. A simple solution based on a local similarity concept is found to be in good agreement with these numerical calculations. The conditions are given for which the local similarity solution is expected to be satisfactory. The limitations on the practical application of the analysis to the flight of blunt bodies in the atmosphere are discussed. The use of boundary-layer theory and the assumption of frozen flow restrict application of the analysis to altitudes between about 150,000 and 250,000 feet.
A simple method for the extraction and identification of light density microplastics from soil.
Zhang, Shaoliang; Yang, Xiaomei; Gertsen, Hennie; Peters, Piet; Salánki, Tamás; Geissen, Violette
2018-03-01
This article introduces a simple and cost-saving method developed to extract, distinguish and quantify light density microplastics of polyethylene (PE) and polypropylene (PP) in soil. A floatation method using distilled water was used to extract the light density microplastics from soil samples. Microplastics and impurities were identified using a heating method (3-5 s at 130 °C). The number and size of particles were determined using a camera (Leica DFC 425) connected to a microscope (Leica Wild M3C, Type S, simple light, 6.4×). Quantification of the microplastics was conducted using a developed model. Results showed that the floatation method was effective in extracting microplastics from soils, with recovery rates of approximately 90%. After being exposed to heat, the microplastics in the soil samples melted and were transformed into circular transparent particles, while other impurities, such as organic matter and silicates, were not changed by the heat. Regression analysis of microplastic weight against particle volume (calculated using ImageJ software analysis) after heating showed the best fit (y = 1.14x + 0.46, R² = 99%, p < 0.001). Recovery rates based on the empirical model method were >80%. Results from field samples collected from north-western China prove that our method of repetitive floatation and heating can be used to extract, distinguish and quantify light density polyethylene microplastics in soils. Microplastic mass can be evaluated using the empirical model. Copyright © 2017 Elsevier B.V. All rights reserved.
Nguyen, Phuong H; Derreumaux, Philippe
2012-01-14
One challenge in computational biophysics and biology is to develop methodologies able to estimate accurately the configurational entropy of macromolecules. Among many methods, the quasiharmonic approximation (QH) is most widely used as it is simple in both theory and implementation. However, it has been shown that this method becomes inaccurate by overestimating entropy for systems with rugged free energy landscapes. Here, we propose a simple method to improve the QH approximation, i.e., to reduce QH entropy. We approximate the potential energy landscape of the system by an effective harmonic potential, and request that this potential must produce exactly the configurational temperature of the system. Due to this constraint, the force constants associated with the effective harmonic potential are increased, or equivalently, entropy of motion governed by this effective harmonic potential is reduced. We also introduce the effective configurational temperature concept which can be used as an indicator to check the anharmonicity of the free energy landscape. To validate the new method we compare it with the recently developed expansion approximate method by calculating entropy of one simple model system and two peptides with 3 and 16 amino acids either in gas phase or in explicit solvent. We show that the new method appears to be a good choice in practice as it is a compromise between accuracy and computational speed. A modification of the expansion approximate method is also introduced and advantages are discussed in some detail.
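For reference, the baseline the authors improve upon, quasiharmonic entropy from a simulation's covariance matrix, can be sketched as below. The Andricioaei-Karplus quantum-oscillator form is used as an assumed illustration, and the toy covariance values are placeholders; the paper's effective-harmonic correction would stiffen the force constants and lower this estimate.

```python
import numpy as np

KB = 1.380649e-23       # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def qh_entropy(mass_weighted_cov, temperature=300.0):
    """Quasiharmonic entropy (Andricioaei-Karplus form): eigenvalues of the
    mass-weighted covariance matrix define effective frequencies
    w_i = sqrt(kB*T / lambda_i), and each mode contributes the entropy of a
    quantum harmonic oscillator."""
    lam = np.linalg.eigvalsh(mass_weighted_cov)
    lam = lam[lam > lam.max() * 1e-9]        # drop numerically null modes
    omega = np.sqrt(KB * temperature / lam)
    x = HBAR * omega / (KB * temperature)
    return KB * np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))

# Toy 3x3 mass-weighted covariance (kg*m^2), standing in for MD output
cov = np.diag([2e-41, 3e-41, 4e-41])
print(qh_entropy(cov), "J/K")
```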
Acoustic band gaps of the woodpile sonic crystal with the simple cubic lattice
NASA Astrophysics Data System (ADS)
Wu, Liang-Yu; Chen, Lien-Wen
2011-02-01
This study theoretically and experimentally investigates the acoustic band gap of a three-dimensional woodpile sonic crystal. Such crystals are built from blocks or rods that are orthogonally stacked together, with adjacent layers perpendicular to each other. The woodpile structure is embedded in an air background. The band structures and transmission spectra are calculated using the finite element method with a periodic boundary condition. The dependence of the band gap on the width of the stacked rods is discussed. The deaf bands in the band structure are identified by comparison with the calculated transmission spectra. The experimental transmission spectra for the Γ-X and Γ-X' directions are also presented. The calculated results are compared with the experimental results.
Aerodynamic heating and surface temperatures on vehicles for computer-aided design studies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.; Kania, L. A.; Chitty, A.
1983-01-01
A computer subprogram has been developed to calculate aerodynamic and radiative heating rates and to determine surface temperatures by integrating the heating rates along the trajectory of a vehicle. Convective heating rates are calculated by applying the axisymmetric analogue to inviscid surface streamlines and using relatively simple techniques to calculate laminar, transitional, or turbulent heating rates. Options are provided for the selection of gas model, transition criterion, turbulent heating method, Reynolds analogy factor, and entropy-layer swallowing effects. Heating rates are compared to experimental data, and the time history of surface temperatures is given for a high-speed trajectory. The computer subprogram is developed for preliminary design and mission analysis where parametric studies are needed at all speeds.
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong
2016-05-01
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscope industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data with complex mathematical models, which makes them difficult to understand. Some commonly used distortion evaluation methods, such as picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on the method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co-60 source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
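A minimal sketch of the count-map inversion idea under a simple geometric response model (inverse-square falloff with detector height, with efficiency folded into one constant). The grid, source strength, and response model here are illustrative assumptions, not the authors' calibrated setup.

```python
import numpy as np

def response_matrix(det_xy, src_xy, height, eff=1.0):
    """R[i, j]: count rate at detector position i per unit activity at
    grid point j, for a point-source inverse-square model."""
    d = det_xy[:, None, :] - src_xy[None, :, :]
    r2 = (d ** 2).sum(axis=-1) + height ** 2
    return eff / (4.0 * np.pi * r2)

# measurements over a 5 x 5 grid, detector 0.3 m above the surface
grid = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
R = response_matrix(grid, grid, height=0.3)

true_src = np.zeros(len(grid)); true_src[12] = 1.0e5    # 100 kBq at the centre
counts = R @ true_src                                    # forward model
recovered = np.linalg.lstsq(R, counts, rcond=None)[0]    # inversion step
print(recovered.argmax())                                # -> 12
```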
Peng, Lian-Xin; Wang, Jing-Bo; Hu, Li-Xue; Zhao, Jiang-Lin; Xiang, Da-Bing; Zou, Liang; Zhao, Gang
2013-01-30
A simple and rapid method for determining emodin, an active component present in tartary buckwheat (Fagopyrum tataricum), by high-performance liquid chromatography coupled to a diode array detector (HPLC-DAD) has been developed. Emodin was separated from an extract of buckwheat on a Kromasil-ODS C(18) (250 mm × 4.6 mm × 5 μm) column. The separation is achieved within 15 min on the ODS column. Emodin can be quantified using an external standard method with detection at 436 nm. Good linearity is obtained, with a correlation coefficient exceeding 0.9992. The limit of detection and the limit of quantification are 5.7 and 19 μg/L, respectively. This method shows good reproducibility for the quantification of emodin, with a relative standard deviation of 4.3%. Under optimized extraction conditions, the recovery of emodin was calculated as >90%. The validated method is successfully applied to quantify emodin in tartary buckwheat and its products.
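For context, a generic external-standard workflow of the kind described: fit a calibration line of peak area against standard concentration, quantify an unknown from its peak area, and estimate LOD/LOQ from the residual scatter. The numbers are invented for illustration, and the 3.3σ/slope and 10σ/slope rules are common conventions, not necessarily the authors' exact procedure.

```python
import numpy as np

conc = np.array([10., 50., 100., 500., 1000.])            # standards, ug/L
area = np.array([1.2e3, 6.1e3, 1.19e4, 6.02e4, 1.21e5])   # peak areas

slope, intercept = np.polyfit(conc, area, 1)              # calibration line
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)                                 # residual SD, n-2 dof
lod = 3.3 * sigma / slope                                 # limit of detection
loq = 10.0 * sigma / slope                                # limit of quantification

unknown_area = 7.5e3
print((unknown_area - intercept) / slope)                 # sample conc., ug/L
```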
Gradient optimization of finite projected entangled pair states
NASA Astrophysics Data System (ADS)
Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin
2017-05-01
Projected entangled pair states (PEPS) methods have been proven to be powerful tools to solve strongly correlated quantum many-body problems in two dimensions. However, due to the high computational scaling with the virtual bond dimension D, in practical applications PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimization of the ground state using the imaginary time evolution method with a simple update scheme can reach larger bond dimensions; however, the accuracy of the rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that combining the imaginary time evolution method with a simple update scheme, Monte Carlo sampling techniques, and gradient optimization offers an efficient method to calculate the PEPS ground state. By taking advantage of massive parallel computing, we can study quantum systems with larger bond dimensions, up to D = 10, without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.
NASA Technical Reports Server (NTRS)
Bhandari, P.; Wu, Y. C.; Roschke, E. J.
1989-01-01
A simple solar flux calculation algorithm for a cylindrical cavity type solar receiver has been developed and implemented on an IBM PC-AT. Using cone optics, the contour error method is utilized to handle the slope error of a paraboloidal concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axisymmetric receiver geometries.
Maloney, Andrew G. P.; Wood, Peter A.
2016-01-01
PIXEL has been used to perform calculations of adsorbate-adsorbent interaction energies between a range of metal–organic frameworks (MOFs) and simple guest molecules. Interactions have been calculated for adsorption between MOF-5 and Ar, H2, and N2; Zn2(BDC)2(TED) (BDC = 1,4-benzenedicarboxylic acid, TED = triethylenediamine) and H2; and HKUST-1 and CO2. The locations of the adsorption sites and the calculated energies, which show differences in the Coulombic or dispersion characteristic of the interaction, compare favourably to experimental data and literature energy values calculated using density functional theory. PMID:28496380
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, J; Zhang, Y; Zheng, Y
2015-06-15
Purpose: Spine hardware made of high-Z materials such as titanium has the potential to affect the dose distribution around the metal rods in CyberKnife spinal stereotactic radiosurgery (SRS) treatments. The purpose of this work was to evaluate the magnitude of this effect retrospectively for clinical CyberKnife plans. Methods: The dose calculation was performed within the MultiPlan treatment planning system using the ray tracing (RT) and Monte Carlo (MC) methods. A custom density model was created by extending the CT-to-density table to the titanium density of 4.5 g/cm³ at a CT number of 4095. To understand the dose perturbation caused by the titanium rod, a simple beam setup (7.5 mm IRIS collimator) was used to irradiate a mimic rod (5 mm) with overridden high density. Five patient spinal SRS cases were found chronologically from 2010 to 2015 in our institution. For each case, the hardware was contoured manually. The original plan was re-calculated using both RT and MC methods with and without rod density override, without changing clinical beam parameters. Results: The simple beam irradiation shows that there is a 10% dose increase at the interface because of electron backscattering and a 7% decrease behind the rod because of photon attenuation. For actual clinical plans, the iso-dose lines and DVHs are almost identical (<2%) for calculations with and without density override for both RT and MC methods. However, there is a difference of more than 10% for D90 between the RT and MC methods. Conclusion: Although the dose perturbation around the metal rods can be as large as 10% for a single beam irradiation, for clinical treatments with complex beam composition the effect of spinal hardware on the PTV and spinal dose is minimal. As such, the MC dose algorithm without rod density override for CyberKnife spinal SRS is acceptable.
NASA Astrophysics Data System (ADS)
Johari, A. H.; Muslim
2018-05-01
An experiential learning model using a simple physics kit has been implemented to obtain a picture of the improvement in senior high school students' attitudes toward physics on the topic of fluids. This study aims to obtain a description of the increase in senior high school students' attitudes toward physics. The research method used was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved in this research: an experimental class of 28 students and a control class of 26 students. The increase in students' attitudes toward physics was measured using an attitude scale consisting of 18 questions. The experimental class averaged 86.5%, meeting the criterion of almost all students showing an increase, while the control class averaged 53.75%, corresponding to half of the students. This result shows that the experiential learning model using a simple physics kit can improve attitudes toward physics compared to experiential learning without the kit.
Taimoory, S Maryamdokht; Sadraei, S Iraj; Fayoumi, Rose Anne; Nasri, Sarah; Revington, Matthew; Trant, John F
2018-04-20
The reaction between furans and maleimides has increasingly become a method of interest, as its reversibility makes it a useful tool for applications ranging from self-healing materials, to self-immolative polymers, to hydrogels for cell culture and for bone repair. However, most of these applications have relied on simple monosubstituted furans and simple maleimides and have not extensively evaluated the thermal variability inherent in the process that is achievable through simple substrate modification. A small library of cycloadducts suitable for the above applications was prepared, and the temperature dependence of the retro-Diels-Alder processes was determined through in situ ¹H NMR analyses complemented by computational calculations. The practical range of the reported systems spans 40 to >110 °C. The cycloreversion reactions are more complex than would be expected from simple trends based on frontier molecular orbital analyses of the materials.
Gjerde, Hallvard; Verstraete, Alain
2010-02-25
To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracies than simple calculations if good data on OF/B ratios are available. If using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimation of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy. 2009 Elsevier Ireland Ltd. All rights reserved.
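The two estimation styles compared can be sketched in a few lines: simple division by a central OF/B ratio (or regression coefficient), and a Monte Carlo that resamples measured ratios per subject. All numbers below are invented placeholders, not Rosita-2 data, and the median ratio stands in for the regression coefficient.

```python
import numpy as np
rng = np.random.default_rng(1)

of_conc = rng.lognormal(3.0, 1.0, 311)    # oral-fluid concentrations, ng/mL (invented)
ofb_ratios = rng.lognormal(2.0, 0.6, 20)  # 20 measured OF/B ratios (invented)
cutoff = 3.0                              # "high" blood concentration cutoff, ng/mL

# Simple method: divide by one central OF/B value
blood_est = of_conc / np.median(ofb_ratios)
prev_simple = np.mean(blood_est > cutoff)

# Monte Carlo: resample an OF/B ratio for each subject in each iteration
draws = rng.choice(ofb_ratios, size=(1000, of_conc.size))
prev_mc = np.mean(of_conc / draws > cutoff, axis=1)
print(prev_simple, prev_mc.mean())
```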
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
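A toy version of the random-walk graph kernel in its common geometric form: walks on the direct-product graph of two adjacency matrices, summed over all lengths in closed form. This illustrates the kernel family the abstract refers to, not the paper's exact GRAPE descriptor; the decay weight `lam` must stay below the reciprocal spectral radius of the product graph for the series to converge.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random-walk kernel between two adjacency matrices:
    k = 1^T (I - lam * A1 (x) A2)^(-1) 1, which sums matching walks of
    every length on the direct-product graph."""
    Ax = np.kron(A1, A2)                    # product-graph adjacency
    n = Ax.shape[0]
    ones = np.ones(n)
    return ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones)

# two small "local environments" as adjacency matrices
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 1], [1, 0]], dtype=float)
print(random_walk_kernel(A1, A2))
```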
Comparative study of the pentamodal property of four potential pentamode microstructures
NASA Astrophysics Data System (ADS)
Huang, Yan; Lu, Xuegang; Liang, Gongying; Xu, Zhuo
2017-03-01
In this paper, a numerical comparative study is presented on the pentamodal property of four potential pentamode microstructures (three based on simple cubic and one on body-centered cubic structures) based on phonon band calculations. The finite-element method is employed to calculate the band structures, and two essential factors, the ratio of bulk modulus B to shear modulus G and the single-mode band gap (SBG), are analyzed to quantitatively evaluate the pentamodal property. The results show that all four structures possess a higher B/G ratio than traditional materials. One of the simple cubic structures exhibits an incomplete SBG, while the other three structures exhibit complete SBGs that decouple the compression and shear waves in all propagation directions. Further parametric analyses are presented to investigate the effects of geometrical and material parameters on the pentamodal property of these structures. This study provides guidelines for the future design of novel pentamode microstructures possessing a high B/G ratio and a low-frequency broadband SBG.
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
The optical and structural properties of graphene nanosheets and tin oxide nanocrystals composite
NASA Astrophysics Data System (ADS)
Farheen, Parveen, Azra; Azam, Ameer
2018-05-01
A nanocomposite material consisting of metal oxide and reduced graphene oxide was prepared via a simple, economical, and effective chemical reduction method. The synthesis strategy was based on the reduction of GO with Sn2+ ions, combining tin oxidation and GO reduction in one step, which provides a simple, low-cost and effective way to prepare graphene nanosheet/SnO2 nanocrystal composites because no additional chemicals were needed. SEM and TEM images show the uniform distribution of the SnO2 nanocrystals on the graphene nanosheet (GNs) surface, with an average particle size of 2-4 nm. The mean crystallite size was calculated by the Debye-Scherrer formula and was found to be about 4.0 nm. Optical analysis was done using UV-Visible spectroscopy, and the band gap energy of the GNs/SnO2 nanocomposite, calculated by the Tauc relation, came out to be 3.43 eV.
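Both derived quantities in this abstract come from short, standard calculations: the Debye-Scherrer size from an XRD peak width, and the Tauc gap from a linear extrapolation of (αhν)² versus hν (direct-gap form). The peak width, Bragg angle, and absorption points below are assumptions chosen to land near the reported ~4 nm and 3.43 eV, not the authors' measured data.

```python
import numpy as np

# Debye-Scherrer: D = K * lambda / (beta * cos(theta))
K, wavelength = 0.9, 1.5406e-10            # shape factor; Cu K-alpha, m
beta = np.deg2rad(2.2)                      # peak FWHM in 2-theta, rad (assumed)
theta = np.deg2rad(26.6 / 2.0)              # Bragg angle (assumed peak position)
D = K * wavelength / (beta * np.cos(theta))
print(D * 1e9, "nm")                        # ~3.7 nm

# Tauc plot: extrapolate the linear region of (alpha*h*nu)^2 to zero
hnu = np.array([3.4, 3.6, 3.8, 4.0])        # photon energy, eV (assumed)
ahv2 = np.array([0.0, 0.6, 1.3, 2.0])       # (alpha*h*nu)^2, a.u. (assumed)
m, c = np.polyfit(hnu[1:], ahv2[1:], 1)     # fit the linear edge region
print("Eg ~", -c / m, "eV")                 # ~3.43 eV
```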
Decision support system of e-book provider selection for library using Simple Additive Weighting
NASA Astrophysics Data System (ADS)
Ciptayani, P. I.; Dewi, K. C.
2018-01-01
Each library has its own criteria, and the importance of each criterion differs, when choosing an e-book provider. The large number of providers and the differing importance levels of each criterion make the problem of determining an e-book provider complex and time-consuming. The aim of this study was to implement a decision support system (DSS) to assist the library in selecting the best e-book provider based on its preferences. The DSS works by comparing the importance of each criterion and the condition of each alternative decision. SAW (Simple Additive Weighting) is a DSS method that is quite simple, fast, and widely used. This study used 9 criteria and 18 providers to demonstrate how SAW works. With the DSS, decision-making time can be shortened and the calculation results can be more accurate than manual calculations.
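A compact illustration of the SAW scoring step: normalize each criterion column, apply the importance weights, and rank alternatives by the weighted sum. The criteria, weights, and scores below are invented placeholders, not the study's 9 criteria and 18 providers. One pass of normalization and one matrix-vector product is what makes SAW fast.

```python
import numpy as np

# rows: candidate providers; columns: criteria (invented examples)
X = np.array([[7500., 4., 0.90],     # e.g. titles offered, license terms, uptime
              [5200., 5., 0.80],
              [9100., 3., 0.95]])
w = np.array([0.5, 0.3, 0.2])        # criterion importance weights, sum to 1
benefit = np.array([True, True, True])   # "larger is better" for each column

# SAW normalization: benefit criteria x / max(x); cost criteria min(x) / x
norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
scores = norm @ w                     # weighted sum per provider
print("best provider:", scores.argmax(), scores)
```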
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan
2014-07-01
Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) from which features are extracted. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.
NASA Astrophysics Data System (ADS)
Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc
2016-12-01
Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathways of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the sampling converges. The selection is conducted so that the protein conformation gradually approaches a target. The selection of snapshots is key to enhancing conformational change by increasing the probability of rare-event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selection of initial structures and the restarting of the next cycle of MD simulations can be handled with relatively simple, straightforwardly implemented scripts. Trajectories generated by PaCS-MD were further analyzed with the Markov state model (MSM), which enables calculation of the free energy landscape. The combination of PaCS-MD and MSM is reported in this work.
NASA Astrophysics Data System (ADS)
Dinh, Thanh Vu; Cabon, Béatrice; Daoud, Nahla; Chilo, Jean
1992-11-01
This paper presents a simple and efficient method for calculating the propagation parameters of a transmission line (here, a microstrip line) and the magnetic fields it generates, by simulating an original equivalent circuit with an electrical nodal simulator (SPICE). The losses in a normal conducting line (due to DC losses and skin-effect losses) and also in a superconducting one can be investigated. This allows us to integrate the electromagnetic solutions into CAD software.
On HPM approximation for the perihelion precession angle in general relativity
NASA Astrophysics Data System (ADS)
Shchigolev, Victor; Bezbatko, Dmitrii
2017-03-01
In this paper, the homotopy perturbation method (HPM) is applied for calculating the perihelion precession angle of planetary orbits in General Relativity. The HPM is quite efficient and is practically well suited for use in many astrophysical and cosmological problems. For our purpose, we applied HPM to the approximate solutions for the orbits in order to calculate the perihelion shift. On the basis of the main idea of HPM, we construct the appropriate homotopy, which reduces the problem to solving a set of linear algebraic equations. As a result, we obtain a simple formula for the angle of precession avoiding any restrictions on the smallness of physical parameters. First of all, we consider the simple examples of the Schwarzschild metric and the Reissner–Nordström spacetime of a charged star, for which the approximate geodesic solutions are known. Furthermore, the implementation of HPM has allowed us to readily obtain the precession angle for orbits in the gravitational field of a Kiselev black hole.
Simplified analysis about horizontal displacement of deep soil under tunnel excavation
NASA Astrophysics Data System (ADS)
Tian, Xiaoyan; Gu, Shuancheng; Huang, Rongbin
2017-11-01
Most domestic scholars have focused on the law of soil settlement caused by subway tunnel excavation; studies on the law of horizontal displacement are lacking, and horizontal displacement data at depth are difficult to obtain in practice. At present, there are many formulas for calculating the settlement of soil layers. Compared with integral solutions based on Mindlin's classical elastic theory, stochastic medium theory, and source-sink theory, the Peck empirical formula is relatively simple and widely applicable in domestic practice. Considering the incompressibility of rock and soil mass, and based on the principle of plane strain, the calculation formula for the horizontal displacement of the soil along the cross section of the tunnel was derived using the Peck settlement formula. The applicability of the formula is verified by comparison with existing engineering cases, and a simple and rapid analytical method for predicting the horizontal displacement is presented.
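A sketch of the Peck-based approach: the vertical settlement trough S(x), plus a horizontal displacement estimate under the common assumption (due to O'Reilly and New) that ground movement vectors point toward the tunnel axis. This is a generic reconstruction of the idea, not the paper's exact derivation; the parameter values are illustrative.

```python
import numpy as np

def peck_settlement(x, s_max, i):
    """Peck trough: S(x) = S_max * exp(-x^2 / (2 i^2)), with i the
    trough-width parameter and x the distance from the centerline."""
    return s_max * np.exp(-x**2 / (2.0 * i**2))

def horizontal_displacement(x, s_max, i, z0):
    """u(x) = (x / z0) * S(x): surface movement directed toward the
    tunnel axis at depth z0 (vectors-toward-axis assumption)."""
    return x / z0 * peck_settlement(x, s_max, i)

x = np.linspace(-30, 30, 121)     # distance from tunnel centerline, m
u = horizontal_displacement(x, s_max=0.02, i=10.0, z0=20.0)  # m
```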
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...
2017-06-01
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of the percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
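A sketch of how an extended excluded-area argument can be turned into a number for the 2D binary disk system, under the common assumption that the mean number of excluded-area overlaps per disk at threshold, B_c ≈ 4.5 (the known monodisperse 2D continuum value), stays approximately invariant. This is a reconstruction of the general idea, not the authors' exact expression.

```python
import numpy as np

B_C = 4.5   # assumed invariant: mean overlaps per disk at percolation (2D)

def critical_density(radii, fractions):
    """Total number density at percolation for a disk mixture.
    radii: disk radii; fractions: number fractions summing to 1."""
    r = np.asarray(radii, dtype=float)
    x = np.asarray(fractions, dtype=float)
    # pairwise excluded area between disks i and j: pi * (r_i + r_j)^2
    a_ex = np.pi * (r[:, None] + r[None, :]) ** 2
    mean_excluded = x @ a_ex @ x          # composition-averaged excluded area
    return B_C / mean_excluded

print(critical_density([1.0, 0.25], [0.5, 0.5]))
```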
Millimeter wave satellite communication studies. Results of the 1981 propagation modeling effort
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Tsolakis, A.; Dishman, W. K.
1982-01-01
Theoretical modeling associated with rain effects on millimeter wave propagation is detailed. Three areas of work are discussed. A simple model for prediction of rain attenuation is developed and evaluated. A method for computing scattering from single rain drops is presented. A complete multiple scattering model is described which permits accurate calculation of the effects on dual polarized signals passing through rain.
Infrared zone-scanning system.
Belousov, Aleksandr; Popov, Gennady
2006-03-20
Challenges encountered in designing an infrared viewing optical system that uses a small linear detector array based on a zone-scanning approach are discussed. Scanning is performed by a rotating refractive polygon prism with tilted facets, which, along with high-speed line scanning, makes the scanning gear as simple as possible. A method of calculation of a practical optical system to compensate for aberrations during prism rotation is described.
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module
NASA Astrophysics Data System (ADS)
Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge
2017-04-01
A proof of concept of using a thermoelectric module to measure both the thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area under an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side using thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful for screening purposes or high-throughput measurements in its current state. This new method establishes a new application for thermoelectric modules as thermal property sensors. It involves a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the apparatus currently available on the market. In addition, impedance analyzers are reliable and widespread equipment, which facilitates the sometimes difficult access to thermal conductivity facilities.
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.
1994-01-01
Verification and validation of the basic information capabilities in NASCRAC have been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations, such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. Simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated against handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated against published results and finite element analyses. A few minor problems were identified in the basic information capabilities for the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front, whereas the reference solutions were computed for a single point.
Localized diabatization applied to excitons in molecular crystals
NASA Astrophysics Data System (ADS)
Jin, Zuxin; Subotnik, Joseph E.
2017-06-01
Traditional ab initio electronic structure calculations of periodic systems yield delocalized eigenstates that should be understood as adiabatic states. For example, excitons are bands of extended states which superimpose localized excitations on every lattice site. However, in general, in order to study the effects of nuclear motion on exciton transport, it is standard to work with a localized description of excitons, especially in a hopping regime; even in a band regime, a localized description can be helpful. To extract localized excitons from a band requires essentially a diabatization procedure. In this paper, three distinct methods are proposed for such localized diabatization: (i) a simple projection method, (ii) a more general Pipek-Mezey localization scheme, and (iii) a variant of Boys diabatization. Approaches (i) and (ii) require localized, single-particle Wannier orbitals, while approach (iii) has no such dependence. These methods should be very useful for studying energy transfer through solids with ab initio calculations.
Doran, Kara S.; Howd, Peter A.; Sallenger,, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify sea-level (SL) acceleration, a_SL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust than the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying a_SL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SL rise resulting from anticipated climate change.
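A sketch of the windowed-regression idea: fit a quadratic to the tide-gage record inside a moving window and read the acceleration off the quadratic coefficient. Window length, units, and edge handling are assumptions of this sketch, not the study's exact implementation.

```python
import numpy as np

def sliding_acceleration(t, h, window):
    """Time-varying acceleration a_SL(t) from a sea-level record h(t):
    fit h = c0 + c1*t + c2*t^2 in each window; acceleration = 2*c2.
    window: odd number of samples per fit."""
    half = window // 2
    acc = np.full(t.size, np.nan)         # edges left undefined
    for k in range(half, t.size - half):
        ts = t[k - half:k + half + 1] - t[k]   # center time for conditioning
        c2, c1, c0 = np.polyfit(ts, h[k - half:k + half + 1], 2)
        acc[k] = 2.0 * c2                      # second derivative of the fit
    return acc

t = np.arange(1900, 2011, dtype=float)         # years (illustrative)
h = 2.0 * (t - 1900) + 0.01 * (t - 1900) ** 2  # mm, synthetic record
print(np.nanmean(sliding_acceleration(t, h, 31)))  # ~0.02 mm/yr^2
```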
Preliminary design methods for fiber reinforced composite structures employing a personal computer
NASA Technical Reports Server (NTRS)
Eastlake, C. N.
1986-01-01
The objective of this project was to develop a user-friendly interactive computer program to be used as an analytical tool by structural designers. Its intent was to do preliminary, approximate stress analysis to help select or verify sizing choices for composite structural members. The approach to the project was to provide a subroutine which uses classical lamination theory to predict an effective elastic modulus for a laminate of arbitrary material and ply orientation. This effective elastic modulus can then be used in a family of other subroutines which employ the familiar basic structural analysis methods for isotropic materials. This method is simple and convenient to use but only approximate, as is appropriate for a preliminary design tool which will be subsequently verified by more sophisticated analysis. Additional subroutines have been provided to calculate laminate coefficient of thermal expansion and to calculate ply-by-ply strains within a laminate.
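A compact sketch of the core computation such a subroutine performs under classical lamination theory: build each ply's reduced stiffness Q, rotate it to the laminate axes, sum into the in-plane stiffness matrix A, and extract an effective modulus. The material constants and layup below are illustrative carbon/epoxy-like values, not the program's actual interface or data.

```python
import numpy as np

def q_matrix(e1, e2, g12, v12):
    """Reduced stiffness of a unidirectional ply in material axes."""
    v21 = v12 * e2 / e1
    d = 1.0 - v12 * v21
    return np.array([[e1 / d, v12 * e2 / d, 0.0],
                     [v12 * e2 / d, e2 / d, 0.0],
                     [0.0, 0.0, g12]])

def qbar(q, theta):
    """Rotate Q to laminate axes: Qbar = T^-1 Q R T R^-1."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])          # engineering-strain (Reuter) matrix
    return np.linalg.inv(T) @ q @ R @ T @ np.linalg.inv(R)

plies = [0, 45, -45, 90, 90, -45, 45, 0]  # symmetric layup, degrees
t_ply = 0.125e-3                           # ply thickness, m
q = q_matrix(140e9, 10e9, 5e9, 0.3)        # E1, E2, G12 (Pa), v12

A = sum(qbar(q, np.deg2rad(a)) for a in plies) * t_ply
h = t_ply * len(plies)
Ex = 1.0 / (h * np.linalg.inv(A)[0, 0])    # effective laminate modulus
print(Ex / 1e9, "GPa")
```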
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
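The fitting step mentioned at the end is simple enough to show directly: given simulated pairs (d, P_L) at fixed p, the decay rate α(p) follows from a linear fit in log space. The numbers below are invented placeholders, not the paper's data.

```python
import numpy as np

d = np.array([5, 9, 13, 17])                  # code distances (illustrative)
pl = np.array([3e-3, 2e-4, 1.4e-5, 9e-7])     # simulated P_L at fixed p

slope, intercept = np.polyfit(d, np.log(pl), 1)
alpha = -slope                                 # P_L ~ exp(-alpha * d)
print(alpha)
```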
Calculated and measured fields in superferric wiggler magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, E.B.; Solomon, L.
1995-02-01
Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.
Cosmological perturbation theory using the FFTLog: formalism and connection to QFT loop integrals
NASA Astrophysics Data System (ADS)
Simonović, Marko; Baldauf, Tobias; Zaldarriaga, Matias; Carrasco, John Joseph; Kollmeier, Juna A.
2018-04-01
We present a new method for calculating loops in cosmological perturbation theory. This method is based on approximating a ΛCDM-like cosmology as a finite sum of complex power-law universes. The decomposition is naturally achieved using an FFTLog algorithm. For power-law cosmologies, all loop integrals are formally equivalent to loop integrals of massless quantum field theory. These integrals have analytic solutions in terms of generalized hypergeometric functions. We provide explicit formulae for the one-loop and the two-loop power spectrum and the one-loop bispectrum. A chief advantage of our approach is that the difficult part of the calculation is cosmology independent, need be done only once, and can be recycled for any relevant predictions. Evaluation of standard loop diagrams then boils down to a simple matrix multiplication. We demonstrate the promise of this method for applications to higher multiplicity/loop correlation functions.
Calculation of smooth potential energy surfaces using local electron correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mata, Ricardo A.; Werner, Hans-Joachim
2006-11-14
The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl⁻ with alkyl chlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
Gutzwiller renormalization group
Lanatà, Nicola; Yao, Yong -Xin; Deng, Xiaoyu; ...
2016-01-06
We develop a variational scheme called the "Gutzwiller renormalization group" (GRG), which enables us to calculate the ground state of Anderson impurity models (AIM) with arbitrary numerical precision. Our method exploits the low-entanglement property of the ground state of local Hamiltonians in combination with the framework of the Gutzwiller wave function and indicates that the ground state of the AIM has a very simple structure, which can be represented very accurately in terms of a surprisingly small number of variational parameters. Furthermore, we perform benchmark calculations of the single-band AIM that validate our theory and suggest that the GRG might enable us to study complex systems beyond the reach of the other methods presently available and pave the way to interesting generalizations, e.g., to nonequilibrium transport in nanostructures.
A statistical method to estimate low-energy hadronic cross sections
NASA Astrophysics Data System (ADS)
Balassa, Gábor; Kovács, Péter; Wolf, György
2018-02-01
In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase-space description extended with quark combinatorial factors and the possibility of more than one fireball formation. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final-state probabilities. Additionally, we examined the formation of strange and charmed mesons as well, using existing data to fit the relevant model parameters.
Minimum current principle and variational method in theory of space charge limited flow
NASA Astrophysics Data System (ADS)
Rokhlenko, A.
2015-10-01
In the spirit of the principle of least action, when a perturbation is applied to a physical system, its reaction is such that it modifies its state to "agree" with the perturbation by a "minimal" change of its initial state. In particular, electron field emission should produce the minimum current consistent with the boundary conditions. This current can be found theoretically by solving the corresponding equations using different techniques. We apply here the variational method for the current calculation, which can be quite effective even with a short set of trial functions. The approach to a better result can be monitored by the total current, which should decrease when we are on the right track. Here, we present only an illustration for simple geometries of devices with electron flow. The development of these methods can be useful when the emitter and/or anode shapes make the use of standard approaches difficult. Direct numerical calculations, including the particle-in-cell technique, are very effective, but theoretical calculations can provide important insight for understanding general features of flow formation and can sometimes even be carried out with simpler routines.
Calculation of Compressible Flows past Aerodynamic Shapes by Use of the Streamline Curvature
NASA Technical Reports Server (NTRS)
Perl, W
1947-01-01
A simple approximate method is given for the calculation of isentropic irrotational flows past symmetrical airfoils, including mixed subsonic-supersonic flows. The method is based on the choice of suitable values for the streamline curvature in the flow field and the subsequent integration of the equations of motion. The method yields limiting solutions for potential flow. The effect of circulation is considered. A comparison of derived velocity distributions with existing results that are based on calculation to the third order in the thickness ratio indicated satisfactory agreement. The results are also presented in the form of a set of compressibility correction rules that lie between the Prandtl-Glauert rule and the von Karman-Tsien rule (approximately). The different rules correspond to different values of the local shape parameter √(YC_a), in which Y is the ordinate and C_a is the curvature at a point on an airfoil. Bodies of revolution, completely supersonic flows, and the significance of the limiting solutions for potential flow are also briefly discussed.
Spectral parameters and Hamaker constants of silicon hydride compounds and organic solvents.
Masuda, Takashi; Matsuki, Yasuo; Shimoda, Tatsuya
2009-12-15
Cyclopentasilane (CPS) and polydihydrosilane, which consist of hydrogen and silicon only, are unique materials that can be used to produce intrinsic silicon film in a liquid process, such as spin coating or an ink-jet method. Wettability and solubility of general organic solvents including the above can be estimated by Hamaker constants, which are calculated according to the Lifshitz theory. In order to calculate a Hamaker constant by the simple spectral method (SSM), it is necessary to obtain absorption frequency and function of oscillator strength in the ultraviolet region. In this report, these physical quantities were obtained by means of an optical method. As a result of examination of the relation between molecular structures and ultraviolet absorption frequencies, which were obtained from various liquid materials, it was concluded that ultraviolet absorption frequencies became smaller as electrons were delocalized. In particular, the absorption frequencies were found to be very small for CPS and polydihydrosilane due to sigma-conjugate of their electrons. The Hamaker constants of CPS and polydihydrosilane were successfully calculated based on the obtained absorption frequency and function of oscillator strength.
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, and simple, ordinary, universal, empirical Bayesian, and co-kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty from the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-kriging incorporating secondary data (e.g. topography, river levels).
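Of the nine methods compared, inverse distance weighting is the simplest to state, and a small sketch shows the shape of the computation (a generic implementation, not the study's GIS workflow). Since the interpolated heads feed directly into head-difference-based exchange estimates, method-to-method differences in this surface propagate straight into the calculated inter-aquifer exchange.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0):
    """Inverse distance weighting: z(x) = sum(w_i z_i) / sum(w_i),
    with w_i = 1 / d_i^power. xy_obs: (N, 2) well locations;
    z_obs: (N,) groundwater levels; xy_grid: (M, 2) target points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
    d = np.maximum(d, 1e-12)        # guard against exact coincidence
    w = d ** -power
    return (w @ z_obs) / w.sum(axis=1)

wells = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
heads = np.array([402.3, 401.8, 403.1, 402.6])       # m a.s.l. (invented)
grid = np.array([[0.5, 0.5], [0.2, 0.8]])
print(idw(wells, heads, grid))
```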
Increased Fidelity in Prediction Methods For Landing Gear Noise
NASA Technical Reports Server (NTRS)
Lopes, Leonard V.; Brentner, Kenneth S.; Morris, Philip J.; Lockard, David P.
2006-01-01
An aeroacoustic prediction scheme has been developed for landing gear noise. The method is designed to handle the complex landing gear geometry of current and future aircraft. The gear is represented by a collection of subassemblies and simple components that are modeled using acoustic elements. These acoustic elements are generic, but generate noise representative of the physical components on a landing gear. The method sums the noise radiation from each component of the undercarriage in isolation accounting for interference with adjacent components through an estimate of the local upstream and downstream flows and turbulence intensities. The acoustic calculations are made in the code LGMAP, which computes the sound pressure levels at various observer locations. The method can calculate the noise from the undercarriage in isolation or installed on an aircraft for both main and nose landing gear. Comparisons with wind tunnel and flight data are used to initially calibrate the method, then it may be used to predict the noise of any landing gear. In this paper, noise predictions are compared with wind tunnel data for model landing gears of various scales and levels of fidelity, as well as with flight data on fullscale undercarriages. The present agreement between the calculations and measurements suggests the method has promise for future application in the prediction of airframe noise.
Zhang, Rui; Taddei, Phillip J; Fitzek, Markus M; Newhauser, Wayne D
2010-05-07
Heavy charged particle beam radiotherapy for cancer is of increasing interest because it delivers a highly conformal radiation dose to the target volume. Accurate knowledge of the range of a heavy charged particle beam after it penetrates a patient's body or other materials in the beam line is very important and is usually stated in terms of the water equivalent thickness (WET). However, methods of calculating WET for heavy charged particle beams are lacking. Our objective was to test several simple analytical formulas previously developed for proton beams for their ability to calculate WET values for materials exposed to beams of protons, helium, carbon and iron ions. Experimentally measured heavy charged particle beam ranges and WET values from an iterative numerical method were compared with the WET values calculated by the analytical formulas. In most cases, the deviations were within 1 mm. We conclude that the analytical formulas originally developed for proton beams can also be used to calculate WET values for helium, carbon and iron ion beams with good accuracy.
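A sketch of the simplest style of analytical formula evaluated here, in which WET is the material thickness scaled by the density ratio and by the mean mass-stopping-power ratio of material to water. The numerical values below are illustrative assumptions, not the paper's data.

```python
def wet_simple(t_m, rho_m, rho_w, s_ratio):
    """WET ~ t_m * (rho_m / rho_w) * (Sbar_m / Sbar_w), where Sbar is the
    mean mass stopping power of the material (m) and of water (w),
    averaged over the beam's energy loss in the slab."""
    return t_m * (rho_m / rho_w) * s_ratio

# e.g. a 10 mm aluminum slab in a proton beam (assumed stopping-power ratio)
print(wet_simple(t_m=10.0, rho_m=2.70, rho_w=1.0, s_ratio=0.81), "mm")
```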
Quantifying Environmental Effects on the Decay of Hole Transfer Couplings in Biosystems.
Ramos, Pablo; Pavanello, Michele
2014-06-10
In the past two decades, many research groups worldwide have tried to understand and categorize simple regimes in the charge transfer of such biological systems as DNA. Theoretically speaking, the lack of exact theories for electron-nuclear dynamics on one side and poor quality of the parameters needed by model Hamiltonians and nonadiabatic dynamics alike (such as couplings and site energies) on the other are the two main difficulties for an appropriate description of the charge transfer phenomena. In this work, we present an application of a previously benchmarked and linear-scaling subsystem density functional theory (DFT) method for the calculation of couplings, site energies, and superexchange decay factors (β) of several biological donor-acceptor dyads, as well as double stranded DNA oligomers composed of up to five base pairs. The calculations are all-electron and provide a clear view of the role of the environment on superexchange couplings in DNA-they follow experimental trends and confirm previous semiempirical calculations. The subsystem DFT method is proven to be an excellent tool for long-range, bridge-mediated coupling and site energy calculations of embedded molecular systems.
Rasheed, Tabish; Ahmad, Shabbir
2010-10-01
Ab initio Hartree-Fock (HF), density functional theory (DFT) and second-order Møller-Plesset (MP2) methods were used to perform harmonic and anharmonic calculations for the biomolecule cytosine and its deuterated derivative. The anharmonic vibrational spectra were computed using the vibrational self-consistent field (VSCF) and correlation-corrected vibrational self-consistent field (CC-VSCF) methods. Calculated anharmonic frequencies have been compared with the argon matrix spectra reported in the literature. The results were analyzed with a focus on the properties of anharmonic couplings between pairs of modes. A simple, easy-to-use formula for calculating mode-mode coupling magnitudes has been derived. The key element in the present approach is the approximation that only interactions between pairs of normal modes are taken into account, while interactions of triples or more are neglected. FTIR and Raman spectra of solid-state cytosine have been recorded in the regions 400-4000 cm(-1) and 60-4000 cm(-1), respectively. Vibrational analysis and assignments are based on calculated potential energy distribution (PED) values. Copyright 2010 Elsevier B.V. All rights reserved.
Power flows and Mechanical Intensities in structural finite element analysis
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.
1989-01-01
The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations for beam, plate, and solid element types are derived. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view easily and efficiently the analysis results, applying shape optimization methods to the problem with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
Troe, J; Ushakov, V G
2006-06-01
This work describes a simple method linking specific rate constants k(E,J) of bond fission reactions AB --> A + B with thermally averaged capture rate constants k(cap)(T) of the reverse barrierless combination reactions A + B --> AB (or the corresponding high-pressure dissociation or recombination rate constants k(infinity)(T)). Practical applications are given for ionic and neutral reaction systems. The method, in the first stage, requires a phase-space theoretical treatment with the most realistic minimum energy path potential available, either from reduced-dimensionality ab initio or from model calculations of the potential, providing the centrifugal barriers E(0)(J). The effects of the anisotropy of the potential afterward are expressed in terms of specific and thermal rigidity factors f(rigid)(E,J) and f(rigid)(T), respectively. Simple relationships provide a link between f(rigid)(E,J) and f(rigid)(T), where the J used denotes an average value related to J(max)(E), i.e., the maximum J value compatible with E >= E(0)(J), and f(rigid)(E,J) applies to the transitional modes. Methods for constructing f(rigid)(E,J) from f(rigid)(T) are also described. The derived relationships are adaptable and can be used on the level of information which is available, either from more detailed theoretical calculations or from limited experimental information on specific or thermally averaged rate constants. The examples used for illustration are the systems C6H6+ <==> C6H5+ + H, C8H10+ --> C7H7+ + CH3, n-C9H12+ <==> C7H7+ + C2H5, n-C10H14+ <==> C7H7+ + C3H7, HO2 <==> H + O2, HO2 <==> HO + O, and H2O2 <==> 2HO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, E.B. Jr.
Various methods for the calculation of lower bounds for eigenvalues are examined, including those of Weinstein, Temple, Bazley and Fox, Gay, and Miller. It is shown how all of these can be derived in a unified manner by the projection technique. The alternate forms obtained for the Gay formula show how a considerably improved method can be readily obtained. Applied to the ground state of the helium atom with a simple screened hydrogenic trial function, this new method gives a lower bound closer to the true energy than the best upper bound obtained with this form of trial function. Possible routes to further improved methods are suggested.
NASA Astrophysics Data System (ADS)
Kajikawa, Kazuhiro; Funaki, Kazuo
2011-12-01
Application of an external AC magnetic field parallel to superconducting tapes helps in eliminating the magnetization caused by the shielding current induced in the flat faces of the tapes. This method helps in realizing a magnet system with high-temperature superconducting tapes for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR) applications. The effectiveness of the proposed method is validated by numerical calculations carried out using the finite-element method and experiments performed using a commercially available superconducting tape. The field uniformity for a single-layer solenoid coil after the application of an AC field is also estimated by a theoretical consideration.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko
2016-04-19
A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and experimental conversion factor (C(f)) value. We developed a simple screening method that avoids the need for a calibration curve and C(f) value. In this method, ΔC(q) values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔC(q) value between the analytical and control samples is used as the criterion for determining analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable independently with at least two models of PCR instruments used in this study.
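The ΔΔC(q) criterion lends itself to a short worked example. The sketch below assumes a sign convention in which a higher C(q) means less template, so a positive ΔΔC(q) relative to a threshold-level control indicates GM content below the labeling threshold; the numerical values are invented for illustration and are not from the interlaboratory study.

```python
# Hypothetical sketch of the delta-delta-Cq screening arithmetic; the
# threshold-control and sample Cq values below are illustrative only.

def delta_cq(cq_target: float, cq_endogenous: float) -> float:
    """Normalize the GM target signal to the endogenous maize gene."""
    return cq_target - cq_endogenous

def is_below_threshold(cq_t_sample, cq_e_sample, cq_t_control, cq_e_control) -> bool:
    """A sample is judged below the labeling threshold when its delta-Cq
    exceeds that of a control prepared at the threshold GM content
    (higher Cq = less template)."""
    ddcq = delta_cq(cq_t_sample, cq_e_sample) - delta_cq(cq_t_control, cq_e_control)
    return ddcq > 0.0

# Example: the sample amplifies the GM target 1.5 cycles later (relative to
# the endogenous gene) than the threshold-level control -> below threshold.
print(is_below_threshold(30.5, 22.0, 29.0, 22.0))  # True
```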
NASA Astrophysics Data System (ADS)
Brandelik, Andreas
2009-07-01
CALCMIN, an open source Visual Basic program, was implemented in EXCEL™. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals on the basis of chemical analysis mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included in the form of sub-routines. These routines are arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to easily develop new calculation routines or modify existing routines with little knowledge of programming techniques. By means of a simple mouse-click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework simple commands and functions, which are provided by the program, can be used, for example, to perform various normalization procedures or to output the results of the computations. For the clarity of the code, element symbols are used as variables initialized by the program automatically. CALCMIN does not set any boundaries in complexity of the code used, resulting in a wide range of possible applications. Thus, matrix and optimization methods can be included, for instance, to determine end member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as the automated read-in of output files created by the EMP. Furthermore, a subsequent filter routine enables the user to extract specific analyses in order to use them for a corresponding calculation routine. An event-driven, interactive operating mode was selected for easy application of the program. CALCMIN leads the user from the beginning to the end of the calculation process.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
The loading factor at the design point is calculated by one or another empirical formula in classical design methods. Performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. The equations are proposed with universal empirical coefficients. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
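Since the performance curve is a straight line fixed by two points, the model reduces to elementary arithmetic. A minimal sketch, with invented values for the zero-flow loading factor and the design point:

```python
# Minimal sketch of the two-point linear loading-factor model: the line
# through (0, psi_0) and the design point (phi_d, psi_d). All numbers are
# illustrative assumptions, not the paper's empirical coefficients.

def loading_factor(phi: float, psi_0: float, phi_d: float, psi_d: float) -> float:
    """Loading factor psi versus flow coefficient phi at the impeller exit."""
    slope = (psi_d - psi_0) / phi_d          # inclination of the line
    return psi_0 + slope * phi

# Example: psi_0 = 0.9 at zero flow, design point (phi_d, psi_d) = (0.30, 0.62)
for phi in (0.0, 0.15, 0.30, 0.35):
    print(f"phi = {phi:.2f}  psi = {loading_factor(phi, 0.9, 0.30, 0.62):.3f}")
```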
Canonical Representations of the Simple Map
NASA Astrophysics Data System (ADS)
Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima; Boozer, Allen
2007-11-01
The simple map is the simplest map that has the topology of a divertor tokamak. The simple map has three canonical representations: (i) toroidal flux and poloidal angle (ψ,θ) as canonical coordinates, (ii) the physical variables (R,Z) or (X,Y) as canonical coordinates, and (iii) the action-angle (J,ζ) or magnetic variables (ψ,θ) as canonical coordinates. We give the derivation of the simple map in the (X,Y) representation. The simple map in this representation has been studied extensively (Ref. 1 and references therein). We calculate the magnetic coordinates for the simple map, construct the simple map in magnetic coordinates, and calculate generic topological effects of magnetic perturbations in divertor tokamaks using the map. We also construct the simple map in (ψ,θ) representation. Preliminary results of these studies will be presented. This work is supported by US DOE OFES DE-FG02-01ER54624 and DE-FG02-04ER54793. [1] A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys Lett A 364 140--145 (2007).
Shan, Xiao; Clary, David C
2018-03-13
The rate constants of the two branches of H-abstractions from CH3OH by the H-atom and the corresponding reactions in the reverse direction are calculated using the one-dimensional semiclassical transition state theory (1D SCTST). In this method, only the reaction-mode vibration of the transition state (TS) is treated anharmonically, while the remaining internal degrees of freedom are treated as they would have been in a standard TS theory calculation. A total of eight ab initio single-point energy calculations are performed in addition to the computational cost of a standard TS theory calculation. This allows a second-order Richardson extrapolation method to be employed to improve the numerical estimation of the third- and fourth-order derivatives, which in turn are used in the calculation of the anharmonic constant. Hindered-rotor (HR) vibrations are identified in the equilibrium states of CH3OH and CH2OH, and the TSs of the reactions. The partition functions of the HRs are calculated using both a simple harmonic oscillator model and a more sophisticated one-dimensional torsional eigenvalue summation (1D TES) method. The 1D TES method can be easily adapted in 1D SCTST computation. The resulting 1D SCTST with 1D TES rate constants show good agreement with previous theoretical and experimental work. The effects of the HR on rate constants for different reactions are also investigated. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).
Seike, Yasushi; Fukumori, Ryoko; Senga, Yukiko; Oka, Hiroki; Fujinaga, Kaoru; Okumura, Minoru
2004-01-01
A new and simple method for the determination of hydroxylamine in environmental water, such as fresh rivers and lakes, has been developed; hydroxylamine is converted to nitrous oxide using hypochlorite, followed by gas chromatographic detection. A glass vial filled with sample water was sealed by a butyl-rubber stopper and aluminum cap without head-space, and then sodium hypochlorite solution was injected into the vial through a syringe to convert hydroxylamine to nitrous oxide. The head-space in the glass vial was prepared with 99.9% grade N2 using a gas-tight syringe. After the glass vial was shaken for a few minutes, nitrous oxide in the gas phase was measured by a gas chromatograph with an electron-capture detector. The dissolved nitrous oxide in the liquid phase was calculated according to the solubility formula. The proposed method was applied to the analysis of fresh-water samples taken from the Iu and Hii rivers, which flow into the brackish Lakes Nakaumi and Shinji, respectively.
Port, Johannes; Tao, Ziran; Junger, Annika; Joppek, Christoph; Tempel, Philipp; Husemann, Kim; Singer, Florian; Latzin, Philipp; Yammine, Sophie; Nagel, Joachim H; Kohlhäufl, Martin
2017-11-01
For the assessment of small airway diseases, a noninvasive double-tracer gas single-breath washout (DTG-SBW) with sulfur hexafluoride (SF6) and helium (He) as tracer components has been proposed. It is assumed that small airway diseases may produce typical ventilation inhomogeneities which can be detected within a single tidal breath when using two tracer components. Characteristic parameters calculated from a relative molar mass (MM) signal of the airflow during the washout expiration phase are analyzed. The DTG-SBW signal is acquired by subtracting a reconstructed MM signal without tracer gas from the signal measured with an ultrasonic sensor during in- and exhalation of the double-tracer gas for one tidal breath. In this paper, a simple method to determine the reconstructed MM signal is presented. Measurements on subjects with and without obstructive lung diseases involving the small airways have shown high reliability and reproducibility of this method.
Yan, Jingjing; Huang, Xin; Liu, Shaopu; Yang, Jidong; Yuan, Yusheng; Duan, Ruilin; Zhang, Hui; Hu, Xiaoli
2016-01-01
A simple, rapid and effective method for auramine O (AO) detection was proposed using fluorescence and UV-Vis absorption spectroscopy. In the BR buffer system (pH 7.0), AO strongly quenched the fluorescence of bovine serum albumin (BSA) through a dynamic quenching mechanism. From the thermodynamic parameters calculated as ΔH > 0 and ΔS > 0, the resulting binding of BSA and AO was mainly attributed to hydrophobic interaction forces. The linearity of this method was in the concentration range from 0.16 to 50 μmol L(-1) with a detection limit of 0.05 μmol L(-1). Based on fluorescence resonance energy transfer (FRET), the distance r (1.36 nm) between the donor (BSA) and acceptor (AO) was obtained. Furthermore, the effects of foreign substances and ionic strength were evaluated under the optimum reaction conditions. BSA as a selective probe could be applied to the analysis of AO in medicines with satisfactory results.
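The donor-acceptor distance quoted above follows from the standard Förster relation. A minimal sketch, with an illustrative transfer efficiency and Förster radius (the abstract does not state the actual E and R0, so the values below are assumptions):

```python
# Hedged sketch of the standard Foerster relation used to obtain a
# donor-acceptor distance r from a transfer efficiency E and a Foerster
# radius R0; both input numbers below are hypothetical.

def fret_distance(efficiency: float, r0_nm: float) -> float:
    """Invert E = R0^6 / (R0^6 + r^6) for the donor-acceptor distance r."""
    return r0_nm * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

print(f"r = {fret_distance(0.75, 1.63):.2f} nm")  # illustrative E and R0
```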
Direct generation of an optical vortex beam in a single-frequency Nd:YVO4 laser.
Kim, D J; Kim, J W
2015-02-01
A simple method for generating a Laguerre-Gaussian (LG) mode optical vortex beam with well-determined handedness in a single-frequency solid-state laser end-pumped by a ring-shaped pump beam is reported. After investigating the intensity profile and the wavefront helicity of each longitudinal mode output to understand generation of the LG mode in a Nd:YVO4 laser resonator, selection of the wavefront handedness has been achieved simply by inserting and tilting an etalon in the resonator, which breaks the propagation symmetry of the Poynting vectors with opposite helicity. A simple calculation and the experimental results are discussed in support of this selection mechanism.
NASA Astrophysics Data System (ADS)
Tao, Jiangchuan; Zhao, Chunsheng; Kuang, Ye; Zhao, Gang; Shen, Chuanyang; Yu, Yingli; Bian, Yuxuan; Xu, Wanyun
2018-02-01
The number concentration of cloud condensation nuclei (CCN) plays a fundamental role in cloud physics. Instruments for the direct measurement of CCN number concentration (NCCN) based on chamber technology are complex and costly; thus a simple way of measuring NCCN is needed. In this study, a new method for NCCN calculation based on measurements of a three-wavelength humidified nephelometer system is proposed. A three-wavelength humidified nephelometer system can measure the aerosol light-scattering coefficient (σsp) at three wavelengths and the light-scattering enhancement factor (fRH). The Ångström exponent (Å) inferred from σsp at three wavelengths provides information on the mean predominant aerosol size, and the hygroscopicity parameter (κ) can be calculated from the combination of fRH and Å. Given this, a lookup table that includes σsp, κ and Å is established to predict NCCN. Due to the preconditions for its application, this new method is not suitable for externally mixed particles, large particles (e.g., dust and sea salt) or fresh aerosol particles. This method is validated against direct measurements of NCCN using a CCN counter on the North China Plain. Results show that relative deviations between calculated NCCN and measured NCCN are within 30% and confirm the robustness of this method. This method enables simpler NCCN measurements because the humidified nephelometer system is easily operated and stable. Compared with the method using a CCN counter, another advantage of this newly proposed method is that it can obtain NCCN at lower supersaturations in the ambient atmosphere.
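The Ångström-exponent step of the scheme is simple enough to state in a few lines. A hedged sketch with invented scattering coefficients; the κ retrieval from fRH and the (σsp, κ, Å) lookup table itself are more involved and are not reproduced here:

```python
import numpy as np

# Sketch of inferring the Angstrom exponent from scattering coefficients at
# two of the three nephelometer wavelengths; sigma values are illustrative.

def angstrom_exponent(sigma_1, lambda_1_nm, sigma_2, lambda_2_nm):
    """Angstrom exponent from sigma_sp ~ lambda**(-A)."""
    return -np.log(sigma_1 / sigma_2) / np.log(lambda_1_nm / lambda_2_nm)

sigma_450, sigma_700 = 120e-6, 60e-6   # m^-1, hypothetical
A = angstrom_exponent(sigma_450, 450.0, sigma_700, 700.0)
print(f"Angstrom exponent = {A:.2f}")
# In the paper's scheme, A and the kappa inferred from f(RH) would then index
# a precomputed (sigma_sp, kappa, A) lookup table to yield NCCN.
```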
Portal dosimetry for VMAT using integrated images obtained during treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedford, James L., E-mail: James.Bedford@icr.ac.uk; Hanson, Ian M.; Hansen, Vibeke Nordmark
2014-02-15
Purpose: Portal dosimetry provides an accurate and convenient means of verifying dose delivered to the patient. A simple method for carrying out portal dosimetry for volumetric modulated arc therapy (VMAT) is described, together with phantom measurements demonstrating the validity of the approach. Methods: Portal images were predicted by projecting dose in the isocentric plane through to the portal image plane, with exponential attenuation and convolution with a double-Gaussian scatter function. Appropriate parameters for the projection were selected by fitting the calculation model to portal images measured on an iViewGT portal imager (Elekta AB, Stockholm, Sweden) for a variety of phantom thicknesses and field sizes. This model was then used to predict the portal image resulting from each control point of a VMAT arc. Finally, all these control point images were summed to predict the overall integrated portal image for the whole arc. The calculated and measured integrated portal images were compared for three lung and three esophagus plans delivered to a thorax phantom, and three prostate plans delivered to a homogeneous phantom, using a gamma index for 3% and 3 mm. A 0.6 cm(3) ionization chamber was used to verify the planned isocentric dose. The sensitivity of this method to errors in monitor units, field shaping, gantry angle, and phantom position was also evaluated by means of computer simulations. Results: The calculation model for portal dose prediction was able to accurately compute the portal images due to simple square fields delivered to solid water phantoms. The integrated images of VMAT treatments delivered to phantoms were also correctly predicted by the method. The proportion of the images with a gamma index of less than unity was 93.7% ± 3.0% (1 SD), and the difference between isocenter dose calculated by the planning system and measured by the ionization chamber was 0.8% ± 1.0%. The method was highly sensitive to errors in monitor units and field shape, but less sensitive to errors in gantry angle or phantom position. Conclusions: This method of predicting integrated portal images provides a convenient means of verifying dose delivered using VMAT, with minimal image acquisition and data processing requirements.
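The forward model named in the Methods (projection, exponential attenuation, convolution with a double-Gaussian scatter function) can be sketched as follows; the attenuation coefficient, kernel weights and widths are illustrative assumptions, not the fitted iViewGT parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch of the portal-image forward model: attenuate the projected
# isocentric dose exponentially with radiological thickness, then convolve
# with a two-component Gaussian scatter kernel. All parameters are invented.

def predict_portal_image(dose_iso, thickness_cm, mu_per_cm=0.05,
                         w=0.8, sigma_narrow_px=1.5, sigma_broad_px=12.0):
    primary = dose_iso * np.exp(-mu_per_cm * thickness_cm)  # attenuation
    # Double-Gaussian scatter: two Gaussian filters, weighted and summed.
    return (w * gaussian_filter(primary, sigma_narrow_px)
            + (1.0 - w) * gaussian_filter(primary, sigma_broad_px))

# Integrated VMAT image: sum the prediction over all control points, e.g.
# image = sum(predict_portal_image(d, t) for d, t in control_points)
```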
Shayesteh, Tavakol Heidari; Khajavi, Farzad; Khosroshahi, Abolfazl Ghafuri; Mahjub, Reza
2016-01-01
The determination of blood lead levels is the most useful indicator of the amount of lead that is absorbed by the human body. Various methods, like atomic absorption spectroscopy (AAS), have already been used for the detection of lead in biological fluids, but most of these methods are based on complicated, expensive, and highly specialized instruments. In this study, a simple and accurate spectroscopic method for the determination of lead has been developed and applied to the investigation of lead concentration in biological samples. A silica gel column was used to extract lead and eliminate interfering agents in human serum samples. The column was washed with deionized water. The pH was adjusted to the value of 8.2 using phosphate buffer, and then tartrate and cyanide solutions were added as masking agents. The lead content was extracted into the organic phase containing dithizone as a complexing reagent; the dithizone-Pb(II) complex was formed and confirmed by visible spectrophotometry at 538 nm. The recovery was found to be 84.6%. In order to validate the method, a calibration curve involving the use of various concentration levels was constructed and proven to be linear in the range of 0.01-1.5 μg/ml, with an R(2) regression coefficient of 0.9968 by statistical analysis of linear model validation. The largest error % values were found to be -5.80 and +11.6% for intra-day and inter-day measurements, respectively. The largest RSD % values were calculated to be 6.54 and 12.32% for intra-day and inter-day measurements, respectively. Further, the limit of detection (LOD) was calculated to be 0.002 μg/ml. The developed method was applied to determine the lead content in the human serum of volunteer miners, and it has been shown that there is no statistically significant difference between the data provided by this novel method and the data obtained from previously studied AAS.
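As a worked illustration of the calibration-curve validation, the sketch below fits a line and applies the common 3.3σ/slope convention for the detection limit; both the data and the convention are assumptions, not the authors' exact procedure:

```python
import numpy as np

# Hedged sketch: fit a calibration line, compute R^2, and estimate a
# detection limit with the common 3.3*sigma/slope rule. Data are made up.

conc = np.array([0.01, 0.1, 0.5, 1.0, 1.5])        # ug/ml standards
absorbance = np.array([0.012, 0.095, 0.48, 0.95, 1.45])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                       # residual standard deviation

r2 = 1 - (residuals**2).sum() / ((absorbance - absorbance.mean())**2).sum()
lod = 3.3 * sigma / slope
print(f"slope={slope:.3f}  R^2={r2:.4f}  LOD={lod:.4f} ug/ml")
```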
A Study on Group Key Agreement in Sensor Network Environments Using Two-Dimensional Arrays
Jang, Seung-Jae; Lee, Young-Gu; Lee, Kwang-Hyung; Kim, Tai-Hoon; Jun, Moon-Seog
2011-01-01
These days, with the emergence of the concept of ubiquitous computing, sensor networks that collect, analyze and process all the information through the sensors have become of huge interest. However, sensor network technology fundamentally relies on a wireless communication infrastructure and thus has security weaknesses and limitations such as low computing capacity, power supply limitations and price. In this paper, considering the characteristics of the sensor network environment, we propose a group key agreement method using a key-set pre-distribution of two-dimensional arrays that should minimize the exposure of keys and personal information. Key collision problems are resolved by utilizing a polygonal shape's center of gravity. The method shows that calculating a polygonal shape's center of gravity requires only a very small amount of calculation from the users. This simple calculation not only increases the group key generation efficiency, but also enhances security by protecting information between nodes. PMID:22164072
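The center-of-gravity step is computationally trivial, which is the point of the scheme. A minimal sketch, assuming each member contributes one 2-D vertex; how the vertices are derived from the pre-distributed two-dimensional key arrays is not reproduced here:

```python
# Minimal sketch of the centroid step: the shared value is the average of
# the vertex coordinates (the center of gravity of the contributed points).
# The coordinates below are illustrative, not protocol values.

def center_of_gravity(vertices):
    """Average of 2-D vertex coordinates contributed by group members."""
    n = len(vertices)
    cx = sum(x for x, _ in vertices) / n
    cy = sum(y for _, y in vertices) / n
    return cx, cy

print(center_of_gravity([(3, 1), (7, 5), (2, 6)]))  # (4.0, 4.0)
```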
NASA Astrophysics Data System (ADS)
Hummel, Sebastian; Bogner, Martin; Haub, Michael; Saegebarth, Joachim; Sandmaier, Hermann
2017-11-01
This paper presents a new simple analytical method to estimate the properties of falling droplets without solving complex differential equations. The derivation starts from the balance of forces and uses Newton's second law and the equations of motion to calculate the volume of growing and detaching droplets and the time between two successive droplets falling out of a thin cylindrical capillary of borosilicate glass. In this specific case the reservoir is located above the capillary, and the hydrostatic pressure of the fluid level leads to drop formation times of about one second. In the second part of this paper, experimental results are presented to validate the introduced calculation method. It is shown that the new approach describes the measured results within a deviation of ±6.2%. The third part of the paper sums up the advantages of the new approach, and an outlook is given on how the research on this topic will be continued.
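For orientation, the classical static force balance at detachment (Tate's law) already gives the right scale for the drop volume and the roughly one-second formation time; the paper's dynamic treatment goes further. A sketch with illustrative values for water and a thin glass capillary:

```python
import math

# Tate's law sketch: at detachment, weight balances the surface tension
# acting around the capillary rim, rho*g*V = 2*pi*r*gamma. This ignores the
# Harkins-Brown correction and the dynamics treated in the paper.

def drop_volume_m3(r_capillary_m, gamma_n_per_m=0.072, rho=1000.0, g=9.81):
    return 2.0 * math.pi * r_capillary_m * gamma_n_per_m / (rho * g)

def drop_period_s(r_capillary_m, flow_rate_m3_per_s):
    """Time between successive drops at a steady feed rate."""
    return drop_volume_m3(r_capillary_m) / flow_rate_m3_per_s

V = drop_volume_m3(0.5e-3)                  # 0.5 mm capillary radius
print(f"V = {V*1e9:.1f} mm^3, period = {drop_period_s(0.5e-3, 2e-8):.1f} s")
```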
View-limiting shrouds for insolation radiometers
NASA Technical Reports Server (NTRS)
Dennison, E. W.; Trentelman, G. F.
1985-01-01
Insolation radiometers (normal incidence pyrheliometers) are used to measure the solar radiation incident on solar concentrators for calibrating thermal power generation measurements. The measured insolation value is dependent on the atmospheric transparency, solar elevation angle, circumsolar radiation, and radiometer field of view. The radiant energy entering the thermal receiver is dependent on the same factors. The insolation value and the receiver input will be proportional if the concentrator and the radiometer have similar fields of view. This report describes one practical method for matching the field of view of a radiometer to that of a solar concentrator. The concentrator field of view can be calculated by optical ray tracing methods and the field of view of a radiometer with a simple shroud can be calculated by using geometric equations. The parameters for the shroud can be adjusted to provide an acceptable match between the respective fields of view. Concentrator fields of view have been calculated for a family of paraboloidal concentrators and receiver apertures. The corresponding shroud parameters have also been determined.
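The geometric equations for a simple shroud amount to two arctangents. A sketch for a generic aperture-plus-detector shroud; the dimensions are illustrative, not those of a particular pyrheliometer or concentrator:

```python
import math

# Sketch of the field-of-view geometry of a simple cylindrical shroud: a
# detector of radius r_det behind an aperture of radius r_ap at distance L.

def full_view_half_angle_deg(r_ap_mm, r_det_mm, length_mm):
    """Largest angle at which any part of the detector still sees the sky."""
    return math.degrees(math.atan((r_ap_mm + r_det_mm) / length_mm))

def unvignetted_half_angle_deg(r_ap_mm, r_det_mm, length_mm):
    """Angle within which the whole detector is illuminated."""
    return math.degrees(math.atan((r_ap_mm - r_det_mm) / length_mm))

print(full_view_half_angle_deg(25.0, 5.0, 300.0))   # ~5.7 deg
print(unvignetted_half_angle_deg(25.0, 5.0, 300.0)) # ~3.8 deg
```

Adjusting these shroud parameters until the half-angles bracket the concentrator's ray-traced field of view is the matching procedure the report describes.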
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.
2015-05-01
This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest owing to the use of complex techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing the computational complexity. The proposed method of saliency map detection is based on both image- and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail considered in this paper demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Saliency Toolbox framework in terms of accuracy and speed.
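The abstract does not spell out the algorithm, so the sketch below shows a well-known computationally light frequency-space saliency method in the same spirit (the spectral residual of Hou and Zhang), not the authors' exact procedure:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    """Saliency from the deviation of the log-amplitude spectrum from its
    local average (Hou & Zhang style), followed by spatial smoothing."""
    spectrum = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amp - uniform_filter(log_amp, size=3)  # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=3)         # smooth the map
    return saliency / saliency.max()

# Example with random "image" data:
print(spectral_residual_saliency(np.random.rand(64, 64)).shape)  # (64, 64)
```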
Models of convection-driven tectonic plates - A comparison of methods and results
NASA Technical Reports Server (NTRS)
King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.
1992-01-01
Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained using each method on a series of simple calculations are compared. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.
Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm
NASA Astrophysics Data System (ADS)
Yang, Pao-Keng
2011-09-01
We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of spectral reflectance by removing the spectral broadening effect due to the finite bandwidth of the light-emitting diode (LED) from it. The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable to the case when the probing LEDs have different bandwidths in different spectral ranges, to which the powerful deconvolution method cannot be applied.
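The abstract does not give the iteration, so the following is a hedged sketch of one standard iterative de-broadening scheme (Van Cittert) that matches the description of removing LED bandwidth broadening without a direct deconvolution; the kernel and data are illustrative:

```python
import numpy as np

def van_cittert(measured, kernel, n_iter=20):
    """f_{k+1} = f_k + (g - h*f_k): iterative correction of broadening,
    where g is the measured spectrum and h the normalized LED profile."""
    kernel = kernel / kernel.sum()
    estimate = measured.copy()
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, kernel, mode="same")
        estimate = estimate + (measured - reblurred)   # correction step
    return estimate

wavelengths = np.linspace(400.0, 700.0, 301)           # 1 nm grid
led = np.exp(-0.5 * (np.arange(-15, 16) / 6.0) ** 2)   # ~14 nm FWHM profile
true_r = 0.3 + 0.4 * (wavelengths > 550)               # step-like reflectance
measured = np.convolve(true_r, led / led.sum(), mode="same")
corrected = van_cittert(measured, led)
print(np.abs(measured - true_r).mean(),                # error before ...
      np.abs(corrected - true_r).mean())               # ... and after
```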
Analysis of the Defect Structure of B2 FeAl Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Ferrante, John; Noebe, Ronald D.; Amador, Carlos
1995-01-01
The Bozzolo, Ferrante and Smith (BFS) method for alloys is applied to the study of the defect structure of B2 FeAl alloys. First-principles Linear Muffin Tin Orbital calculations are used to determine the input parameters to the BFS method used in this work. The calculations successfully determine the phase field of the B2 structure, as well as the dependence of the lattice parameter on composition. Finally, the method is used to perform 'static' simulations where, instead of determining the ground-state configuration of the alloy with a certain concentration of vacancies, a large number of candidate ordered structures are studied and compared, in order to determine not only the lowest-energy configurations but other possible metastable states as well. The results provide a description of the defect structure consistent with available experimental data. The simplicity of the BFS method also allows for a simple explanation of some of the essential features found in the concentration dependence of the heat of formation, lattice parameter and defect structure.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
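The Monte Carlo quantile step can be illustrated in a few lines. The model function and parameter ranges below are invented stand-ins for a calibrated ground-water model, not the hypothetical example of the paper:

```python
import numpy as np

# Hedged sketch: sample parameters within stated extreme ranges, propagate
# them through the model, and take quantiles of the output.

rng = np.random.default_rng(1)

def model_output(k, s):             # hypothetical model prediction, e.g. head
    return 10.0 + 2.0 * np.log10(k) - 0.5 * s

k = rng.uniform(1e-5, 1e-3, 10000)  # e.g. hydraulic conductivity range
s = rng.uniform(1e-4, 1e-3, 10000)  # e.g. storage coefficient range
out = model_output(k, s)

lo, hi = np.percentile(out, [2.5, 97.5])
print(f"95% interval: [{lo:.2f}, {hi:.2f}]")
```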
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
Using the Climbing Drum Peel (CDP) Test to Obtain a G(sub IC) value for Core/Facesheet Bonds
NASA Technical Reports Server (NTRS)
Nettles, A. T.; Gregory, Elizabeth D.; Jackson, Justin R.
2006-01-01
A method of measuring the Mode I fracture toughness of core/facesheet bonds in sandwich structures is desired, particularly with the widespread use of models that need these data as input. This study examined whether a critical strain energy release rate, G(sub IC), can be obtained from the climbing drum peel (CDP) test. The CDP test is relatively simple to perform and does not rely on measuring small crack lengths such as required by the double cantilever beam (DCB) test. Simple energy methods were used to calculate G(sub IC) from CDP test data on composite facesheets bonded to a honeycomb core. Facesheet thicknesses from 2 to 5 plies were tested to examine the upper and lower bounds on facesheet thickness requirements. Results from the study suggest that the CDP test, with certain provisions, can be used to find the G(sub IC) value of a core/facesheet bond.
Entangled trajectories Hamiltonian dynamics for treating quantum nuclear effects
NASA Astrophysics Data System (ADS)
Smith, Brendan; Akimov, Alexey V.
2018-04-01
A simple and robust methodology, dubbed Entangled Trajectories Hamiltonian Dynamics (ETHD), is developed to capture quantum nuclear effects such as tunneling and zero-point energy through the coupling of multiple classical trajectories. The approach reformulates the classically mapped second-order Quantized Hamiltonian Dynamics (QHD-2) in terms of coupled classical trajectories. The method partially enforces the uncertainty principle and facilitates tunneling. The applicability of the method is demonstrated by studying the dynamics in symmetric double well and cubic metastable state potentials. The methodology is validated using exact quantum simulations and is compared to QHD-2. We illustrate its relationship to the rigorous Bohmian quantum potential approach, from which ETHD can be derived. Our simulations show a remarkable agreement of the ETHD calculation with the quantum results, suggesting that ETHD may be a simple and inexpensive way of including quantum nuclear effects in molecular dynamics simulations.
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
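For reference, the functional and its minimizer are usually quoted in the following form (a sketch from the general literature on this approach; s denotes the collective variables, F(s) the free energy, and p(s) a preset target distribution):

```latex
% Variational functional of the bias potential V(s) and its minimizer,
% as commonly stated for this approach.
\Omega[V] = \frac{1}{\beta}\,
  \log \frac{\int \! ds\, e^{-\beta\left[F(s)+V(s)\right]}}
            {\int \! ds\, e^{-\beta F(s)}}
  \;+\; \int \! ds\, p(s)\, V(s),
\qquad
V_{\min}(s) = -F(s) - \frac{1}{\beta}\log p(s) + \text{const.}
```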
Sayers, Adrian; Ben-Shlomo, Yoav; Blom, Ashley W; Steele, Fiona
2016-01-01
Studies involving the use of probabilistic record linkage are becoming increasingly common. However, the methods underpinning probabilistic record linkage are not widely taught or understood, and therefore these studies can appear to be a 'black box' research tool. In this article, we aim to describe the process of probabilistic record linkage through a simple exemplar. We first introduce the concept of deterministic linkage and contrast this with probabilistic linkage. We illustrate each step of the process using a simple exemplar and describe the data structure required to perform a probabilistic linkage. We describe the process of calculating and interpreting match weights and how to convert match weights into posterior probabilities of a match using Bayes' theorem. We conclude this article with a brief discussion of some of the computational demands of record linkage, how you might assess the quality of your linkage algorithm, and how epidemiologists can maximize the value of their record-linked research using robust record linkage methods. PMID:26686842
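The weight-and-Bayes arithmetic the article walks through can be condensed into a few lines. A sketch with invented m-probabilities, u-probabilities and prior odds:

```python
import math

# Fellegi-Sunter style sketch: per-field match weights from m- and
# u-probabilities, summed and converted to a posterior match probability
# via Bayes' theorem. All numbers are illustrative.

def field_weight(agrees: bool, m: float, u: float) -> float:
    return math.log2(m / u) if agrees else math.log2((1 - m) / (1 - u))

fields = [  # (agreement observed, m-probability, u-probability)
    (True, 0.95, 0.01),   # e.g. surname agrees
    (True, 0.90, 0.05),   # e.g. birth year agrees
    (False, 0.98, 0.002), # e.g. postcode disagrees
]
total_weight = sum(field_weight(a, m, u) for a, m, u in fields)

prior_odds = 1 / 1000            # assumed chance a random pair is a match
posterior_odds = prior_odds * 2 ** total_weight
p_match = posterior_odds / (1 + posterior_odds)
print(f"total weight = {total_weight:.2f}, P(match) = {p_match:.3f}")
```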
Software for determining the true displacement of faults
NASA Astrophysics Data System (ADS)
Nieto-Fuentes, R.; Nieto-Samaniego, Á. F.; Xu, S.-S.; Alaniz-Álvarez, S. A.
2014-03-01
One of the most important parameters of faults is the true (or net) displacement, which is measured by restoring two originally adjacent points, called “piercing points”, to their original positions. This measurement is typically not possible because piercing points are rarely observed in natural outcrops. Much more common is the measurement of the apparent displacement of a marker. Methods to calculate the true displacement of faults using descriptive geometry, trigonometry or vector algebra are common in the literature, and most of them solve a specific situation from a large number of possible combinations of the fault parameters. True displacements are not routinely calculated because it is a tedious and tiring task, despite their importance and the relatively simple methodology. We believe that the solution is to develop software capable of performing this work. In a previous publication, our research group proposed a method to calculate the true displacement of faults by solving most combinations of fault parameters using simple trigonometric equations. The purpose of this contribution is to present a computer program for calculating the true displacement of faults. The input data are the dip of the fault; the pitch angles of the markers, slickenlines and observation lines; and the marker separation. To avoid the common difficulties involved in switching between operating systems, the software is developed using the Java programming language. The computer program could be used as a tool in education and will also be useful for the calculation of the true fault displacement in geological and engineering works. The application resolves the cases with a known direction of net slip, which is commonly assumed to be parallel to the slickenlines. This assumption is not always valid and must be used with caution, because the slickenlines are formed during a step of the incremental displacement on the fault surface, whereas the net slip is related to the finite slip.
The Productivity Dilemma in Workplace Health Promotion.
Cherniack, Martin
2015-01-01
Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion (WHP)) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing contention in health economics that the return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and their results reconsidered. A simple economic model is presented based on conventional and alternative assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of three conceptual dilemmas. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically changes the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.
A fast, parallel algorithm for distance-dependent calculation of crystal properties
NASA Astrophysics Data System (ADS)
Stein, Matthew
2017-12-01
A fast, parallel algorithm for distance-dependent calculation and simulation of crystal properties is presented, along with speedup results and methods of application. An illustrative example is used to compute the Lennard-Jones lattice constants up to 32 significant figures for 4 ≤ p ≤ 30 in the simple cubic, face-centered cubic, body-centered cubic, hexagonal close-packed, and diamond lattices. In most cases, the known precision of these constants is more than doubled, and in some cases corrected from previously published figures. The tools and strategies to make this computation possible are detailed, along with application to other potentials, including those that model defects.
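For orientation, the quantity being computed is the lattice sum Σ' 1/|n|^p over all nonzero lattice vectors. The brute-force version below (simple cubic only, direct truncated summation) illustrates the definition; it converges far too slowly for the 32-digit results above, which require the paper's fast parallel algorithm:

```python
from itertools import product

# Direct truncated lattice sum for the simple cubic lattice:
# L_p = sum over nonzero integer vectors n of 1/|n|^p.

def lj_lattice_sum_sc(p: int, shells: int = 40) -> float:
    total = 0.0
    for i, j, k in product(range(-shells, shells + 1), repeat=3):
        if (i, j, k) == (0, 0, 0):
            continue
        total += (i * i + j * j + k * k) ** (-p / 2.0)
    return total

print(lj_lattice_sum_sc(6))   # ~8.40 for the simple cubic lattice
print(lj_lattice_sum_sc(12))  # ~6.20
```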
Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model
NASA Astrophysics Data System (ADS)
Welford, W. T.; Winston, R.
1982-09-01
Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.
Density Functional Theory Calculations of the Role of Defects in Amorphous Silicon Solar Cells
NASA Astrophysics Data System (ADS)
Johlin, Eric; Wagner, Lucas; Buonassisi, Tonio; Grossman, Jeffrey C.
2010-03-01
Amorphous silicon holds promise as a cheap and efficient material for thin-film photovoltaic devices. However, current device efficiencies are severely limited by the low mobility of holes in the bulk amorphous silicon material, the cause of which is not yet fully understood. This work employs a statistical analysis of density functional theory calculations to uncover the implications of a range of defects (including internal strain and substitution impurities) on the trapping and mobility of holes, and thereby also on the total conversion efficiency. We investigate the root causes of this low mobility and attempt to provide suggestions for simple methods of improving this property.
Computer Program for Point Location And Calculation of ERror (PLACER)
Granato, Gregory E.
1999-01-01
A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study-site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as the USGS Geographic Names Information System, which is available on the World Wide Web.
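The interpolation PLACER performs is ordinary linear interpolation in a user-defined map coordinate system. A minimal sketch (in Python rather than the program's FORTRAN) with invented reference features:

```python
# Sketch of interpolating a plotted site's latitude/longitude from the map
# coordinates of two known reference features. All numbers are illustrative.

def interpolate(value, ref0, ref1):
    """Linear interpolation between two (map_coord, geo_coord) references."""
    (m0, g0), (m1, g1) = ref0, ref1
    return g0 + (value - m0) * (g1 - g0) / (m1 - m0)

# Reference features read off a published site map (map units vs. degrees):
x_refs = [(0.0, -71.250), (10.0, -71.100)]   # longitude references
y_refs = [(0.0, 42.300), (8.0, 42.380)]      # latitude references

site_x, site_y = 4.2, 3.1
lon = interpolate(site_x, *x_refs)
lat = interpolate(site_y, *y_refs)
print(f"site at ({lat:.4f}, {lon:.4f})")  # uncertainty set by map precision
```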
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we have constructed a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. We have discretized the time variable with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps, where at each time step the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. The proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome. Moreover, it does not need to calculate the generalized Lagrange basis and matrices explicitly, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective, reliable and does not require any restrictive assumptions for the nonlinear terms.
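The Crank-Nicolson half of the scheme is easy to exhibit. The sketch below applies it to a 1-D Fokker-Planck equation, but with plain central finite differences in space rather than the paper's GLJGL collocation; the drift, diffusion coefficient, grid and time step are assumptions:

```python
import numpy as np

# Crank-Nicolson time stepping for P_t = -(A(x) P)_x + D P_xx, with central
# finite differences in space (NOT the paper's GLJGL collocation).

N, D, dt, steps = 200, 0.1, 1e-3, 2000
x = np.linspace(-3.0, 3.0, N)
h = x[1] - x[0]
A = -x                                   # Ornstein-Uhlenbeck-like drift

L = np.zeros((N, N))                     # spatial operator
for i in range(1, N - 1):
    L[i, i - 1] = A[i - 1] / (2 * h) + D / h**2
    L[i, i] = -2 * D / h**2
    L[i, i + 1] = -A[i + 1] / (2 * h) + D / h**2

I_N = np.eye(N)
step_matrix = np.linalg.solve(I_N - 0.5 * dt * L,   # implicit half
                              I_N + 0.5 * dt * L)   # explicit half

P = np.exp(-(x - 1.0) ** 2 / 0.05)       # initial density, off-center
P /= np.trapz(P, x)
for _ in range(steps):                   # march the CN update in time
    P = step_matrix @ P

print(f"mass = {np.trapz(P, x):.4f}, mean = {np.trapz(x * P, x):.4f}")
```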
Simulation of Thermographic Responses of Delaminations in Composites with Quadrupole Method
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.; Cramer, K. Elliott
2016-01-01
The application of the quadrupole method for simulating thermal responses of delaminations in carbon fiber reinforced epoxy composite materials is presented. The method solves for the flux at the interface containing the delamination. From the interface flux, the temperature at the surface is calculated. While the results presented are for single-sided measurements with flash heating, expansion of the technique to arbitrary temporal flux heating or through-transmission measurements is simple. The quadrupole method is shown to have two distinct advantages relative to finite element or finite difference techniques. First, it is straightforward to incorporate arbitrarily shaped delaminations into the simulation. Second, the quadrupole method enables calculation of the thermal response at only the times of interest. This, combined with a significant reduction in the number of degrees of freedom for the same simulation quality, results in a reduction of the computation time by at least an order of magnitude. Therefore, it is a more viable technique for model-based inversion of thermographic data. Results for simulations of delaminations in composites are presented and compared to measurements and finite element method results.
Kim, Yong-Il; Im, Hyung-Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E; Kang, Keon Wook; Chung, June-Key; Lee, Dong Soo
2012-12-01
(18)F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, (18)F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five (18)F-FP-CIT PET images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOIs using isocontour margin setting. The uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were compared with visual grading, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both QSPAM and QUVP were significantly different according to visual grading (P < 0.001). The agreements of QUVP or QSPAM with visual grading were slight to fair for the caudate nucleus (κ = 0.421 and 0.291, respectively) and good to perfect for the putamen (κ = 0.663 and 0.607, respectively). Also, QSPAM and QUVP had a significant correlation with each other (P < 0.001). Cerebral atrophy made a significant difference in the QSPAM and QUVP of caudate nucleus regions with decreased (18)F-FP-CIT uptake. The simple quantitative measurements QSPAM and QUVP showed acceptable agreement with visual grading. Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of (18)F-FP-CIT PET in usual clinical practice.
Proposal for a quantitative index of flood disasters.
Feng, Lihua; Luo, Gaoyuan
2010-07-01
Drawing on calculations of wind scale and earthquake magnitude, this paper develops a new quantitative method for measuring flood magnitude and disaster intensity. Flood magnitude is the quantitative index that describes the scale of a flood; the flood's disaster intensity is the quantitative index describing the losses caused. Both indices have numerous theoretical and practical advantages with definable concepts and simple applications, which lend them key practical significance.
Acoustics and dynamics of coaxial interacting vortex rings
NASA Technical Reports Server (NTRS)
Shariff, Karim; Leonard, Anthony; Zabusky, Norman J.; Ferziger, Joel H.
1988-01-01
Using a contour dynamics method for inviscid axisymmetric flow we examine the effects of core deformation on the dynamics and acoustic signatures of coaxial interacting vortex rings. Both 'passage' and 'collision' (head-on) interactions are studied for initially identical vortices. Good correspondence with experiments is obtained. A simple model which retains only the elliptic degree of freedom in the core shape is used to explain some of the calculated features.
Finitized conformal spectrum of the Ising model on the cylinder and torus
NASA Astrophysics Data System (ADS)
O'Brien, David L.; Pearce, Paul A.; Ole Warnaar, S.
1996-02-01
The spectrum of the critical Ising model on a lattice with cylindrical and toroidal boundary conditions is calculated by commuting transfer matrix methods. Using a simple truncation procedure, we obtain the natural finitizations of the conformal spectra recently proposed by Melzer. These finitizations imply polynomial identities which in the large-lattice limit give rise to the Rogers-Ramanujan identities for the c = 1/2 Virasoro characters.
NASA Astrophysics Data System (ADS)
Weersink, Robert A.; Chaudhary, Sahil; Mayo, Kenwrick; He, Jie; Wilson, Brian C.
2017-04-01
We develop and demonstrate a simple shape-based approach for diffuse optical tomographic reconstruction of coagulative lesions generated during interstitial photothermal therapy (PTT) of the prostate. The shape-based reconstruction assumes a simple ellipsoid shape, matching the general dimensions of a cylindrical diffusing fiber used for light delivery in current clinical studies of PTT in focal prostate cancer. The specific requirement is to accurately define the border between the photothermal lesion and native tissue as the photothermal lesion grows, with an accuracy of ≤1 mm, so treatment can be terminated before there is damage to the rectal wall. To demonstrate the feasibility of the shape-based diffuse optical tomography reconstruction, simulated data were generated based on forward calculations in known geometries that include the prostate, rectum, and lesions of varying dimensions. The only source of optical contrast between the lesion and prostate was increased scattering in the lesion, as is typically observed with coagulation. With noise added to these forward calculations, lesion dimensions were reconstructed using the shape-based method. This approach for reconstruction is shown to be feasible and sufficiently accurate for lesions that are within 4 mm from the rectal wall. The method was also robust for irregularly shaped lesions.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
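The cost-benefit bookkeeping can be sketched directly. Below, a standard normal-approximation sample size for accuracy assessment is combined with invented per-site costs; neither the formula choice nor the figures are from the study:

```python
import math

# Sketch of sample-size-versus-cost bookkeeping: a normal-approximation
# sample size n = z^2 p (1-p) / d^2 combined with per-site costs.
# Expected accuracy, half-width targets and costs are all illustrative.

def sample_size(expected_accuracy=0.85, half_width=0.05, z=1.96):
    p = expected_accuracy
    return math.ceil(z * z * p * (1 - p) / half_width**2)

def total_cost(n, transport_per_site=12.0, field_per_site=8.0, lab_per_site=5.0):
    return n * (transport_per_site + field_per_site + lab_per_site)

for d in (0.05, 0.03, 0.02):
    n = sample_size(half_width=d)
    print(f"half-width {d:.2f}: n = {n:4d}, cost = ${total_cost(n):,.0f}")
```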
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Three-dimensional Monte Carlo calculation of atmospheric thermal heating rates
NASA Astrophysics Data System (ADS)
Klinger, Carolin; Mayer, Bernhard
2014-09-01
We present a fast Monte Carlo method for thermal heating and cooling rates in three-dimensional atmospheres. These heating/cooling rates are relevant particularly in broken cloud fields. We compare forward and backward photon tracing methods and present new variance reduction methods to speed up the calculations. For this application it turns out that backward tracing is in most cases superior to forward tracing. Since heating rates may be calculated either as the difference between emitted and absorbed power per volume or alternatively from the divergence of the net flux, both approaches have been tested. We found that the absorption/emission method is superior (with respect to computational time for a given uncertainty) if the optical thickness of the grid box under consideration is smaller than about 5, while the net flux divergence may be considerably faster for larger optical thickness. In particular, we describe the following three backward tracing methods: the first and simplest method (EMABS) is based on a random emission of photons in the grid box of interest and a simple backward tracing. Since only those photons which cross the grid box boundaries contribute to the heating rate, this approach behaves poorly for the large optical thicknesses which are common in the thermal spectral range. For this reason, the second method (EMABS_OPT) uses a variance reduction technique to improve the distribution of the photons in such a way that more photons are started close to the grid box edges and thus contribute to the result, which reduces the uncertainty. The third method (DENET) uses the flux divergence approach where - in backward Monte Carlo - all photons contribute to the result, but in particular for small optical thickness the noise becomes large. The three methods have been implemented in MYSTIC (Monte Carlo code for the phYSically correct Tracing of photons In Cloudy atmospheres). All methods are shown to agree within the photon noise with each other and with a discrete ordinate code for a one-dimensional case. Finally, a hybrid method is built using a combination of EMABS_OPT and DENET, and application examples are shown. It should be noted that for this application, only little improvement is gained by EMABS_OPT compared to EMABS.