Software development for teleroentgenogram analysis
NASA Astrophysics Data System (ADS)
Goshkoderov, A. A.; Khlebnikov, N. A.; Obabkov, I. N.; Serkov, K. V.; Gajniyarov, I. M.; Aliev, A. A.
2017-09-01
A framework for the analysis and calculation of teleroentgenograms was developed. The software was developed in the Department of Children's Dentistry and Orthodontics at Ural State Medical University, and it calculates teleroentgenograms by an original method devised in that department. The program also allows users to design their own methods for calculating teleroentgenograms. It is planned to incorporate machine learning (neural networks) into the software; this will simplify the calculation of teleroentgenograms because the methodological points will be placed automatically.
Reddy, M Rami; Singh, U C; Erion, Mark D
2004-05-26
Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
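The estimator at the core of any FEP implementation, conventional or QM/MM-based, is the Zwanzig exponential average, ΔG = -kT ln⟨exp(-ΔU/kT)⟩₀. A minimal sketch in Python; the energy-difference samples and temperature are hypothetical stand-ins for the output of a simulation engine:

```python
import numpy as np

def fep_free_energy(delta_U, temperature=300.0):
    """Zwanzig estimator: dG = -kT * ln < exp(-dU/kT) >_0, where
    delta_U are energy differences U_1 - U_0 (kcal/mol) sampled
    from the reference state 0."""
    kT = 0.0019872041 * temperature  # Boltzmann constant, kcal/(mol K)
    return -kT * np.log(np.mean(np.exp(-np.asarray(delta_U) / kT)))

# Hypothetical samples of U_1 - U_0 from a simulation of state 0
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.2, scale=0.5, size=10_000)
print(f"dG ~ {fep_free_energy(samples):.3f} kcal/mol")
```

In practice the samples come from the MD trajectory of the reference state (here, the QM/MM-treated solute), and the transformation is broken into many small windows whose ΔG values are summed.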
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
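The adaptive scheme itself is not reproduced in the abstract; the sketch below shows only the underlying fixed-density importance-sampling estimate of a bottom-event failure probability P[g(X) < 0], with the sampling density shifted toward an assumed design point:

```python
import numpy as np

def failure_probability_is(g, mu_shift, n=100_000, seed=1):
    """Importance-sampling estimate of P[g(X) < 0] for standard normal
    X, drawing samples from a normal density shifted to mu_shift."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, len(mu_shift))) + mu_shift
    # likelihood ratio p(x)/q(x) for unit-variance normal densities
    w = np.exp(-x @ mu_shift + 0.5 * mu_shift @ mu_shift)
    return np.mean((g(x) < 0) * w)

# Hypothetical limit state: failure when x1 + x2 > 5
g = lambda x: 5.0 - x[:, 0] - x[:, 1]
p = failure_probability_is(g, mu_shift=np.array([2.5, 2.5]))
print(p)  # near the exact value 1 - Phi(5/sqrt(2)) ~ 2.0e-4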
Integral method for the calculation of three-dimensional, laminar and turbulent boundary layers
NASA Technical Reports Server (NTRS)
Stock, H. W.
1978-01-01
The method for turbulent flows is a further development of an existing method; profile families with two parameters and a lag-entrainment method replace the simple entrainment method and one-parameter power profiles. The method for laminar flows is a new development. Moment-of-momentum equations were used to solve the problem, and the profile families were derived from similar solutions of the boundary layer equations. Laminar and turbulent flows on wings were calculated, and the influence of wing taper on the boundary layer development was shown. The turbulent boundary layer on an ellipsoid of revolution is calculated for 0 deg and 10 deg angles of incidence.
Solution of Cubic Equations by Iteration Methods on a Pocket Calculator
ERIC Educational Resources Information Center
Bamdad, Farzad
2004-01-01
A method is developed to show students how they can write iteration programs on an inexpensive programmable pocket calculator, without requiring a PC or a graphing calculator. Two iteration methods are used: the successive-approximations and bisection methods.
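Both iteration schemes named above are standard and fit in a few lines; a sketch for the sample cubic x³ − x − 2 = 0 (an equation chosen here purely for illustration):

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection: halve [a, b] while keeping a sign change inside."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def successive_approximation(g, x0, n=100):
    """Fixed-point iteration x <- g(x)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

f = lambda x: x**3 - x - 2
print(bisection(f, 1, 2))
# rewrite x^3 - x - 2 = 0 as x = (x + 2)^(1/3), a contraction near the root
print(successive_approximation(lambda x: (x + 2) ** (1 / 3), 1.5))
```

Both print the real root, approximately 1.5214.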
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which offers the options to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software for calculating GFR by various plasma sampling methods and blood parameters, and it also serves as a good system for storing raw and processed data for future analysis.
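Of the serum-creatinine formulas listed, Cockcroft-Gault is the simplest to illustrate; a sketch of that single calculation (age in years, weight in kg, serum creatinine in mg/dL):

```python
def cockcroft_gault(age, weight_kg, serum_cr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault formula:
    CrCl = (140 - age) * weight / (72 * SCr), times 0.85 for women."""
    crcl = (140 - age) * weight_kg / (72 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

print(cockcroft_gault(age=55, weight_kg=70, serum_cr_mg_dl=1.1))  # ~75 mL/min
```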
Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli
2001-01-01
A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. Actual LES calculations, performed in three spatial directions, indicated an initial vortex shedding followed by rapid transition to turbulence, in agreement with experimental observations.
Non-perturbative background field calculations
NASA Astrophysics Data System (ADS)
Stephens, C. R.
1988-01-01
New methods are developed for calculating one loop functional determinants in quantum field theory. Instead of relying on a calculation of all the eigenvalues of the small fluctuation equation, these techniques exploit the ability of the proper time formalism to reformulate an infinite dimensional field theoretic problem into a finite dimensional covariant quantum mechanical analog, thereby allowing powerful tools such as the method of Jacobi fields to be used advantageously in a field theory setting. More generally the methods developed herein should be extremely valuable when calculating quantum processes in non-constant background fields, offering a utilitarian alternative to the two standard methods of calculation—perturbation theory in the background field or taking the background field into account exactly. The formalism developed also allows for the approximate calculation of covariances of partial differential equations from a knowledge of the solutions of a homogeneous ordinary differential equation.
Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.
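A sensitivity coefficient here is the fractional change in k-effective per fractional change in a cross section, S = (dk/k)/(dσ/σ); a sketch of the direct-perturbation estimate that such reference comparisons typically use (the k values below are invented):

```python
def sensitivity_direct_perturbation(k_nominal, k_perturbed, rel_xs_change):
    """S = (dk/k) / (dSigma/Sigma). k_perturbed is k-effective
    recomputed with the cross section scaled by (1 + rel_xs_change)."""
    return ((k_perturbed - k_nominal) / k_nominal) / rel_xs_change

# Hypothetical numbers: a +2% change in a capture cross section
# lowering k-effective from 1.00150 to 1.00035
print(sensitivity_direct_perturbation(1.00150, 1.00035, 0.02))  # ~ -0.057
```

Monte Carlo methods such as CLUTCH estimate the same quantity from a single run, avoiding one perturbed calculation per nuclide-reaction pair.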
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMO of proteins, we are engaged in the research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for the SCF procedure, which is more advanced than the conventional file-based and direct methods. Our method is based on Cholesky decomposition for the two-electron integral calculation and the modified grid-free method for the pure-XC term evaluation. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be obtained by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
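The role of the Cholesky decomposition in the scheme described above is to factor the two-electron integral matrix so that Coulomb-type terms reduce to dense linear algebra; a toy sketch, with a random symmetric positive-definite matrix standing in for the real ERI matrix:

```python
import numpy as np

n = 4  # number of basis functions (toy size)
rng = np.random.default_rng(0)

# Stand-in for the ERI matrix V[(mu,nu),(lam,sig)]; real ERIs are PSD
A = rng.normal(size=(n * n, n * n))
V = A @ A.T + n * n * np.eye(n * n)

L = np.linalg.cholesky(V)  # V = L @ L.T
D = rng.normal(size=(n, n))
D = D + D.T                # stand-in symmetric density matrix

# Coulomb term J_mn = sum_ls V[mn,ls] D[ls] via the Cholesky factor:
# two matrix-vector products, i.e. BLAS-friendly dense linear algebra
J = (L @ (L.T @ D.reshape(-1))).reshape(n, n)

assert np.allclose(J, (V @ D.reshape(-1)).reshape(n, n))
```

Production codes use a pivoted, truncated Cholesky factorization so that only a modest number of Cholesky vectors need to be stored, but the contraction pattern is the same.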
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
USDA-ARS?s Scientific Manuscript database
Objective: To develop and evaluate a method for calculating the Healthy Eating Index-2005 (HEI-2005) with the widely used Nutrition Data System for Research (NDSR) based on the method developed for use with the US Department of Agriculture’s (USDA) Food and Nutrient Dietary Data System (FNDDS) and M...
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1994-01-01
New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
Neutron skyshine calculations with the integral line-beam method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-10-01
Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.
NASA Technical Reports Server (NTRS)
Maskew, B.
1976-01-01
A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
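The CADIS relations this builds on can be stated compactly: with an adjoint flux estimate φ† and total response estimate R, the biased source is q̂ = qφ†/R and the weight-window center is R/φ†; FW-CADIS differs in how the adjoint source is formed (weighted by the inverse forward response). A schematic sketch under those definitions, on a hypothetical 1-D mesh:

```python
import numpy as np

def cadis_parameters(q, adj_flux):
    """Given a source distribution q and an adjoint flux estimate on the
    same mesh, return the biased source and weight-window centers
    (CADIS relations: q_hat = q*phi_dag/R, w = R/phi_dag)."""
    R = np.sum(q * adj_flux)   # estimated total response
    q_hat = q * adj_flux / R   # biased source (sums to 1 if q does)
    w_center = R / adj_flux    # target statistical weights
    return q_hat, w_center

# Hypothetical 1-D mesh: uniform source, adjoint flux decaying with depth
q = np.full(10, 0.1)
adj_flux = np.exp(-0.5 * np.arange(10))
q_hat, w = cadis_parameters(q, adj_flux)
print(q_hat.sum(), w[:3])  # deep cells get small q_hat and large weights
```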
Transport Test Problems for Hybrid Methods Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Separation behavior of boundary layers on three-dimensional wings
NASA Technical Reports Server (NTRS)
Stock, H. W.
1981-01-01
An inverse boundary layer procedure for calculating separated, turbulent boundary layers on an infinitely long yawed wing was developed. The procedure, originally developed for three-dimensional, incompressible turbulent boundary layers, was extended to adiabatic, compressible flows. Example calculations for transonic wings, including viscous effects, were made. An approximate calculation method is described for regions of separated, turbulent boundary layers, permitting calculation of the displacement thickness there. The laminar boundary layer development was calculated for inclined ellipsoids.
NASA Astrophysics Data System (ADS)
Zolotorevskii, V. S.; Pozdnyakov, A. V.; Churyumov, A. Yu.
2012-11-01
A calculation-experimental study is carried out to improve the concept of searching for new alloying systems in order to develop new casting alloys using mathematical simulation methods in combination with thermodynamic calculations. The results show the high effectiveness of the applied methods. The real possibility of selecting the promising compositions with the required set of casting and mechanical properties is exemplified by alloys with thermally hardened Al-Cu and Al-Cu-Mg matrices, as well as poorly soluble additives that form eutectic components using mainly the calculation study methods and the minimum number of experiments.
Comparison of methods for developing the dynamics of rigid-body systems
NASA Technical Reports Server (NTRS)
Ju, M. S.; Mansour, J. M.
1989-01-01
Several approaches for developing the equations of motion for a three-degree-of-freedom PUMA robot were compared on the basis of computational efficiency (i.e., the number of additions, subtractions, multiplications, and divisions). Of particular interest was the investigation of the use of computer algebra as a tool for developing the equations of motion. Three approaches were implemented algebraically: Lagrange's method, Kane's method, and Wittenburg's method. Each formulation was developed in absolute and relative coordinates. These six cases were compared to each other and to a recursive numerical formulation. The results showed that all of the formulations implemented algebraically required fewer calculations than the recursive numerical algorithm. The algebraic formulations required fewer calculations in absolute coordinates than in relative coordinates. Each of the algebraic formulations could be simplified, using patterns from Kane's method, to yield the same number of calculations in a given coordinate system.
Calculation of Organ Doses for a Large Number of Patients Undergoing CT Examinations.
Bahadori, Amir; Miglioretti, Diana; Kruger, Randell; Flynn, Michael; Weinmann, Sheila; Smith-Bindman, Rebecca; Lee, Choonsik
2015-10-01
The objective of our study was to develop an automated calculation method to provide organ dose assessment for a large cohort of pediatric and adult patients undergoing CT examinations. We adopted two dose libraries that were previously published: the volume CT dose index-normalized organ dose library and the tube current-exposure time product (100 mAs)-normalized weighted CT dose index library. We developed an algorithm to calculate organ doses using the two dose libraries and the CT parameters available from DICOM data. We calculated organ doses for pediatric (n = 2499) and adult (n = 2043) CT examinations randomly selected from four health care systems in the United States and compared the adult organ doses with the values calculated from the ImPACT calculator. The median brain dose was 20 mGy (pediatric) and 24 mGy (adult), and the brain dose was greater than 40 mGy for 11% (pediatric) and 18% (adult) of the head CT studies. Both the National Cancer Institute (NCI) and ImPACT methods provided similar organ doses (median discrepancy < 20%) for all organs except the organs located close to the scanning boundaries. The visual comparisons of scanning coverage and phantom anatomies revealed that the NCI method, which is based on realistic computational phantoms, provides more accurate organ doses than the ImPACT method. The automated organ dose calculation method developed in this study reduces the time needed to calculate doses for a large number of patients. We have successfully used this method for a variety of CT-related studies including retrospective epidemiologic studies and CT dose trend analysis studies.
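The dose libraries reduce the per-examination work to lookups and multiplications: an organ dose is the scan's volume CT dose index times a phantom- and coverage-specific coefficient. A schematic sketch; the coefficient values are invented placeholders, not library data:

```python
# CTDIvol-normalized organ dose coefficients (mGy per mGy of CTDIvol);
# the values here are hypothetical placeholders, not library entries
ORGAN_COEFF = {"brain": 0.95, "thyroid": 0.12, "lung": 0.01}

def organ_dose(ctdi_vol_mgy, organ, coeffs=ORGAN_COEFF):
    """Organ dose = CTDIvol x library coefficient for the matched
    phantom, scan coverage, and beam quality."""
    return ctdi_vol_mgy * coeffs[organ]

print(organ_dose(ctdi_vol_mgy=25.0, organ="brain"))  # 23.75 mGy
```

Automating the pipeline then amounts to extracting CTDIvol (or mAs plus the normalized weighted CTDI) and the scan range from the DICOM headers and applying the lookup per organ.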
A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES
A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect ...
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
NASA Astrophysics Data System (ADS)
Fedin, M. A.; Kuvaldin, A. B.; Kuleshov, A. O.; Zhmurko, I. Y.; Akhmetyanov, S. V.
2018-01-01
Calculation methods for induction crucible furnaces with a conductive crucible have been reviewed and compared. A calculation method for the electrical and energy characteristics of furnaces with a conductive crucible has been developed, and an example calculation is presented. The calculation results are compared with experimental data. Dependences of the electrical and power characteristics of the furnace on frequency, inductor current, geometric dimensions, and temperature have been obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeda, T.; Shimazu, Y.; Hibi, K.
2012-07-01
Under the R&D project to improve the modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial-type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, the inner duct tube in large fuel assemblies, and the large core. The verification and validation, and uncertainty quantification (V&V and UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial-type demonstration FBR, known as the Japan Sodium-cooled Fast Reactor (JSFR).
Ray-tracing in three dimensions for radiation-dose calculations. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, D.R.
1986-05-27
This thesis addresses several methods of calculating the radiation-dose distribution for use by technicians or clinicians in radiation-therapy treatment planning. It specifically covers the calculation of the effective pathlength of the radiation beam for use in beam models representing the dose distribution. A two-dimensional method by Bentley and Milan is compared to the method of strip trees developed by Duda and Hart, and a three-dimensional algorithm is then built to perform the calculations in three dimensions. The use of prisms conforms easily to the obtained CT scans and provides a means of doing only two-dimensional ray-tracing while performing three-dimensional dose calculations. This method is already being applied and used in actual calculations.
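Effective pathlength is the geometric path through each voxel weighted by that voxel's relative density; a small 2-D sketch using uniform step sampling along the ray (not the thesis's prism or strip-tree machinery):

```python
import numpy as np

def effective_pathlength(density, p0, p1, n_steps=10_000):
    """Approximate radiological path between points p0 and p1 (voxel
    units) by sampling the density grid at points along the ray."""
    density = np.asarray(density, dtype=float)
    t = (np.arange(n_steps) + 0.5) / n_steps
    pts = np.outer(1 - t, p0) + np.outer(t, p1)  # points along the ray
    ij = np.clip(pts.astype(int), 0, np.array(density.shape) - 1)
    step = np.linalg.norm(np.subtract(p1, p0)) / n_steps
    return step * density[ij[:, 0], ij[:, 1]].sum()

# Hypothetical slice: water (1.0) with a denser bone-like insert (1.8)
grid = np.ones((100, 100))
grid[40:60, 40:60] = 1.8
print(effective_pathlength(grid, (0.0, 50.0), (99.0, 50.0)))  # ~115
```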
Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, C. M.; Rearden, B. T.
2013-07-01
Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)
Polyatomic molecular Dirac-Hartree-Fock calculations with Gaussian basis sets
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Faegri, Knut, Jr.; Taylor, Peter R.
1990-01-01
Numerical methods have been used successfully in atomic Dirac-Hartree-Fock (DHF) calculations for many years. Some DHF calculations using numerical methods have been done on diatomic molecules, but while these serve a useful purpose for calibration, the computational effort in extending this approach to polyatomic molecules is prohibitive. An alternative more in line with traditional quantum chemistry is to use an analytical basis set expansion of the wave function. This approach fell into disrepute in the early 1980s due to problems with variational collapse and intruder states, but has recently been put on firm theoretical foundations. In particular, the problems of variational collapse are well understood, and prescriptions for avoiding the most serious failures have been developed. Consequently, it is now possible to develop reliable molecular programs using basis set methods. This paper describes such a program and reports results of test calculations to demonstrate the convergence and stability of the method.
EuroFIR Guideline on calculation of nutrient content of foods for food business operators.
Machackova, Marie; Giertlova, Anna; Porubska, Janka; Roe, Mark; Ramos, Carlos; Finglas, Paul
2018-01-01
This paper presents a Guideline for calculating nutrient content of foods by calculation methods for food business operators and presents data on compliance between calculated values and analytically determined values. In the EU, calculation methods are legally valid to determine the nutrient values of foods for nutrition labelling (Regulation (EU) No 1169/2011). However, neither a specific calculation method nor rules for use of retention factors are defined. EuroFIR AISBL (European Food Information Resource) has introduced a Recipe Calculation Guideline based on the EuroFIR harmonized procedure for recipe calculation. The aim is to provide food businesses with a step-by-step tool for calculating nutrient content of foods for the purpose of nutrition declaration. The development of this Guideline and use in the Czech Republic is described and future application to other Member States is discussed.
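Read in outline, the harmonized recipe calculation adjusts each ingredient's nutrient contribution by a retention factor, sums the contributions, and rescales by the cooked yield; a sketch under that reading (the factor values are illustrative, not EuroFIR data):

```python
def nutrient_per_100g(ingredients, yield_factor):
    """ingredients: list of (grams, nutrient_per_100g_raw, retention_factor).
    Returns nutrient content per 100 g of the prepared dish."""
    total_nutrient = sum(g * n / 100.0 * rf for g, n, rf in ingredients)
    cooked_weight = sum(g for g, _, _ in ingredients) * yield_factor
    return 100.0 * total_nutrient / cooked_weight

# Hypothetical dish: 200 g potato + 50 g butter; vitamin C in mg/100 g,
# with a 75% cooking retention for the potato and a 0.9 weight yield
dish = [(200, 17.0, 0.75), (50, 0.0, 1.0)]
print(nutrient_per_100g(dish, yield_factor=0.9))  # ~11.3 mg per 100 g
```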
Skeletal dosimetry based on µCT images of trabecular bone: update and comparisons
NASA Astrophysics Data System (ADS)
Kramer, R.; Cassola, V. F.; Vieira, J. W.; Khoury, H. J.; de Oliveira Lira, C. A. B.; Robson Brown, K.
2012-06-01
Two skeletal dosimetry methods using µCT images of human bone have recently been developed: the paired-image radiation transport (PIRT) model introduced by researchers at the University of Florida (UF) in the US and the systematic-periodic cluster (SPC) method developed by researchers at the Federal University of Pernambuco in Brazil. Both methods use µCT images of trabecular bone (TB) to model spongiosa regions of human bones containing marrow cavities segmented into soft tissue volumes of active marrow (AM), trabecular inactive marrow and the bone endosteum (BE), which is a 50 µm thick layer of marrow on all TB surfaces and on cortical bone surfaces next to TB as well as inside the medullary cavities. With respect to the radiation absorbed dose, the AM and the BE are sensitive soft tissues for the induction of leukaemia and bone cancer, respectively. The two methods differ mainly with respect to the number of bone sites and the size of the µCT images used in Monte Carlo calculations and they apply different methods to simulate exposure from radiation sources located outside the skeleton. The PIRT method calculates dosimetric quantities in isolated human bones while the SPC method uses human bones embedded in the body of a phantom which contains all relevant organs and soft tissues. Consequently, the SPC method calculates absorbed dose to the AM and to the BE from particles emitted by radionuclides concentrated in organs or from radiation sources located outside the human body in one calculation step. In order to allow for similar calculations of AM and BE absorbed doses using the PIRT method, the so-called dose response functions (DRFs) have been developed based on absorbed fractions (AFs) of energy for electrons isotropically emitted in skeletal tissues. The DRFs can be used to transform the photon fluence in homogeneous spongiosa regions into absorbed dose to AM and BE. This paper will compare AM and BE AFs of energy from electrons emitted in skeletal tissues calculated with the SPC and the PIRT method and AM and BE absorbed doses and AFs calculated with PIRT-based DRFs and with the SPC method. The results calculated with the two skeletal dosimetry methods agree well if one takes the differences between the two models properly into account. Additionally, the SPC method will be updated with larger µCT images of TB.
Improvement of calculation method for electrical parameters of short network of ore-thermal furnaces
NASA Astrophysics Data System (ADS)
Aliferov, A. I.; Bikeev, R. A.; Goreva, L. P.
2017-10-01
The paper describes a new method for calculating the active and inductive resistance of split interleaved current-lead packages in ore-thermal electric furnaces. The method is developed on the basis of regression analysis of the dependencies of the active and inductive resistances of the packages on their geometrical parameters, mutual disposition, and interleaving pattern. These multi-parametric calculations have been performed with ANSYS software. The proposed method allows the problems of minimizing and balancing the electrical parameters of split current leads in ore-thermal furnaces to be solved.
A method of calculating the ultimate strength of continuous beams
NASA Technical Reports Server (NTRS)
Newlin, J A; Trayer, George W
1931-01-01
The purpose of this study was to investigate the strength of continuous beams after the elastic limit has been passed. As a result, a method of calculation, which is applicable to maximum load conditions, has been developed. The method is simpler than the methods now in use and it applies properly to conditions where the present methods fail to apply.
Research on Streamlines and Aerodynamic Heating for Unstructured Grids on High-Speed Vehicles
NASA Technical Reports Server (NTRS)
DeJarnette, Fred R.; Hamilton, H. Harris (Technical Monitor)
2001-01-01
Engineering codes are needed which can calculate convective heating rates accurately and expeditiously on the surfaces of high-speed vehicles. One code which has proven to meet these needs is the Langley Approximate Three-Dimensional Convective Heating (LATCH) code. It uses the axisymmetric analogue in an integral boundary-layer method to calculate laminar and turbulent heating rates along inviscid surface streamlines. It requires the solution of the inviscid flow field to provide the surface properties needed to calculate the streamlines and streamline metrics. The LATCH code has been used with inviscid codes which calculated the flow field on structured grids, Several more recent inviscid codes calculate flow field properties on unstructured grids. The present research develops a method to calculate inviscid surface streamlines, the streamline metrics, and heating rates using the properties calculated from inviscid flow fields on unstructured grids. Mr. Chris Riley, prior to his departure from NASA LaRC, developed a preliminary code in the C language, called "UNLATCH", to accomplish these goals. No publication was made on his research. The present research extends and improves on the code developed by Riley. Particular attention is devoted to the stagnation region, and the method is intended for programming in the FORTRAN 90 language.
NASA Technical Reports Server (NTRS)
Pesetskaya, N. N.; Timofeev, I. YA.; Shipilov, S. D.
1988-01-01
In recent years much attention has been given to the development of methods and programs for calculating the aerodynamic characteristics of multiblade, saber-shaped air propellers. Most existing methods are based on the theory of lifting lines; elsewhere, the theory of a lifting surface is used to calculate screw and lifting propellers. In this work, discrete vortex methods are described for calculating the aerodynamic characteristics of propellers using the linear and nonlinear theories of lifting surfaces.
Development and Application of Collaborative Optimization Software for Plate-Fin Heat Exchangers
NASA Astrophysics Data System (ADS)
Chunzhen, Qiao; Ze, Zhang; Jiangfeng, Guo; Jian, Zhang
2017-12-01
This paper introduces the design ideas behind, and application examples of, calculation software for plate-fin heat exchangers. Because of the large amount of computation involved in designing and optimizing heat exchangers, we used Visual Basic 6.0 as the software development platform to build a basic calculation program that reduces this computational burden. The design condition is a plate-fin heat exchanger designed for boiler tail flue gas, and the software is based on the traditional design method for plate-fin heat exchangers. Using the software for the design and calculation of plate-fin heat exchangers effectively reduces the amount of computation while giving results comparable to traditional methods, and it therefore has high practical value.
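The "traditional design method" for such exchangers is typically an effectiveness-NTU calculation; a sketch of the counterflow relation, assumed here since the abstract does not name the flow arrangement:

```python
import math

def effectiveness_counterflow(ntu, cr):
    """Counterflow effectiveness-NTU relation; cr = Cmin/Cmax."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Hypothetical duty: NTU = 3, Cr = 0.8
eps = effectiveness_counterflow(3.0, 0.8)
print(f"effectiveness = {eps:.3f}")  # heat duty Q = eps * Cmin * (Th_in - Tc_in)
```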
Frequency-domain multiscale quantum mechanics/electromagnetics simulation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Lingyi; Yin, Zhenyu; Yam, ChiYung, E-mail: yamcy@yangtze.hku.hk, E-mail: ghc@everest.hku.hk
A frequency-domain quantum mechanics and electromagnetics (QM/EM) method is developed. Compared with the time-domain QM/EM method [Meng et al., J. Chem. Theory Comput. 8, 1190–1199 (2012)], the newly developed frequency-domain QM/EM method could effectively capture the dynamic properties of electronic devices over a broader range of operating frequencies. The system is divided into QM and EM regions and solved in a self-consistent manner via updating the boundary conditions at the QM and EM interface. The calculated potential distributions and current densities at the interface are taken as the boundary conditions for the QM and EM calculations, respectively, which facilitate the information exchange between the QM and EM calculations and ensure that the potential, charge, and current distributions are continuous across the QM/EM interface. Via Fourier transformation, the dynamic admittance calculated from the time-domain and frequency-domain QM/EM methods is compared for a carbon nanotube based molecular device.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high level theoretical calculations show the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
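The kinetics step of such a workflow evaluates the canonical TST expression k(T) = κ(T)(k_BT/h)(Q‡/Q_react)exp(−E₀/RT); a sketch of that final evaluation, with placeholder values standing in for what the electronic-structure and frequency calculations would supply:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J s
R = 8.314462618     # gas constant, J/(mol K)

def tst_rate(T, q_ratio, e0_kj_mol, kappa=1.0):
    """Canonical TST: k = kappa * (kB*T/h) * (Q_ts/Q_reactants)
    * exp(-E0/RT). q_ratio carries the units of the rate constant;
    kappa is a tunneling correction (1.0 = none)."""
    return kappa * (KB * T / H) * q_ratio * math.exp(-e0_kj_mol * 1e3 / (R * T))

# Hypothetical unimolecular reaction: Q_ts/Q_react = 0.1, 100 kJ/mol barrier
print(tst_rate(1000.0, 0.1, 100.0))  # s^-1, ~1e7
```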
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without using repetitive calculation of a cost function. To adjust output currents with the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding zero voltage vectors which produce the CMVs in the VSI within ±Vdc/2. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the efforts of repeatedly calculating the cost function. And the two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as close as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
Ab initio quantum chemical calculation of electron transfer matrix elements for large molecules
NASA Astrophysics Data System (ADS)
Zhang, Linda Yu; Friesner, Richard A.; Murphy, Robert B.
1997-07-01
Using a diabatic state formalism and pseudospectral numerical methods, we have developed an efficient ab initio quantum chemical approach to the calculation of electron transfer matrix elements for large molecules. The theory is developed at the Hartree-Fock level and validated by comparison with results in the literature for small systems. As an example of the power of the method, we calculate the electronic coupling between two bacteriochlorophyll molecules in various intermolecular geometries. Only a single self-consistent field (SCF) calculation on each of the monomers is needed to generate coupling matrix elements for all of the molecular pairs. The largest calculations performed, utilizing 1778 basis functions, required ~14 h on an IBM 390 workstation. This is considerably less CPU time than would be necessitated with a supermolecule adiabatic state calculation and a conventional electronic structure code.
Solution of plane cascade flow using improved surface singularity methods
NASA Technical Reports Server (NTRS)
Mcfarland, E. R.
1981-01-01
A solution method has been developed for calculating compressible inviscid flow through a linear cascade of arbitrary blade shapes. The method uses advanced surface singularity formulations which were adapted from those found in current external flow analyses. The resulting solution technique provides a fast flexible calculation for flows through turbomachinery blade rows. The solution method and some examples of the method's capabilities are presented.
Theoretical development and first-principles analysis of strongly correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chen
A variety of quantum many-body methods have been developed for studying strongly correlated electron systems. We have also proposed a computationally efficient and accurate approach, named the correlation matrix renormalization (CMR) method, to address these challenges. The initial implementation of the CMR method is designed for molecules, which offer theoretical advantages: small system size, transparent mechanisms, and strong correlation effects such as bond-breaking processes. The theoretical development and benchmark tests of the CMR method are included in this thesis. Meanwhile, the ground-state total energy is the most important property in electronic structure calculations. We also investigated an alternative approach to calculating the total energy and extended this method to the magnetic anisotropy energy (MAE) of ferromagnetic materials. In addition, another theoretical tool, dynamical mean-field theory (DMFT) on top of DFT, has been used in electronic structure calculations for an iridium oxide to study its phase transition, which results from an interplay of the d electrons' internal degrees of freedom.
Prediction of Quality Change During Thawing of Frozen Tuna Meat by Numerical Calculation I
NASA Astrophysics Data System (ADS)
Murakami, Natsumi; Watanabe, Manabu; Suzuki, Toru
A numerical calculation method has been developed to determine the optimum thawing method for minimizing the increase of metmyoglobin content (metMb%), an indicator of color change in frozen tuna meat during thawing. The calculation method comprises the following two steps: (a) calculation of the temperature history in each part of the frozen tuna meat during thawing by the control volume method, under the assumption of one-dimensional heat transfer, and (b) calculation of metMb% from the combination of the calculated temperature history, the Arrhenius equation, and a first-order reaction equation for the rate of increase of metMb%. Thawing experiments measuring the temperature history of frozen tuna meat were carried out under rapid-thawing and slow-thawing conditions to compare the experimental data with the calculated temperature history as well as the increase of metMb%; the calculated results agreed with the experimental data. The proposed simulation method should be useful for predicting the optimum thawing conditions in terms of metMb%.
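Step (b) can be sketched directly: integrate first-order kinetics with an Arrhenius rate constant along the discretized temperature history. The rate parameters and the approach-to-equilibrium form below are hypothetical placeholders, not the paper's fitted model:

```python
import math

def metmb_fraction(temps_k, dt_s, a_factor, ea_j_mol, m0=0.05, m_eq=1.0):
    """Integrate dm/dt = k(T) * (m_eq - m), a first-order approach to
    equilibrium, with Arrhenius k(T) = A * exp(-Ea/(R*T)) evaluated
    along a discretized temperature history."""
    R = 8.314462618
    m = m0
    for T in temps_k:
        k = a_factor * math.exp(-ea_j_mol / (R * T))
        m += k * (m_eq - m) * dt_s
    return m

# Hypothetical slow thaw: 12 h rising linearly from -20 C to +5 C
history = [253.15 + 25.0 * i / 43200 for i in range(43200)]
print(metmb_fraction(history, dt_s=1.0, a_factor=5.0e6, ea_j_mol=80e3))
```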
A Study on Multi-Swing Stability Analysis of Power System using Damping Rate Inversion
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Morii, Yuki; Oyama, Tsutomu; Hashiguchi, Takuhei; Goda, Tadahiro; Nomiyama, Fumitoshi; Kosugi, Narifumi
In recent years, much attention has been paid to nonlinear analysis methods in the field of power system stability analysis. For multi-swing stability analysis in particular, the unstable limit cycle has an important meaning as a stability margin. A high-speed method for calculating the stability boundary with respect to multi-swing stability is required because real-time calculation of the available transfer capability (ATC) is necessary to realize flexible wheeling trades. Therefore, the authors have developed a new method which can calculate the unstable limit cycle based on damping rate inversion. Using the unstable limit cycle, it is possible to predict the multi-swing stability at the time when the faulted transmission line is reclosed. The proposed method is tested on the Lorenz equation, a single-machine infinite-bus system model, and the IEEJ WEST10 system model.
Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.
2003-01-01
The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analyses methods neglected the important physics of steady loading on the analyses for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX, solves unsteady linearized Euler equations for calculating the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.
Conjugate-gradient optimization method for orbital-free density functional calculations.
Jiang, Hong; Yang, Weitao
2004-08-01
Orbital-free density functional theory as an extension of traditional Thomas-Fermi theory has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we developed a conjugate-gradient method for the numerical solution of spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredient of the method is an approximate line-search scheme and a collective treatment of two spin densities in the case of spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient.
Progress in unstructured-grid methods development for unsteady aerodynamic applications
NASA Technical Reports Server (NTRS)
Batina, John T.
1992-01-01
The development of unstructured-grid methods for the solution of the equations of fluid flow and what was learned over the course of the research are summarized. The focus of the discussion is on the solution of the time-dependent Euler equations including spatial discretizations, temporal discretizations, and boundary conditions. An example calculation with an implicit upwind method using a CFL number of infinity is presented for the Boeing 747 aircraft. The results were obtained in less than one hour CPU time on a Cray-2 computer, thus, demonstrating the speed and robustness of the capability. Additional calculations for the ONERA M6 wing demonstrate the accuracy of the method through the good agreement between calculated results and experimental data for a standard transonic flow case.
Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.
Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste
2009-12-21
The application of periodic density functional theory-based methods to the calculation of (95)Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for (95)Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.
Emergy Algebra: Improving Matrix Methods for Calculating Tranformities
Transformity is one of the core concepts in Energy Systems Theory and it is fundamental to the calculation of emergy. Accurate evaluation of transformities and other emergy per unit values is essential for the broad acceptance, application and further development of emergy method...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole-core transport code, employing the planar MOC solution based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments.
A Hybrid Numerical Method for Turbulent Mixing Layers. Degree awarded by Case Western Reserve Univ.
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.
2001-01-01
A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern day aircraft and also those of hypersonic vehicles currently under development. The method is intended for configurations in which a dominant structural feature provides an unsteady mechanism to drive the turbulent development in the mixing layer. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing dominated regions. A numerical technique was developed to enable the use of the hybrid RANS-LES method on stretched, non-Cartesian grids. Closure for the RANS equations was obtained using the Cebeci-Smith algebraic turbulence model in conjunction with the wall-function approach of Ota and Goldberg. The wall-function approach enabled a continuous computational grid from the RANS regions to the LES region. The LES equations were closed using the Smagorinsky subgrid scale model. The hybrid RANS-LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. Vortex shedding from the base region of a splitter plate separating the upstream flows was observed to eventually transition to turbulence. The location of the transition, however, was much further downstream than indicated by experiments. Actual LES calculations, performed in three spatial directions, also indicated vortex shedding, but the transition to turbulence was found to occur much closer to the beginning of the mixing section, which is in agreement with experimental observations. These calculations demonstrated that LES simulations must be performed in three dimensions. Comparisons of time-averaged axial velocities and turbulence intensities indicated reasonable agreement with experimental data.
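The Smagorinsky closure named above computes a subgrid eddy viscosity ν_t = (C_sΔ)²|S| from the resolved strain rate; a minimal 2-D sketch on a uniform grid:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """nu_t = (Cs*dx)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) built from
    central differences of the resolved velocity field."""
    dudy, dudx = np.gradient(u, dx, dx)  # axis 0 = y, axis 1 = x
    dvdy, dvdx = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Hypothetical shear layer u(y) = tanh(5y), v = 0, on a 64x64 grid
y = np.linspace(-1.0, 1.0, 64)
u = np.tile(np.tanh(5.0 * y)[:, None], (1, 64))
nu_t = smagorinsky_nu_t(u, np.zeros_like(u), dx=y[1] - y[0])
print(nu_t.max())
```

On a stretched, non-Cartesian grid the filter width Δ is no longer a single dx, which is one reason the abstract's special numerical treatment is needed.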
THE ONSITE ON-LINE CALCULATORS AND TRAINING FOR SUBSURFACE CONTAMINANT TRANSPORT SITE ASSESSMENT
EPA has developed a suite of on-line calculators called "OnSite" for assessing transport of environmental contaminants in the subsurface. The purpose of these calculators is to provide methods and data for common calculations used in assessing impacts from subsurface contaminatio...
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Ramsey, J.; Moser, A.
1975-01-01
A very general method for calculating compressible three-dimensional laminar and turbulent boundary layers on arbitrary wings is described. The method utilizes a nonorthogonal coordinate system for the boundary-layer calculations and includes a geometry package that represents the wing analytically. In the calculations all the geometric parameters of the coordinate system are accounted for. The Reynolds shear-stress terms are modeled by an eddy-viscosity formulation developed by Cebeci. The governing equations are solved by a very efficient two-point finite-difference method used earlier by Keller and Cebeci for two-dimensional flows and later by Cebeci for three-dimensional flows.
NASA Technical Reports Server (NTRS)
Sanger, Eugen
1932-01-01
In the present report the computation is actually carried through for the case of parallel spars of equal resistance in bending without direct loading, including plotting of the influence lines; for other cases the method of calculation is explained. The development of large size airplanes can be speeded up by accurate methods of calculation such as this.
The article reports the development of a new method of calculating electrical conditions in wire-duct electrostatic precipitation devices. The method, based on a numerical solution to the governing differential equations under a suitable choice of boundary conditions, accounts for...
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1989-01-01
In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. A number of actual experimental test results performed on Kevlar/epoxy composite laminates are compared to predictions calculated from the numerical method.
Zou, Cheng; Sun, Zhenguo; Cai, Dong; Muhammad, Salman; Zhang, Wenzeng; Chen, Qiang
2016-01-01
A method is developed to accurately and efficiently determine the spatial impulse response at specifically discretized observation points in the radiated field of 1-D linear ultrasonic phased array transducers. In contrast, previously adopted solutions only optimized the calculation procedure for a single rectangular transducer and required approximations or nonlinear calculation. In this research, an algorithm that follows an alternative approach to expedite the calculation of the spatial impulse response of a rectangular linear array is presented. The key assumption for this algorithm is that the transducer apertures are identical and linearly distributed with the same pitch on an infinite rigid baffle. Two points in the observation field that have the same position relative to two transducer apertures share the same spatial impulse response contributed by the corresponding transducer. The observation field is discretized specifically to meet this relationship of equality. The analytical expressions of the proposed algorithm, based on the specific selection of the observation points, are derived to remove redundant calculations. To evaluate the proposed methodology, the simulation results obtained from the proposed method and the classical summation method are compared. The outcomes demonstrate that the proposed strategy can speed up the calculation procedure, with a speed-up ratio that depends on the number of discrete points and the number of array transducers. This advance will be valuable in the development of faster linear ultrasonic phased array systems. PMID:27834799
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
On coronary MR angiography (CMRA), cardiac motion worsens image quality. To improve image quality, detection of cardiac motion, and especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images/R-R). The chronological signal change of each pixel of the images was evaluated with Fourier transformation. Noise reduction with subtraction and extraction processes was performed. To extract structures with greater motion, such as the coronary arteries, morphological filtering and labeling processes were added. Using these image-processing steps, individual coronary motion was extracted and individual coronary static time was calculated automatically. We compared images obtained with the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times were calculated with our method. The calculated coronary static time was shorter than that of the ordinary manual method, and scan time became about 10% longer than that of the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time with chronological Fourier transformation has the potential to improve the image quality of CMRA with easy processing.
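The core of such an approach, pixel-wise temporal Fourier analysis of the cine series to map fast-moving structures, can be sketched as follows (a minimal illustration in Python with an assumed frequency cutoff, not the authors' implementation):

    import numpy as np

    def motion_map(cine, f_lo=5):
        # cine: array of shape (n_frames, ny, nx); f_lo: assumed cutoff index.
        spectrum = np.abs(np.fft.rfft(cine, axis=0))   # temporal FFT per pixel
        return spectrum[f_lo:].sum(axis=0)             # high-frequency energy map

    # Pixels with high values mark moving structures such as coronary arteries;
    # frames in which these pixels are stable define the coronary static time.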
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms (the overlapping method and the probability-based method) for design space calculation were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, there is no abrupt change of probability at the edge of the design space calculated by the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
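At each candidate operating point, the probability-based algorithm reduces to adding simulated experimental error to the model prediction and counting the fraction of simulations that reach the standard; a minimal sketch (the process model, error level, and threshold below are assumptions for illustration, not values from the study):

    import numpy as np

    rng = np.random.default_rng(0)

    def pass_probability(predicted_index, sigma, standard, n_sim=10_000):
        # Simulate experimental error around the model-predicted quality index.
        simulated = predicted_index + rng.normal(0.0, sigma, size=n_sim)
        return np.mean(simulated >= standard)

    # The design space is the set of operating points, scanned with a step
    # length such as 0.02, where pass_probability exceeds the acceptable
    # probability threshold (e.g., 0.9).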
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods are difficult to apply to the short circuit current of a DFIG in engineering practice because of their complexity. A short circuit calculation method based on a pre-computed surface was therefore proposed, in which the surface of short circuit current is developed as a function of the calculation impedance and the open circuit voltage, and the short circuit currents are derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure of DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
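In essence, the detailed DFIG response is tabulated offline and replaced by a fast surface lookup during protection studies; a minimal sketch with a toy surface (the placeholder formula stands in for the detailed LVRT model):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Offline: tabulate short circuit current I(Z, U) over a grid (toy values).
    Z = np.linspace(0.1, 2.0, 20)    # calculation impedance [pu]
    U = np.linspace(0.2, 1.0, 9)     # open circuit voltage [pu]
    ZZ, UU = np.meshgrid(Z, U, indexing="ij")
    I_surface = UU / ZZ              # placeholder for the detailed DFIG model

    # Online: fast interpolation on the pre-computed surface.
    lookup = RegularGridInterpolator((Z, U), I_surface)
    print(lookup([[0.5, 0.9]]))      # current at Z = 0.5 pu, U = 0.9 pu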
NASA Astrophysics Data System (ADS)
Miro, M.; Famiglietti, J. S.
2016-12-01
In California, traditional water management has focused heavily on surface water, leaving many basins in a state of critical overdraft and lacking in established frameworks for groundwater management. However, new groundwater legislation, the 2014 Sustainable Groundwater Management Act (SGMA), presents an important opportunity for water managers and hydrologists to develop novel methods for managing statewide groundwater resources. Integrating scientific advances in groundwater monitoring with hydrologically-sound methods can go a long way in creating a system that can better govern the resource. SGMA mandates that groundwater management agencies employ the concept of sustainable yield as their primary management goal but does not clearly define a method to calculate it. This study will develop a hydrologically-based method to quantify sustainable yield that follows the threshold framework under SGMA. Using this method, sustainable yield will be calculated for two critically-overdrafted groundwater basins in California's Central Valley. The method will also utilize groundwater monitoring data and downscaled remote sensing estimates of groundwater storage change from NASA's GRACE satellite to illustrate why data matter for successful management. This method can be used as a basis for the development of SGMA's groundwater sustainability plans (GSPs) throughout California.
NASA Astrophysics Data System (ADS)
Pokhmurska, H.; Maksymovych, O.; Dzyubyk, A.; Dzyubyk, L.
2018-06-01
Methods are developed for calculating the trajectories and growth rates of curvilinear fatigue cracks in isotropic and composite plate structural elements under cyclic loading along straight or curvilinear paths. For isotropic and anisotropic materials, the methods are developed on the basis of a force fracture criterion with the additional application of fatigue fracture diagrams. To find the change in crack shape during loading, a step-by-step method was used. At each stage, the growth direction of all crack tips and the lengths of their arcs were found on the basis of stress intensity factors determined by the method of singular integral equations. The results of calculations of the growth of a system of cracks are presented.
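The step-by-step marching idea can be illustrated with a generic Paris-type growth law for a center crack in an infinite plate; this sketch stands in for, and is much simpler than, the authors' singular-integral-equation formulation:

    import numpy as np

    def paris_growth(a0, delta_sigma, C, m, n_steps, dN):
        # March crack half-length a with da/dN = C*(dK)^m, dK = dsigma*sqrt(pi*a).
        a = np.empty(n_steps + 1)
        a[0] = a0
        for i in range(n_steps):
            dK = delta_sigma * np.sqrt(np.pi * a[i])   # stress intensity factor range
            a[i + 1] = a[i] + C * dK**m * dN           # increment per block of cycles
        return a

    # In the paper's method the growth direction at each tip is also updated from
    # the computed stress intensity factors before taking the next step.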
Development of a Hybrid RANS/LES Method for Turbulent Mixing Layers
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli
2001-01-01
Significant research has been underway for several years in NASA Glenn Research Center's nozzle branch to develop advanced computational methods for simulating turbulent flows in exhaust nozzles. The primary efforts of this research have concentrated on improving our ability to calculate the turbulent mixing layers that dominate flows both in the exhaust systems of modern-day aircraft and in those of hypersonic vehicles under development. As part of these efforts, a hybrid numerical method was recently developed to simulate such turbulent mixing layers. The method developed here is intended for configurations in which a dominant structural feature provides an unsteady mechanism to drive the turbulent development in the mixing layer. Interest in Large Eddy Simulation (LES) methods has increased in recent years, but applying an LES method to calculate the wide range of turbulent scales from small eddies in the wall-bounded regions to large eddies in the mixing region is not yet possible with current computers. As a result, the hybrid method developed here uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section and uses an LES procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS-LES method on stretched, non-Cartesian grids. With this technique, closure for the RANS equations is obtained by using the Cebeci-Smith algebraic turbulence model in conjunction with the wall-function approach of Ota and Goldberg. The LES equations are closed using the Smagorinsky subgrid scale model. Although the function of the Cebeci-Smith model to replace all of the turbulent stresses is quite different from that of the Smagorinsky subgrid model, which only replaces the small subgrid turbulent stresses, both are eddy viscosity models and both are derived at least in part from mixing-length theory. The similar formulation of these two models enables the RANS and LES equations to be solved with a single solution scheme and computational grid. The hybrid RANS-LES method has been applied to a benchmark compressible mixing layer experiment in which two isolated supersonic streams, separated by a splitter plate, provide the flows to a constant-area mixing section. Although the configuration is largely two-dimensional in nature, three-dimensional calculations were found to be necessary to enable disturbances to develop in three spatial directions and to transition to turbulence. The flow in the initial part of the mixing section consists of a periodic vortex shedding downstream of the splitter plate trailing edge. This organized vortex shedding then rapidly transitions to a turbulent structure, which is very similar to the flow development observed in the experiments. Although the qualitative nature of the large-scale turbulent development in the entire mixing section is captured well by the LES part of the current hybrid method, further efforts are planned to directly calculate a greater portion of the turbulence spectrum and to limit the subgrid scale modeling to only the very small scales. This will be accomplished by the use of higher accuracy solution schemes and more powerful computers, measured both in speed and memory capabilities.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
Adaptive methods for nonlinear structural dynamics and crashworthiness analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1993-01-01
The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
Neutron skyshine calculations for the PDX tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, F.J.; Nigg, D.W.
1979-01-01
The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce air scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.
Kim, Huiyong; Hwang, Sung June; Lee, Kwang Soon
2015-02-03
Among various CO2 capture processes, the aqueous amine-based absorption process is considered the most promising for near-term deployment. However, the performance evaluation of newly developed solvents still requires complex and time-consuming procedures, such as pilot plant tests or the development of a rigorous simulator. The absence of accurate and simple calculation methods for the energy performance at an early stage of process development has lengthened, and increased the expense of, the development of economically feasible CO2 capture processes. In this paper, a novel but simple method to reliably calculate the regeneration energy in a standard amine-based carbon capture process is proposed. Careful examination of stripper behaviors and exploitation of energy balance equations around the stripper allowed for calculation of the regeneration energy using only vapor-liquid equilibrium and caloric data. The reliability of the proposed method was confirmed by comparison to rigorous simulations for two well-known solvents, monoethanolamine (MEA) and piperazine (PZ). The proposed method can predict the regeneration energy at various operating conditions with greater simplicity, greater speed, and higher accuracy than the methods proposed in previous studies. This enables faster and more precise screening of various solvents and faster optimization of process variables, and can eventually accelerate the development of economically deployable CO2 capture processes.
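As a rough illustration of the energy-balance idea, the regeneration duty per unit of CO2 captured is often decomposed into desorption, sensible-heat, and stripping-steam terms; the decomposition and numbers below are a textbook-style sketch with assumed ballpark MEA figures, not the authors' formulation:

    def regeneration_energy(dh_abs, cp_solvent, delta_T, solvent_per_co2,
                            steam_per_co2, dh_vap):
        # dh_abs: heat of CO2 desorption [kJ/kg CO2]; cp_solvent [kJ/(kg K)];
        # delta_T: lean-rich temperature approach [K]; solvent_per_co2 [kg/kg];
        # steam_per_co2: stripping steam demand [kg/kg]; dh_vap [kJ/kg H2O].
        q_desorption = dh_abs
        q_sensible = cp_solvent * delta_T * solvent_per_co2
        q_steam = steam_per_co2 * dh_vap
        return q_desorption + q_sensible + q_steam

    # Assumed 30 wt% MEA ballpark inputs give roughly 4.5 GJ per tonne of CO2:
    print(regeneration_energy(1900, 3.8, 10, 20, 0.8, 2260))  # about 4468 kJ/kg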
Yu, Xuefei; Lin, Liangzhuo; Shen, Jie; Chen, Zhi; Jian, Jun; Li, Bin; Xin, Sherman Xuegang
2018-01-01
The mean amplitude of glycemic excursions (MAGE) is an essential index for assessing glycemic variability and is treated as a key reference for blood glucose control in the clinic. However, the traditional "ruler and pencil" manual method for calculating MAGE is time-consuming and prone to error because of the huge data size, making the development of a robust computer-aided program an urgent requirement. Although several software products are available as alternatives to manual calculation, poor agreement among them has been reported. Therefore, more studies are required in this field. In this paper, we developed a mathematical algorithm based on integer nonlinear programming. Following the proposed mathematical method, an open-code computer program named MAGECAA v1.0 was developed and validated. The results of the statistical analysis indicated that the developed program was robust compared to the manual method. The agreement between the developed program and currently available popular software is satisfactory, indicating that concern about disagreement among different software products is unnecessary. The open-code programmable algorithm is an additional resource for peers interested in related methodological studies in the future.
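For reference, the classical manual definition that such programs automate can be sketched directly: find the turning points of the glucose trace and average the excursion amplitudes that exceed one standard deviation (a simple turning-point sketch, not the paper's integer-nonlinear-programming algorithm):

    import numpy as np

    def mage(glucose):
        # Classical MAGE: mean of excursion amplitudes exceeding one SD.
        g = np.asarray(glucose, dtype=float)
        sd = g.std()
        d = np.sign(np.diff(g))                       # slope signs between samples
        turns = np.where(d[1:] * d[:-1] < 0)[0] + 1   # peaks and nadirs
        pivots = np.concatenate(([0], turns, [len(g) - 1]))
        amplitudes = np.abs(np.diff(g[pivots]))
        valid = amplitudes[amplitudes > sd]           # count only excursions > 1 SD
        return valid.mean() if valid.size else 0.0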
Zhao, Xin; Liu, Jun; Yao, Yong-Xin; ...
2018-01-23
Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we have developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solid systems. By using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equation of state of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data in comparison with those from density functional theory and Hartree-Fock calculations.
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
24 CFR Appendix II to Subpart C of... - Development of Standards; Calculation Methods
Code of Federal Regulations, 2012 CFR
2012-04-01
...; Calculation Methods I. Background Information Concerning the Standards (a) Thermal Radiation: (1) Introduction... and structures in the event of fire. The resulting fireball emits thermal radiation which is absorbed... radiation being emitted. The radiation can cause severe burn injuries and even death to exposed persons...
Mutual influence of molecular diffusion in gas and surface phases
NASA Astrophysics Data System (ADS)
Hori, Takuma; Kamino, Takafumi; Yoshimoto, Yuta; Takagi, Shu; Kinefuchi, Ikuya
2018-01-01
We develop molecular transport simulation methods that simultaneously deal with gas- and surface-phase diffusions to determine the effect of surface diffusion on the overall diffusion coefficients. The phenomenon of surface diffusion is incorporated into the test particle method and the mean square displacement method, which are typically employed only for gas-phase transport. It is found that for a simple cylindrical pore, the diffusion coefficients in the presence of surface diffusion calculated by these two methods show good agreement. We also confirm that both methods reproduce the analytical solution. Then, the diffusion coefficients for ink-bottle-shaped pores are calculated using the developed method. Our results show that surface diffusion assists molecular transport in the gas phase. Moreover, the surface tortuosity factor, which is known to be uniquely determined by physical structure, is influenced by the presence of gas-phase diffusion. This mutual influence of gas-phase diffusion and surface diffusion indicates that their simultaneous calculation is necessary for an accurate evaluation of the diffusion coefficients.
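For the mean square displacement route, the diffusion coefficient follows from the Einstein relation D = MSD/(2 d t); a minimal sketch assuming recorded molecular trajectories (generic, not the authors' code):

    import numpy as np

    def diffusion_coefficient(positions, dt, dim=3):
        # positions: array of shape (n_steps, n_molecules, dim).
        disp = positions - positions[0]               # displacement from start
        msd = (disp**2).sum(axis=-1).mean(axis=1)     # MSD(t), averaged over molecules
        t = dt * np.arange(len(msd))
        slope = np.polyfit(t[1:], msd[1:], 1)[0]      # fit MSD = slope * t
        return slope / (2 * dim)                      # Einstein relation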
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a core of a BWR in a three-dimensional geometry model, but has difficulties in fine geometrical modelling and demands huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to give a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
Steponas Kolupaila's contribution to hydrological science development
NASA Astrophysics Data System (ADS)
Valiuškevičius, Gintaras
2017-08-01
Steponas Kolupaila (1892-1964) was an important figure in 20th century hydrology and one of the pioneers of scientific water gauging in Europe. His research on the reliability of hydrological data and measurement methods was particularly important and contributed to the development of empirical hydrological calculation methods. Kolupaila was one of the first to standardise water-gauging methods internationally. He created several original hydrological and hydraulic calculation methods (his discharge assessment method for the winter period was particularly significant). His innate abilities and frequent travel made Kolupaila a universal specialist in various fields and an active public figure. He revealed his multilayered scientific and cultural experiences in his most famous book, Bibliography of Hydrometry. This book introduced the unique European hydrological-measurement and computation methods to the community of world hydrologists at that time and allowed the development and adaptation of these methods across the world.
NASA Technical Reports Server (NTRS)
Jefferys, W. H.
1981-01-01
A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
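The correlation structure can be pictured as a shared lognormal factor from the reference resonance plus an independent factor per resonance; a hedged sketch of such sampling (the parametrization is illustrative, not the paper's exact formulation):

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_strengths(central, frac_unc, frac_ref, n_samples):
        # central: central resonance strengths; frac_unc: their fractional errors;
        # frac_ref: fractional error of the common reference (drives correlation).
        shared = rng.normal(0.0, frac_ref, size=(n_samples, 1))   # one draw per sample
        indep = rng.normal(0.0, frac_unc, size=(n_samples, len(central)))
        return central * np.exp(shared + indep)

    # Reaction rates computed from each sampled set then carry correlated
    # uncertainties through the Monte Carlo rate calculation.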
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
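The placement step can be sketched as an exact integer grid shift followed by a fractional shift; in this illustration cubic spline interpolation stands in for the paper's piecewise third-order Lagrange filter, and the integer shift is done directly rather than by Fourier-domain convolution with a unit impulse:

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def place_seed(kernel, grid_spacing, seed_pos):
        # Total shift in voxel units, split into integer and fractional parts.
        total = np.asarray(seed_pos) / grid_spacing
        integer = np.round(total).astype(int)
        fractional = total - integer
        moved = np.roll(kernel, tuple(integer), axis=(0, 1, 2))  # exact integer shift
        return nd_shift(moved, fractional, order=3)              # fractional shift

    # Summing the placed kernels over all seeds yields the total dose grid.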
Dai, Peng; Jiang, Nan; Tan, Ren-Xiang
2016-01-01
Elucidation of the absolute configuration of chiral molecules, including structurally complex natural products, remains a challenging problem in organic chemistry. A reliable method for assigning absolute stereostructure is to combine experimental circular dichroism (CD) techniques, such as electronic and vibrational CD (ECD and VCD), with quantum mechanics (QM) calculations of ECD and VCD. Traditional QM methods, together with their continuing development, have made such calculations increasingly applicable and accurate. Taking some chiral natural products with diverse conformations as examples, this review describes the basic concepts and new developments of QM approaches for ECD and VCD calculations in the solution and solid states.
NASA Technical Reports Server (NTRS)
Chaney, William S.
1961-01-01
A theoretical study has been made of molybdenum dioxide and molybdenum trioxide in order to extend the knowledge of factors involved in the oxidation of molybdenum. New methods were developed for calculating the lattice energies based on electrostatic valence theory, and the coulombic, polarization, van der Waals, and repulsion energies were calculated. The crystal structure was examined and structure details were correlated with lattice energy.
NASA Astrophysics Data System (ADS)
Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin
2017-02-01
Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results of MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
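A minimal sketch of the dual-wavelength absorbance ratio analysis: the ratio of absorbances at the base and acid peaks of the indicator is calibrated against known pH values (the calibration data below are assumed for illustration, not measurements from the study):

    import numpy as np

    # Calibration: absorbance ratio R = A(base peak)/A(acid peak) vs. known pH.
    pH_cal = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
    R_cal = np.array([0.20, 0.35, 0.62, 1.05, 1.60, 2.10])   # assumed readings

    # Fit pH as a function of the ratio (simple linear model for the sketch).
    slope, intercept = np.polyfit(R_cal, pH_cal, 1)

    def measure_pH(A_base, A_acid):
        # Ratioing the two wavelengths cancels film thickness and lamp drift.
        return slope * (A_base / A_acid) + intercept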
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuta, Y; Tohoku University Graduate School of Medicine, Sendal, Miyagi; Kadoya, N
Purpose: In this study, we developed a system to calculate, in real time, a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head and neck and prostate volumetric modulated arc therapy (VMAT), without an additional treatment planning system calculation. Methods: An original system for dosimetric error calculation based on Clarkson dose calculation was developed in MATLAB (MathWorks, Natick, MA). Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarged the aperture size by 1.0 mm. Second, the error-induced 3D dose was generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, between the error-induced 3D dose calculated by our method and that calculated by the TPS, the 3D gamma passing rates (0.5%/2 mm, global) were 97.6±0.6% and 98.0±0.4%. The percentage dose changes in the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and of generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pre-treatment patient QA dosimetry checks.
Observations and Thermochemical Calculations for Hot-Jupiter Atmospheres
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver; Cubillos, Patricio; Stemm, Madison
2015-01-01
I present Spitzer eclipse observations for WASP-14b and WASP-43b, an open source tool for thermochemical equilibrium calculations, and components of an open source tool for atmospheric parameter retrieval from spectroscopic data. WASP-14b is a planet that receives high irradiation from its host star, yet, although theory does not predict it, the planet hosts a thermal inversion. The WASP-43b eclipses have signal-to-noise ratios of ~25, one of the largest among exoplanets. To assess these planets' atmospheric composition and thermal structure, we developed an open-source Bayesian Atmospheric Radiative Transfer (BART) code. My dissertation tasks included developing a Thermochemical Equilibrium Abundances (TEA) code, implementing the eclipse geometry calculation in BART's radiative transfer module, and generating parameterized pressure and temperature profiles so the radiative-transfer module can be driven by the statistical module. To initialize the radiative-transfer calculation in BART, TEA calculates the equilibrium abundances of gaseous molecular species at a given temperature and pressure. It uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA, written in Python, is modular, documented, and available to the community via the open-source development site GitHub.com. Support for this work was provided by NASA Headquarters under the NASA Earth and Space Science Fellowship Program, grant NNX12AL83H, by NASA through an award issued by JPL/Caltech, and through the Science Mission Directorate's Planetary Atmospheres Program, grant NNX12AI69G.
Aljasser, Faisal; Vitevitch, Michael S
2018-02-01
A number of databases (Storkel Behavior Research Methods, 45, 1159-1167, 2013) and online calculators (Vitevitch & Luce Behavior Research Methods, Instruments, and Computers, 36, 481-487, 2004) have been developed to provide statistical information about various aspects of language, and these have proven to be invaluable assets to researchers, clinicians, and instructors in the language sciences. The number of such resources for English is quite large and continues to grow, whereas the number of such resources for other languages is much smaller. This article describes the development of a Web-based interface to calculate phonotactic probability in Modern Standard Arabic (MSA). A full description of how the calculator can be used is provided. It can be freely accessed at http://phonotactic.drupal.ku.edu/.
Automated Routines for Calculating Whole-Stream Metabolism: Theoretical Background and User's Guide
Bales, Jerad D.; Nardi, Mark R.
2007-01-01
In order to standardize methods and facilitate rapid calculation and archival of stream-metabolism variables, the Stream Metabolism Program was developed to calculate gross primary production, net ecosystem production, respiration, and selected other variables from continuous measurements of dissolved-oxygen concentration, water temperature, and other user-supplied information. Methods for calculating metabolism from continuous measurements of dissolved-oxygen concentration and water temperature are fairly well known, but a standard set of procedures and computation software for all aspects of the calculations were not available previously. The Stream Metabolism Program addresses this deficiency with a stand-alone executable computer program written in Visual Basic.NET, which runs in the Microsoft Windows environment. All equations and assumptions used in the development of the software are documented in this report. Detailed guidance on application of the software is presented, along with a summary of the data required to use the software. Data from either a single station or paired (upstream, downstream) stations can be used with the software to calculate metabolism variables.
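The single-station core of such calculations follows the oxygen mass balance dC/dt = GPP − ER + k(Cs − C); a much-simplified sketch under stated assumptions (hourly data, assumed dark hours, known reaeration coefficient; the program documented here handles many more details):

    import numpy as np

    def daily_metabolism(do, do_sat, k, dt_hours=1.0):
        # do, do_sat: dissolved oxygen and saturation [mg/L]; k: reaeration [1/h].
        ddo = np.diff(do) / dt_hours                # rate of DO change [mg/L/h]
        nem = ddo - k * (do_sat[:-1] - do[:-1])     # metabolism = change minus reaeration

        night = slice(0, 8)                         # assumed dark hours in the series
        er = -nem[night].mean() * 24                # respiration, extrapolated to 24 h
        gpp = nem.sum() * dt_hours + er             # net production plus respiration
        return gpp, er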
NASA Astrophysics Data System (ADS)
Zhao, Hui; Qu, Weilu; Qiu, Weiting
2018-03-01
In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; then the Choquet integral is introduced to calculate the comprehensive evaluation value of each city from the bottom up. Finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
Zone plate method for electronic holographic display using resolution redistribution technique.
Takaki, Yasuhiro; Nakamura, Junya
2011-07-18
The resolution redistribution (RR) technique can increase the horizontal viewing-zone angle and screen size of electronic holographic display. The present study developed a zone plate method that would reduce hologram calculation time for the RR technique. This method enables calculation of an image displayed on a spatial light modulator by performing additions of the zone plates, while the previous calculation method required performing the Fourier transform twice. The derivation and modeling of the zone plate are shown. In addition, the look-up table approach was introduced for further reduction in computation time. Experimental verification using a holographic display module based on the RR technique is presented.
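The accumulation idea can be pictured directly: each object point contributes a spherical wave, whose pattern on the SLM plane is a zone plate, and the hologram is their sum (geometry and sampling assumed for illustration; the paper additionally uses a look-up table of precomputed zone plates):

    import numpy as np

    def hologram(points, amplitudes, x, y, wavelength):
        # Sum zone plates (spherical waves) from object points on the SLM grid.
        k = 2 * np.pi / wavelength
        X, Y = np.meshgrid(x, y)
        field = np.zeros_like(X, dtype=complex)
        for (px, py, pz), a in zip(points, amplitudes):
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += a * np.exp(1j * k * r) / r      # one zone plate per point
        return field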
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This, combined with the recent capability of predicting losses in spur gears of nonstandard proportions, allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
Real-Time Stability Margin Measurements for X-38 Robustness Analysis
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Stachowiak, Susan J.
2005-01-01
A method has been developed for real-time stability margin measurement calculations. The method relies on a tailored forced excitation targeted to a specific frequency range. Computation of the frequency response is matched to the specific frequencies contained in the excitation. A recursive Fourier transformation is used to make the method compatible with real-time calculation. The method was incorporated into the X-38 nonlinear simulation and applied to an X-38 robustness test. X-38 stability margins were calculated for different variations in aerodynamic and mass properties over the vehicle flight trajectory. The new method showed results comparable to those of more traditional stability analysis techniques while providing more complete coverage and increased efficiency.
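The recursive Fourier transformation can be sketched as a sliding DFT, which updates a single frequency bin in constant time per sample and is therefore compatible with real-time operation (a generic sketch, not the flight code):

    import numpy as np

    class SlidingDFT:
        # Recursively track one DFT bin of the most recent N samples.
        def __init__(self, n_window, bin_k):
            self.n = n_window
            self.twiddle = np.exp(2j * np.pi * bin_k / n_window)
            self.buffer = np.zeros(n_window)
            self.idx = 0
            self.value = 0.0 + 0.0j

        def update(self, x_new):
            x_old = self.buffer[self.idx]
            self.buffer[self.idx] = x_new
            self.idx = (self.idx + 1) % self.n
            # Remove the oldest sample, add the newest, rotate one bin step.
            self.value = (self.value + x_new - x_old) * self.twiddle
            return self.value

    # Dividing the response bin by the excitation bin at each targeted frequency
    # yields the frequency response from which stability margins are read off.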
Development of a software package for solid-angle calculations using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, which are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, in which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which, the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4. It shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4.
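The basic Monte Carlo estimate is simple; a sketch for the solid angle subtended by a coaxial disk detector at a point source (the geometry is illustrative, and the package's variance reduction technique is omitted):

    import numpy as np

    rng = np.random.default_rng(2)

    def solid_angle_disk(radius, distance, n=1_000_000):
        # Sample isotropic directions: cos(theta) uniform on [-1, 1].
        cos_t = rng.uniform(-1.0, 1.0, n)
        hits = cos_t > 0                                 # heading toward the disk plane
        # Radius where each ray crosses the plane z = distance.
        r_plane = distance * np.sqrt(1 - cos_t[hits]**2) / cos_t[hits]
        return 4 * np.pi * np.count_nonzero(r_plane <= radius) / n

    # Check against the exact on-axis formula 2*pi*(1 - d/sqrt(d^2 + R^2)).
    print(solid_angle_disk(1.0, 1.0), 2 * np.pi * (1 - 1 / np.sqrt(2)))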
Simplified methods for calculating photodissociation rates
NASA Technical Reports Server (NTRS)
Shimazaki, T.; Ogawa, T.; Farrell, B. C.
1977-01-01
Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations for an individual band, but the total transmission and the total dissociation coefficients integrated over the entire SR (Schumann-Runge) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the method. A simpler method is developed for the purpose of reducing the computation time and computer memory size necessary for storing coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.
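Outside the band systems, the dissociation coefficient is the spectral integral J = ∫ σ(λ) φ(λ) F(λ) exp(−τ(λ)) dλ; a minimal sketch with assumed tabulated inputs on a common wavelength grid:

    import numpy as np

    def photodissociation_rate(wl, flux_toa, sigma, quantum_yield, tau):
        # wl: wavelength grid; flux_toa: top-of-atmosphere solar flux;
        # tau: optical depth along the slant path (all on the same grid).
        flux = flux_toa * np.exp(-tau)                     # transmitted solar flux
        return np.trapz(sigma * quantum_yield * flux, wl)  # spectral integration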
CAE "FOCUS" for modelling and simulating electron optics systems: development and application
NASA Astrophysics Data System (ADS)
Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor
2017-02-01
Electron optics is a theoretical basis of scientific instrument engineering. Mathematical simulation of the underlying processes is the foundation of contemporary design of complicated electron-optics devices. Problems of numerical mathematical simulation are effectively solved by CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method for charged particle trajectory computation with control of calculation accuracy, and original methods for finding the conditions of angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent (sub)task. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
NASA Technical Reports Server (NTRS)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
Design, analysis and test verification of advanced encapsulation systems
NASA Technical Reports Server (NTRS)
Garcia, A., III
1984-01-01
Investigations into transparent conductive polymers were begun. Polypyrrole was electrochemically deposited, but the film characteristics were poor. A proprietary polymer material supplied by Polaroid was evaluated and showed promise as a readily processable material. A method was developed for calculating the magnitude and location of the maximum electric field for the family of solar-cell-like shapes. A method for calculating the lines of force for three dimensional electric fields was developed and applied to a geometry of interest to the photovoltaic program.
NASA Astrophysics Data System (ADS)
Besemer, Abigail E.
Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site as well as the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA approved software for internal dosimetry calculates doses based on the MIRD methodology which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. Retrospective dosimetry was also calculated for various pre-clinical and clinical patients, and large dosimetric differences were found between conventional organ-level methods and the patient-specific voxelized methods described in this work. The dosimetric impact of various steps in the 3D voxelized dosimetry process were evaluated including quantitative imaging acquisition, image coregistration, voxel resampling, ROI contouring, CT-based material segmentation, and pharmacokinetic fitting. Finally, a multi-objective treatment planning optimization framework was developed for multi-radiopharmaceutical combination therapies.
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew
2014-11-01
We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
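The underlying calculation can be sketched with a toy H-O system: minimize the total Gibbs energy of the mixture subject to element conservation (illustrative species data and a generic SLSQP optimizer here, rather than TEA's iterative Lagrangian scheme):

    import numpy as np
    from scipy.optimize import minimize

    # Toy system: species H2, O2, H2O; g0 = dimensionless standard Gibbs
    # energies g0_i/(R*T) (assumed values for illustration only).
    g0 = np.array([0.0, 0.0, -30.0])
    A = np.array([[2, 0, 2],    # H atoms per molecule
                  [0, 2, 1]])   # O atoms per molecule
    b = np.array([2.0, 1.0])    # total element abundances (2 H, 1 O)

    def gibbs(n):
        # Total dimensionless Gibbs energy of an ideal-gas mixture.
        n = np.clip(n, 1e-12, None)
        return np.sum(n * (g0 + np.log(n / n.sum())))

    cons = {"type": "eq", "fun": lambda n: A @ n - b}   # element conservation
    res = minimize(gibbs, np.full(3, 0.5), constraints=[cons],
                   bounds=[(1e-12, None)] * 3, method="SLSQP")
    print("equilibrium mole numbers:", res.x)           # mostly H2O, as expected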
A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique
Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie
2015-01-01
Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on an Elekta Synergy linac. Projection data were acquired with high and low energy X-rays to set up a basis material decomposition model. Virtual phantom simulations and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were scanned twice, with the high and low energy X-rays, respectively. The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated from these CBCT images. Results. The difference between the calculated and theoretical values was within 2% and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and noticeably reduced the beam hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It acquires more accurate values and potentially provides a platform for dose calculation. PMID:26346510
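The decomposition step amounts to solving a small linear system per pixel once the basis coefficients at the two energies are known; a hedged sketch with assumed coefficients (not values from the XVI system, and applied to attenuation images for simplicity, whereas the paper decomposes projection data before reconstruction):

    import numpy as np

    # Assumed basis attenuation coefficients at low/high energy for water, bone.
    M = np.array([[0.25, 0.45],    # low energy:  [water, bone]
                  [0.18, 0.25]])   # high energy: [water, bone]

    def decompose(mu_low, mu_high):
        # Solve mu = M @ a for basis coefficients a = (a_water, a_bone) per pixel.
        mu = np.stack([mu_low.ravel(), mu_high.ravel()])
        a = np.linalg.solve(M, mu)          # 2x2 solve, all pixels at once
        return a[0].reshape(mu_low.shape), a[1].reshape(mu_low.shape)

    # Electron density then follows as a weighted sum of the basis coefficients.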
Next Generation Nuclear Plant Methods Research and Development Technical Program Plan -- PLN-2498
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard R. Schultz; Abderrafi M. Ougouag; David W. Nigg
2008-09-01
One of the great challenges of designing and licensing the Very High Temperature Reactor (VHTR) is to confirm that the intended VHTR analysis tools can be used confidently to make decisions and to assure all that the reactor systems are safe and meet the performance objectives of the Generation IV Program. The research and development (R&D) projects defined in the Next Generation Nuclear Plant (NGNP) Design Methods Development and Validation Program will ensure that the tools used to perform the required calculations and analyses can be trusted. The Methods R&D tasks are designed to ensure that the calculational envelope of the tools used to analyze the VHTR reactor systems encompasses, or is larger than, the operational and transient envelope of the VHTR itself. The Methods R&D focuses on the development of tools to assess the neutronic and thermal fluid behavior of the plant. The fuel behavior and fission product transport models are discussed in the Advanced Gas Reactor (AGR) program plan. Various stress analysis and mechanical design tools will also need to be developed and validated and will ultimately also be included in the Methods R&D Program Plan. The calculational envelope of the neutronics and thermal-fluids software tools intended to be used on the NGNP is defined by the scenarios and phenomena that these tools can calculate with confidence. The software tools can only be used confidently when the results they produce have been shown to be in reasonable agreement with first-principle results, thought-problems, and data that describe the “highly ranked” phenomena inherent in all operational conditions and important accident scenarios for the VHTR.
Localized-overlap approach to calculations of intermolecular interactions
NASA Astrophysics Data System (ADS)
Rob, Fazle
Symmetry-adapted perturbation theory (SAPT) based on the density functional theory (DFT) description of the monomers [SAPT(DFT)] is one of the most robust tools for computing intermolecular interaction energies. Currently, one can use the SAPT(DFT) method to calculate interaction energies of dimers consisting of about a hundred atoms. To remove the methodological and technical limits and extend the size of the systems that can be calculated with the method, a novel approach has been proposed that redefines the electron densities and polarizabilities in a localized way. In the new method, accurate but computationally expensive quantum-chemical calculations are applied only to the regions where they are necessary; for other regions, where overlap effects of the wave functions are negligible, inexpensive asymptotic techniques are used. Unlike other hybrid methods, this new approach is mathematically rigorous. The main benefit of this method is that the calculation scales linearly with increasing system size, and therefore this approach will be denoted as local-overlap SAPT(DFT), or LSAPT(DFT). As a byproduct of developing LSAPT(DFT), some important problems concerning distributed molecular response, in particular the unphysical charge-flow terms, were eliminated. Additionally, to illustrate the capabilities of SAPT(DFT), a potential energy function has been developed for an energetic molecular crystal of 1,1-diamino-2,2-dinitroethylene (FOX-7), where excellent agreement with the experimental data has been found.
Development and Application of a Parallel LCAO Cluster Method
NASA Astrophysics Data System (ADS)
Patton, David C.
1997-08-01
CPU intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message passing paradigm. Identification of the parts of the code that are composed of many independent compute-intensive steps is discussed in detail as they are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C_14H_10) and tetracene (C_18H_12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations is analyzed to determine the efficiency of the code. In addition, performance and usage issues for MPI and PVM are presented.
Improved and standardized method for assessing years lived with disability after injury
Polinder, S; Lyons, RA; Lund, J; Ditsuwan, V; Prinsloo, M; Veerman, JL; van Beeck, EF
2012-01-01
Objective: To develop a standardized method for calculating years lived with disability (YLD) after injury. Methods: The method developed consists of obtaining data on injury cases seen in emergency departments as well as injury-related hospital admissions, using the EUROCOST system to link the injury cases to disability information and employing empirical data to describe functional outcomes in injured patients. Findings: Overall, 87 weights and proportions for 27 injury diagnoses involving lifelong consequences were included in the method. Almost all of the injuries investigated (96–100%) could be assigned to EUROCOST categories. The mean number of YLD per case of injury varied with the country studied. Use of the novel method resulted in estimated burdens of injury that were 3 to 8 times higher, in terms of YLD, than the corresponding estimates produced using the conventional methods employed in global burden of disease studies, which employ disability-adjusted life years. Conclusion: The novel method for calculating YLD after injury can be applied in different settings, overcomes some limitations of the method used to calculate the global burden of disease, and allows more accurate estimates of the population burden of injury. PMID:22807597
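The core arithmetic of a YLD estimate is simple enough to show in a few lines. The sketch below is a hypothetical illustration of the general scheme (cases × proportion with lifelong consequences × disability weight × duration); the diagnosis groups and all numbers are invented, not the paper's 87 weights and proportions.

```python
# Illustrative YLD calculation in the spirit of the EUROCOST-based method:
# YLD = cases x proportion with lifelong consequences x disability weight x duration.
# All numbers below are hypothetical, not taken from the study.

injury_groups = [
    # (name, annual cases, proportion lifelong, disability weight, mean duration in years)
    ("skull-brain injury", 1200, 0.30, 0.35, 40.0),
    ("hip fracture",       3500, 0.15, 0.20, 15.0),
    ("wrist fracture",     8000, 0.05, 0.05, 30.0),
]

total_yld = 0.0
for name, cases, p_lifelong, weight, duration in injury_groups:
    # Short-term YLD could be added analogously with a recovery duration.
    yld = cases * p_lifelong * weight * duration
    total_yld += yld
    print(f"{name:20s} YLD = {yld:10.0f}")

print(f"{'total':20s} YLD = {total_yld:10.0f}")
```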
Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions
NASA Technical Reports Server (NTRS)
Hartung, Lin C.
1991-01-01
A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaal, H.; Bernnat, W.
1987-10-01
For calculations of high-temperature gas-cooled reactors with low-enrichment fuel, it is important to know the plutonium cross sections accurately. Therefore, a calculational method was developed, by which the plutonium cross-section data of the ENDF/B-IV library can be examined. This method uses zero- and one-dimensional neutron transport calculations to collapse the basic data into one-group cross sections, which then can be compared with experimental values obtained from integral tests. For comparison the data from the critical experiment CESAR-II of the Centre d'Etudes Nucleaires, Cadarache, France, were utilized.
What Is Professional Development Worth? Calculating the Value of Onboarding Programs in Extension
ERIC Educational Resources Information Center
Harder, Amy; Hodges, Alan; Zelaya, Priscilla
2017-01-01
Return on investment (ROI) is a commonly used metric for organizations concerned with demonstrating the value of their investments; it can be used to determine whether funds spent providing professional development programs for Extension professionals are good investments. This article presents a method for calculating ROI for an onboarding…
Quantitative assessment of landslide risk in design practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, A.M.; Darevskii, V.E.
1995-03-01
Developments of the State Institute for River Transport Protection aimed at the practical implementation of an engineering method, recommended by regulatory documents for the calculation of landslide phenomena, are described, and the capabilities of the operational computer software are demonstrated. Results of calculations are compared with test data and with problems solved in the new developments.
Cascade flutter analysis with transient response aerodynamics
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Mahajan, Aparajit J.; Keith, Theo G., Jr.; Stefko, George L.
1991-01-01
Two methods for calculating linear frequency domain aerodynamic coefficients from a time marching full-potential cascade solver are developed and verified. In the first method, the Influence Coefficient method, solutions to elemental problems are superposed to obtain the solutions for a cascade in which all blades are vibrating with a constant interblade phase angle. The elemental problem consists of a single blade in the cascade oscillating while the other blades remain stationary. In the second method, the Pulse Response method, the response to the transient motion of a blade is used to calculate influence coefficients; this is done by calculating the Fourier transforms of the blade motion and the response. Both methods are validated by comparison with the Harmonic Oscillation method and give accurate results. The aerodynamic coefficients obtained from these methods are used for frequency domain flutter calculations involving a typical section blade structural model. An eigenvalue problem is solved for each interblade phase angle mode and the eigenvalues are used to determine aeroelastic stability. Flutter calculations are performed for two examples over a range of subsonic Mach numbers.
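The Pulse Response idea, extracting frequency-domain coefficients as the ratio of Fourier transforms of the response to the motion, can be illustrated with a toy system. In the sketch below a first-order lag stands in for the time-marching aerodynamic solver; the pulse shape, gain K, and time constant tau are arbitrary assumptions.

```python
import numpy as np

# Sketch of the Pulse Response idea: run one transient calculation, then obtain
# frequency-domain influence coefficients as the ratio of Fourier transforms of
# the response (e.g., unsteady lift) to the blade motion. The "solver" below is
# a toy first-order lag standing in for the time-marching full-potential code.

dt, n = 1e-3, 4096
t = np.arange(n) * dt
motion = np.exp(-((t - 0.05) / 0.01) ** 2)   # smooth pulse in blade displacement

# Toy aerodynamic response: first-order lag, y' = (K*u - y)/tau
K, tau = 2.0, 0.02
y = np.zeros(n)
for i in range(1, n):
    y[i] = y[i - 1] + dt * (K * motion[i - 1] - y[i - 1]) / tau

U = np.fft.rfft(motion)
Y = np.fft.rfft(y)
freq = np.fft.rfftfreq(n, dt)
H = Y / U                                    # complex transfer function H(f)

# H(f) should approximate K / (1 + 2j*pi*f*tau); compare at 10 Hz:
f0 = 10.0
i0 = int(np.argmin(np.abs(freq - f0)))
print(H[i0], K / (1 + 2j * np.pi * f0 * tau))
```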
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
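A worked example of the kind of error combination such an analysis uses: independent fractional uncertainties added in root-sum-square fashion. The uncertainty sources and magnitudes below are hypothetical, not taken from the report.

```python
import math

# Minimal example of combining independent power-prediction uncertainties by
# root-sum-square (RSS), a standard error-analysis technique; the contributing
# terms and their magnitudes are hypothetical.

nominal_power = 500.0  # W, predicted panel output

# Fractional (1-sigma) uncertainties from independent sources
sources = {
    "cell measurement calibration": 0.02,
    "radiation degradation estimate": 0.03,
    "temperature coefficient": 0.015,
    "UV / micrometeoroid losses": 0.01,
}

rss = math.sqrt(sum(u**2 for u in sources.values()))
print(f"combined fractional uncertainty = {rss:.3f}")
print(f"power = {nominal_power:.0f} W +/- {nominal_power * rss:.0f} W")
```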
Electronic structure calculation by nonlinear optimization: Application to metals
NASA Astrophysics Data System (ADS)
Benedek, R.; Min, B. I.; Woodward, C.; Garner, J.
1988-04-01
There is considerable interest in the development of novel algorithms for the calculation of electronic structure (e.g., at the level of the local-density approximation of density-functional theory). In this paper we consider a first-order equation-of-motion method. Two methods of solution are described, one proposed by Williams and Soler, and the other based on a Born-Dyson series expansion. The extension of the approach to metallic systems is outlined and preliminary numerical calculations for Zintl-phase NaTl are presented.
Turboexpander calculations using a generalized equation of state correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, M.S.; Starling, K.E.
1975-01-01
A generalized method for predicting the thermodynamic properties of natural gas fluids has been developed and tested. The results of several comparisons between thermodynamic property values predicted by the method and experimental data are presented. Comparisons of predicted and experimental vapor-liquid equilibrium are presented. These comparisons indicate that the generalized correlation can be used to predict many thermodynamic properties of natural gas and LNG. Turboexpander calculations are presented to show the utility of the generalized correlation for process design calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Liu, B; Liang, B
Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields of the multi-leaf collimator system recently introduced with the CyberKnife M6 system. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a GPU-based fast convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in the beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energy photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized using measurement data from the CyberKnife system. Results: The differences between measured and calculated TMRs are less than 1% for all collimators except in the build-up regions. The calculated profiles also show good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; the results showed better dose calculation accuracy than the Ray-tracing algorithm, with the Monte Carlo method as reference, for heterogeneous cases. The dose calculation takes a few seconds per beam, depending on the collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system; it was shown to be efficient and accurate for clinical purposes and can be easily implemented in a TPS.
Improved perturbation method for gadolinia worth calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R.T.; Congdon, S.P.
1986-01-01
Gadolinia is utilized in light water power reactors as a burnable poison that holds down excess reactivity. Good gadolinia worth estimation is useful for evaluating fuel bundle designs, core operating strategies, and fuel cycle economics. The authors have developed an improved perturbation method, based on exact perturbation theory, for gadolinia worth calculations in fuel bundles. The method predicts much more accurate gadolinia worths than the first-order perturbation method (commonly used to estimate nuclide worths) for bundles containing fresh or partly burned gadolinia.
NASA Technical Reports Server (NTRS)
Kurmanaliyev, T. I.; Breslavets, A. V.
1974-01-01
The difficulties in obtaining exact data on the labor input and estimated cost of design work are noted. A method is proposed for calculating the labor cost of design work using provisional normative indexes for individual types of operations. Values of certain coefficients recommended for use in practical calculations of the labor input for the development of new scientific equipment for space research are presented.
Increasing the volumetric efficiency of Diesel engines by intake pipes
NASA Technical Reports Server (NTRS)
List, Hans
1933-01-01
Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.
Parallel computation of multigroup reactivity coefficient using iterative method
NASA Astrophysics Data System (ADS)
Susmikanti, Mike; Dewayatna, Winter
2013-09-01
One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of Fission Product Molybdenum (FPM) targets. An FPM target is a stainless steel tube containing layers of high-enriched uranium; irradiating it produces fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can disturb core performance, one such disturbance coming from changes in flux or reactivity. A method is therefore needed for calculating the safety of the core under the configuration changes that occur over the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin of the research reactor can be re-evaluated without modification through calculation of the reactor's reactivity, which is an advantage of using the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model entails complex computation, and several parallel algorithms with iterative methods have been developed for solving large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation was developed as part of a safety analysis with parallel processing; the calculation can be done more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
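The red-black coloring that makes Gauss-Seidel parallelizable is easy to demonstrate on a model problem. The sketch below applies it to a 2-D Laplace equation rather than the multigroup diffusion operator; the mesh size, boundary values, and sweep count are arbitrary assumptions.

```python
import numpy as np

# Sketch of the red-black Gauss-Seidel idea on a 2-D Laplace problem: grid
# points are colored like a checkerboard, so all points of one color can be
# updated simultaneously (and hence in parallel), since each depends only on
# neighbors of the other color. In a distributed code, each color update would
# be split across processors.

n = 64
u = np.zeros((n, n))
u[0, :] = 1.0                        # fixed boundary values as a stand-in source

for sweep in range(500):
    for color in (0, 1):             # 0 = "red" points, 1 = "black" points
        for i in range(1, n - 1):
            # first interior column j with (i + j) % 2 == color
            j0 = 2 - ((color + i) % 2)
            u[i, j0:n-1:2] = 0.25 * (u[i-1, j0:n-1:2] + u[i+1, j0:n-1:2]
                                     + u[i, j0-1:n-2:2] + u[i, j0+1:n:2])

print(u[n // 2, n // 2])             # approaches the converged interior value
```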
Development of a Fragment-Based in Silico Profiler for Michael Addition Thiol Reactivity.
Ebbrell, David J; Madden, Judith C; Cronin, Mark T D; Schultz, Terry W; Enoch, Steven J
2016-06-20
The Adverse Outcome Pathway (AOP) paradigm details the existing knowledge that links the initial interaction between a chemical and a biological system, termed the molecular initiating event (MIE), through a series of intermediate events, to an adverse effect. An important example of a well-defined MIE is the formation of a covalent bond between a biological nucleophile and an electrophilic compound. This particular MIE has been associated with various toxicological end points such as acute aquatic toxicity, skin sensitization, and respiratory sensitization. This study investigated the calculated parameters required to predict the rate of chemical bond formation (reactivity) for a dataset of Michael acceptors. Reactivity of these compounds toward glutathione was predicted using a combination of a calculated activation energy value (Eact), obtained from density functional theory (DFT) calculations at the B3LYP/6-31+G(d) level of theory, and solvent-accessible surface area (SAS) values at the α carbon. To further develop the method, a fragment-based algorithm was devised, enabling the reactivity of Michael acceptors to be predicted without the need to perform the time-consuming DFT calculations. Results showed that the developed fragment method successfully predicted the reactivity of the Michael acceptors, excluding two sets of chemicals: volatile esters with an extended substituent at the β-carbon and chemicals containing a conjugated benzene ring as part of the polarizing group. Additionally, the study demonstrated the ease with which the approach can be extended to other chemical classes by the calculation of additional fragments and their associated Eact and SAS values. The resulting method is likely to be of use in regulatory toxicology tools where an understanding of covalent bond formation as a potential MIE is important within the AOP paradigm.
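A minimal sketch of the kind of two-descriptor model described here, fitting reactivity as a linear function of Eact and SAS. All training values and coefficients below are invented for illustration; in the fragment-based profiler the descriptors would come from precomputed fragment tables rather than per-molecule DFT.

```python
import numpy as np

# Hypothetical sketch of the regression step: reactivity (e.g., log k toward
# glutathione) modeled as a linear function of the activation energy Eact and
# the solvent-accessible surface area (SAS) at the alpha carbon. The training
# values below are invented; in the fragment-based profiler, Eact and SAS would
# be looked up per fragment instead of computed by DFT for each molecule.

# columns: Eact (kcal/mol), SAS (A^2)
X = np.array([[10.2, 18.5],
              [12.8, 15.0],
              [ 8.9, 21.3],
              [14.1, 12.7],
              [11.5, 17.2]])
log_k = np.array([-0.8, -1.9, -0.1, -2.6, -1.3])   # invented responses

A = np.hstack([X, np.ones((len(X), 1))])           # add intercept column
coef, *_ = np.linalg.lstsq(A, log_k, rcond=None)
a, b, c = coef
print(f"log k = {a:+.3f}*Eact {b:+.3f}*SAS {c:+.3f}")

# predict a new Michael acceptor from its fragment-derived Eact and SAS
print("predicted log k:", a * 11.0 + b * 16.0 + c)
```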
NASA Technical Reports Server (NTRS)
Halford, G. R.
1983-01-01
The presentation focuses primarily on the progress made at NASA Lewis Research Center in understanding the phenomenological processes of high-temperature fatigue of metals for the purpose of calculating the lives of turbine engine hot-section components. Improved understanding resulted in the development of accurate and physically correct life prediction methods, such as Strain-Range Partitioning for calculating creep-fatigue interactions and the Double Linear Damage Rule for predicting potentially severe interactions between high- and low-cycle fatigue. Examples of other life prediction methods are also discussed. Previously announced in STAR as A83-12159.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three methods were implemented in the engine system influence model. RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed; accuracy comparable to that of the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three phases: the engine start transient, the steady state, and the engine cutoff transient. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads with specific conditions. For the SSME, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
NASA Astrophysics Data System (ADS)
Ouyang, Lizhi
A systematic improvement and extension of the orthogonalized linear combinations of atomic orbitals method was carried out using a combined computational and theoretical approach. For high performance parallel computing, a Beowulf-class personal computer cluster was constructed. It also served as a parallel program development platform that helped us to port the programs of the method to the national supercomputer facilities. The program received a language upgrade from Fortran 77 to Fortran 90 and a dynamic memory allocation feature. A preliminary parallel High Performance Fortran version of the program has been developed as well, although scalability improvements are needed for it to be of more benefit. In order to circumvent the difficulties of analytical force calculation in the method, we developed a geometry optimization scheme using a finite difference approximation based on the total energy calculation. The implementation of this scheme was facilitated by the powerful general utility lattice program, which offers many desired features such as multiple optimization schemes and use of space group symmetry. So far, many ceramic oxides have been tested with the geometry optimization program, and their optimized geometries are in excellent agreement with the experimental data: for nine ceramic oxide crystals, the optimized cell parameters differ from the experimental ones by less than 0.5%. Moreover, the geometry optimization was recently used to predict a new phase of TiNx. The method has also been used to investigate a complex vitamin B12 derivative, the OHCbl crystal. In order to overcome the prohibitive disk I/O demand, an on-demand version of the method was developed. Based on the electronic structure calculation of the OHCbl crystal, a partial density of states analysis and a bond order analysis were carried out. The calculated bonding of the corrin ring in the OHCbl model was consistent with the large open-ring pi bond. One interesting finding of the calculation was that the Co-OH bond is weak. This, together with the ongoing projects studying different vitamin B12 derivatives, might help us to answer questions about the Co-C cleavage of the B12 coenzyme, which is involved in many important B12 enzymatic reactions.
Navigating around the algebraic jungle of QCD: efficient evaluation of loop helicity amplitudes
NASA Astrophysics Data System (ADS)
Lam, C. S.
1993-05-01
A method is developed whereby spinor helicity techniques can be used to simplify the calculation of loop amplitudes. This is achieved by using the Feynman-parameter representation, where the offending off-shell loop momenta do not appear. Other shortcuts motivated by the Bern-Kosower one-loop string calculations can be incorporated into the formalism. This includes color reorganization into Chan-Paton factors and the use of background Feynman gauge. This method is applicable to any Feynman diagram with any number of loops as long as the external masses can be ignored. In order to minimize the very considerable algebra encountered in non-abelian gauge theories, graphical methods are developed for most of the calculations. This enables the large number of terms encountered to be organized implicitly in the Feynman diagram without the necessity of writing down any of them algebraically. A one-loop four-gluon amplitude in a particular helicity configuration is computed explicitly to illustrate the method.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Newhauser, Wayne D.
2009-03-01
In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, the WET calculations required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and stopping power ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
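For a thick target, the general idea, converting the entry and exit energies to water ranges and taking the difference, can be sketched with the Bragg-Kleeman rule R = αE^p. The α and p values below are rough illustrative numbers, not the study's fitted constants.

```python
# Sketch of a thick-target water-equivalent-thickness (WET) estimate using the
# Bragg-Kleeman rule R = alpha * E^p. The alpha values below are rough,
# illustrative numbers (alpha in cm/MeV^p); they are not the paper's fitted
# constants.

p = 1.77                   # Bragg-Kleeman exponent for protons (approximate)
alpha_water = 2.2e-3       # cm/MeV^p, approximate for water
alpha_target = 1.4e-3      # hypothetical target material (denser than water)

def residual_energy(e_in, thickness, alpha):
    """Energy (MeV) after traversing `thickness` (cm), from R(E) = alpha*E^p."""
    r_residual = alpha * e_in**p - thickness
    if r_residual <= 0:
        raise ValueError("beam stops inside the target")
    return (r_residual / alpha) ** (1.0 / p)

def wet_thick_target(e_in, thickness):
    """WET = water range at entry energy minus water range at exit energy."""
    e_out = residual_energy(e_in, thickness, alpha_target)
    return alpha_water * (e_in**p - e_out**p)

print(f"WET of 2 cm target at 160 MeV: {wet_thick_target(160.0, 2.0):.2f} cm")
```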
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
Solution of the neutronics code dynamic benchmark by finite element method
NASA Astrophysics Data System (ADS)
Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.
2016-10-01
The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
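The matrix spectral problem mentioned here is the familiar k-eigenvalue form, A·φ = (1/k)·F·φ, with A the loss operator and F the fission operator. The sketch below solves a one-group, one-dimensional stand-in with SciPy rather than SLEPc; the mesh and cross-section numbers are invented and the real benchmark is 3-D, hexagonal, and multigroup.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, eigs, splu

# One-group, one-dimensional diffusion stand-in for the matrix spectral
# problem: loss operator A (leakage + absorption), fission operator F, solved
# as A*phi = (1/k)*F*phi, i.e. k is the largest eigenvalue of A^-1 F.

n, h = 200, 0.5                       # mesh points, cell size (cm)
D, sigma_a, nu_sigma_f = 1.3, 0.01, 0.012

main = 2.0 * D / h**2 + sigma_a
off = -D / h**2
A = diags([[off] * (n - 1), [main] * n, [off] * (n - 1)],
          offsets=[-1, 0, 1], format="csc")
lu = splu(A)                          # factor the loss operator once

# Operator applying A^-1 * F to a flux vector (F = nu_sigma_f * identity here)
op = LinearOperator((n, n), matvec=lambda phi: lu.solve(nu_sigma_f * phi))

vals, vecs = eigs(op, k=1, which="LM")
print("k_eff =", vals[0].real)        # fundamental-mode multiplication factor
```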
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, Aaron
2004-01-01
The development of a practical method of accurately calculating the full scattering amplitude, without making a partial wave decomposition, is continued. The method is developed in the context of electron-hydrogen scattering, and here exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations. The equations are solved numerically for the full scattering wave function. The scattering amplitude is most accurately calculated from an integral expression for the amplitude; that integral can be formally simplified and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial wave results.
Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix
NASA Astrophysics Data System (ADS)
Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia
2011-03-01
During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculations of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.
Electronic Structure Calculation of Permanent Magnets using the KKR Green's Function Method
NASA Astrophysics Data System (ADS)
Doi, Shotaro; Akai, Hisazumi
2014-03-01
Electronic structure and magnetic properties of permanent magnet materials, especially Nd2Fe14B, are investigated theoretically using the KKR Green's function method. Important physical quantities in magnetism, such as the magnetic moment, Curie temperature, and anisotropy constant, obtained from electronic structure calculations in both the atomic-sphere approximation and the full-potential treatment, are compared with past band structure calculations and experiments. The site preference of heavy rare-earth impurities is also evaluated through the calculation of formation energies with the use of the coherent potential approximation. Further, the development of an electronic structure calculation code using the screened KKR method for large super-cells, aimed at studying the electronic structure of realistic microstructures (e.g., the grain boundary phase), is introduced with some test calculations.
NASA Astrophysics Data System (ADS)
Ding, E. J.
2015-06-01
The time-independent lattice Boltzmann algorithm (TILBA) is developed to calculate the hydrodynamic interactions between two particles in a Stokes flow. The TILBA is distinguished from the traditional lattice Boltzmann method in that a background matrix (BGM) is generated prior to the calculation. The BGM, once prepared, can be reused for calculations for different scenarios, and the computational cost for each such calculation will be significantly reduced. The advantage of the TILBA is that it is easy to code and can be applied to any particle shape without complicated implementation, and the computational cost is independent of the shape of the particle. The TILBA is validated and shown to be accurate by comparing calculation results obtained from the TILBA to analytical or numerical solutions for certain problems.
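The payoff of precomputing a reusable background matrix is similar in spirit to factoring a system matrix once and then solving many right-hand sides cheaply. The sketch below illustrates that reuse pattern with an LU factorization; it is only an analogy for the cost structure, not the TILBA algorithm itself, and the matrix and scenarios are arbitrary.

```python
import numpy as np
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import splu

# Analogy for the background-matrix idea (not the TILBA algorithm itself): pay
# the expensive setup cost once, then reuse it across many scenarios. Here the
# "background" is an LU factorization of a fixed system matrix; each scenario
# only changes the right-hand side and is solved cheaply.

rng = np.random.default_rng(0)
n = 2000
A = (sprandom(n, n, density=5e-3, random_state=rng) + 10 * identity(n)).tocsc()

lu = splu(A)                      # expensive, done once ("background matrix")

for scenario in range(5):         # cheap per-scenario solves
    b = rng.normal(size=n)        # e.g., a different particle configuration
    x = lu.solve(b)
    print(scenario, np.linalg.norm(A @ x - b))
```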
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
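In the single-phase spirit of the first two techniques, a minimum kill rate can be bracketed by finding the pump rate at which kill-fluid hydrostatic head plus friction matches reservoir pressure at the bottom of the hole. The well geometry, fluid properties, friction factor, and pressures below are hypothetical, and a real design would use the paper's multiphase model.

```python
import math

# Minimal single-phase sketch of a dynamic-kill bracket: find the pump rate at
# which kill-fluid hydrostatic head plus flow-path friction matches reservoir
# pressure. All well parameters are hypothetical.

g = 9.81
depth = 3000.0            # m, true vertical depth
rho = 1200.0              # kg/m3, kill fluid density
d_hyd = 0.1               # m, hydraulic diameter of the flow path
area = 0.012              # m2, flow area
f = 0.02                  # Darcy friction factor (assumed fully turbulent)
p_res = 45e6              # Pa, reservoir pressure

def bottomhole_pressure(q):          # q in m3/s
    v = q / area
    friction = f * (depth / d_hyd) * 0.5 * rho * v * v
    return rho * g * depth + friction

# bisect for the minimum rate with p_bh >= p_res
lo, hi = 1e-4, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bottomhole_pressure(mid) < p_res:
        lo = mid
    else:
        hi = mid
print(f"minimum dynamic kill rate ~ {hi * 1000:.1f} L/s")
```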
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research concerns the development of a code for uncertainty analysis based on a statistical approach for assessing uncertainty in input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The code was developed as a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining nuclear data in ACE format from ENDF through the NJOY processing code for temperature changes over a certain range.
Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel
NASA Astrophysics Data System (ADS)
Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa
This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground, and on the ground surface. The vibration generated in the tunnel was calculated by means of the train/track/tunnel interaction model and was compared with the measurement results. Wave propagation in the ground was calculated using an empirical model, proposed on the basis of the relationship between frequency and the material damping coefficient α, in order to predict the attenuation in the ground with its frequency dependence taken into account. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, combining the train/track/tunnel interaction model and the wave propagation model, is applicable to the prediction of train-induced vibration propagating from railway tunnels.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
Słonecka, Iwona; Łukasik, Krzysztof; Fornalski, Krzysztof W
2018-06-04
The present paper proposes two methods of calculating components of the dose absorbed by the human body after exposure to a mixed neutron and gamma radiation field. The article presents a novel approach to replace the common iterative method in its analytical form, thus reducing the calculation time. It also shows a possibility of estimating the neutron and gamma doses when their ratio in a mixed beam is not precisely known.
Icing Branch Current Research Activities in Icing Physics
NASA Technical Reports Server (NTRS)
Vargas, Mario
2009-01-01
Current development: A grid block transformation scheme, which allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities, has been developed. A simple ice crystal and sand particle bouncing scheme has been included. An SLD splashing model, based on that developed by William Wright for the LEWICE 3.2.2 software, has been added. A new area-based collection efficiency algorithm will be incorporated, which calculates trajectories from inflow block boundaries to outflow block boundaries. This method will be used for calculating and passing collection efficiency data between blade rows for turbomachinery calculations.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
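The recasting step, turning a time average into an extra state variable, is easy to show with direct differentiation on a deliberately non-chaotic system (for chaotic systems the shadowing machinery described above is required). The damped oscillator, its parameter s, and the objective (the time average of x^2) below are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the "recast the time average as an ODE" idea with direct
# differentiation, on a non-chaotic damped oscillator. State
# y = [x, v, J, dx/ds, dv/ds, dJ/ds], where s is the damping parameter and
# J(t) accumulates the running time average of x^2 over [0, T].

T = 50.0
s = 0.4          # damping coefficient, the design parameter

def rhs(t, y):
    x, v, J, xs, vs, Js = y
    return [v,
            -s * v - x,            # dynamics: x'' = -s*x' - x
            x * x / T,             # running time average of g(x) = x^2
            vs,                    # tangent (direct differentiation) equations
            -v - s * vs - xs,      # d/ds of (-s*v - x)
            2.0 * x * xs / T]      # d/ds of (x^2 / T)

sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 0.0, 0.0, 0.0], rtol=1e-9)
J, dJds = sol.y[2, -1], sol.y[5, -1]
print(f"time-averaged x^2 = {J:.5f}, sensitivity dJ/ds = {dJds:.5f}")
```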
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.
The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on boundaries in the numerical solution of radiative transfer problems. Since the radiative transfer equation is integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based either on a quadrature or on angular discretization, making the use of such a method straightforward for the state equation. The diffuse contribution of reflection on boundaries is also usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical boundaries. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated by comparison with analytical solutions before being applied to complex geometries.
NASA Astrophysics Data System (ADS)
Iwase, Shigeru; Futamura, Yasunori; Imakura, Akira; Sakurai, Tetsuya; Tsukamoto, Shigeru; Ono, Tomoya
2018-05-01
We propose an efficient computational method for evaluating the self-energy matrices of electrodes to study ballistic electron transport properties in nanoscale systems. To reduce the high computational cost incurred in large systems, a contour integral eigensolver based on the Sakurai-Sugiura method combined with the shifted biconjugate gradient method is developed to solve an exponential-type eigenvalue problem for complex wave vectors. A remarkable feature of the proposed algorithm is that the numerical procedure is very similar to that of conventional band structure calculations. We implement the developed method in the framework of the real-space higher-order finite-difference scheme with nonlocal pseudopotentials. Numerical tests for a wide variety of materials validate the robustness, accuracy, and efficiency of the proposed method. As an illustration of the method, we present the electron transport property of the freestanding silicene with the line defect originating from the reversed buckled phases.
A Novel Continuation Power Flow Method Based on Line Voltage Stability Index
NASA Astrophysics Data System (ADS)
Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan
2018-01-01
A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterized lines, and the selection is continuously updated as the load changes. The calculation stages of the continuation power flow are determined from the angle changes of the direction vector of the prediction equation, and an adaptive step-length control strategy is used to compute the next prediction direction and step according to the calculation stage. The proposed method has a clear physical concept and high computing speed, and it captures the local character of voltage instability, allowing it to identify the weak nodes and weak areas in a power system. Because the PV curves are traced more completely, the proposed method has certain advantages for analyzing the voltage stability margin of a large-scale power grid.
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields, volume 3
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
The computer programs developed to calculate the shock wave precursor and the method of using them are described. The method calculates the precursor flow field in nitrogen gas, including the effects of emission and absorption of radiation on the energy and composition of the gas. The radiative transfer is calculated including the effects of absorption and emission through line as well as continuum processes in the shock layer, and through continuum processes only in the precursor. The effects of local thermodynamic nonequilibrium in the shock layer and precursor regions are also included in the radiative transfer calculations. The three computer programs utilized by this computational scheme to calculate the precursor flow field solution for a given shock layer flow field are discussed.
Uematsu, Mikio; Kurosawa, Masahiko
2005-01-01
A generalised and convenient skyshine dose analysis method has been developed based on a forward-adjoint folding technique. In the method, the air penetration data were prepared by performing an adjoint DOT3.5 calculation in a cylindrical air-over-ground geometry with an adjoint point source (the importance of unit flux to the dose rate at the detection point) in the centre. The accuracy of the present method was verified by comparison with a DOT3.5 forward calculation. The adjoint flux data can be used as generalised radiation skyshine data for all sorts of nuclear facilities. Moreover, the present method supplies plenty of energy- and angle-dependent contribution flux data, which will be useful for the detailed shielding design of facilities.
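Once the adjoint importance function is in hand, the skyshine dose for any facility is obtained by folding the facility's leakage source with that importance, i.e. an inner product. The toy sketch below shows the fold over energy groups and angular bins; all numbers are invented.

```python
import numpy as np

# Toy illustration of the forward-adjoint folding step: the adjoint calculation
# yields an importance function (dose at the detector per unit source emission,
# here indexed by energy group and emission-angle bin); the skyshine dose for a
# particular facility is then a simple inner product with its leakage source.

# adjoint importance: dose rate at detector per unit source strength (invented)
importance = np.array([[2.1e-18, 1.5e-18, 0.9e-18],    # group 1, three angle bins
                       [1.2e-18, 0.8e-18, 0.5e-18],    # group 2
                       [0.4e-18, 0.3e-18, 0.2e-18]])   # group 3

# facility-specific leakage source (photons/s) per group and angle bin (invented)
source = np.array([[1.0e9, 5.0e8, 2.0e8],
                   [3.0e9, 1.0e9, 4.0e8],
                   [8.0e9, 2.0e9, 9.0e8]])

dose_rate = np.sum(importance * source)
print(f"skyshine dose rate at detector: {dose_rate:.3e} (dose units/s)")
```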
Hybrid classical/quantum simulation for infrared spectroscopy of water
NASA Astrophysics Data System (ADS)
Maekawa, Yuki; Sasaoka, Kenji; Ube, Takuji; Ishiguro, Takashi; Yamamoto, Takahiro
2018-05-01
We have developed a hybrid classical/quantum simulation method to calculate the infrared (IR) spectrum of water. The proposed method achieves much higher accuracy than conventional classical molecular dynamics (MD) simulations at a much lower computational cost than ab initio MD simulations. The IR spectrum of water is obtained as an ensemble average of the eigenvalues of the dynamical matrix constructed by ab initio calculations, using the positions of oxygen atoms that constitute water molecules obtained from the classical MD simulation. The calculated IR spectrum is in excellent agreement with the experimental IR spectrum.
Relative loading on biplane wings of unequal chords
NASA Technical Reports Server (NTRS)
Diehl, Walter S
1935-01-01
It is shown that the lift distribution for a biplane with unequal chords may be calculated by the method developed in NACA Technical report no. 458 if corrections are made for the inequality in chord lengths. The method is applied to four cases in which the upper chord was greater than the lower and good agreement is obtained between observed and calculated lift coefficients.
ERIC Educational Resources Information Center
Basak, Tulay; Yildiz, Dilek
2014-01-01
Objective: The aim of this study was to compare the effectiveness of cooperative learning and traditional learning methods on the development of drug-calculation skills. Design: Final-year nursing students ("n" = 85) undergoing internships during the 2010-2011 academic year at a nursing school constituted the study group of this…
A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry
NASA Astrophysics Data System (ADS)
Stamer, Torsten; Inutsuka, Shu-ichiro
2018-06-01
We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.
The impact of heterogeneity in individual frailty on the dynamics of mortality.
Vaupel, J W; Manton, K G; Stallard, E
1979-08-01
Life table methods are developed for populations whose members differ in their endowment for longevity. Unlike standard methods, which ignore such heterogeneity, these methods use different calculations to construct cohort, period, and individual life tables. The results imply that standard methods overestimate current life expectancy and potential gains in life expectancy from health and safety interventions, while underestimating rates of individual aging, past progress in reducing mortality, and mortality differentials between pairs of populations. Calculations based on Swedish mortality data suggest that these errors may be important, especially in old age.
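A small numerical illustration of the central effect, using a proportional-hazards frailty model μ(x, z) = z·μ0(x) with gamma-distributed z (mean 1, variance σ²), for which the observed cohort hazard is μ0(x)/(1 + σ²·H0(x)). The Gompertz baseline parameters below are illustrative, not fitted to the Swedish data.

```python
import numpy as np

# Individuals have hazard mu(x, z) = z * mu0(x) with gamma-distributed frailty
# z (mean 1, variance sigma2). The observed cohort hazard
# mu0(x) / (1 + sigma2 * H0(x)) rises more slowly than any individual's hazard,
# because the frail die first; standard methods that ignore this underestimate
# the rate of individual aging. Gompertz parameters are illustrative only.

a, b = 1e-4, 0.1          # Gompertz baseline: mu0(x) = a * exp(b * x)
sigma2 = 0.25             # variance of gamma frailty

x = np.linspace(0.0, 100.0, 201)
mu0 = a * np.exp(b * x)
H0 = (a / b) * (np.exp(b * x) - 1.0)        # cumulative baseline hazard
mu_pop = mu0 / (1.0 + sigma2 * H0)          # observed (cohort) hazard

for age in (60, 80, 100):
    i = int(np.searchsorted(x, age))
    print(f"age {age}: individual (z=1) hazard {mu0[i]:.4f}, "
          f"cohort hazard {mu_pop[i]:.4f}")
```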
Ranking of options of real estate use by expert assessments mathematical processing
NASA Astrophysics Data System (ADS)
Lepikhina, O. Yu; Skachkova, M. E.; Mihaelyan, T. A.
2018-05-01
The article is devoted to the development of a real estate assessment concept. For conditions in which multiple uses of the real estate are possible, a method based on calculating an integral indicator of the efficiency of each variant is proposed. The analytic hierarchy process and its mathematical apparatus are used to calculate the weights of the efficiency criteria from expert assessments. The method makes it possible to rank alternative types of real estate use according to their efficiency. The method was applied to one of the land parcels located in the Primorsky district of Saint Petersburg.
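The analytic-hierarchy-process step reduces to linear algebra: expert pairwise comparisons form a reciprocal matrix whose principal eigenvector gives the criterion weights, and a consistency ratio flags contradictory judgments. The criteria and comparison values below are hypothetical.

```python
import numpy as np

# Sketch of the analytic-hierarchy-process step: expert pairwise comparisons of
# efficiency criteria go into a reciprocal matrix; the principal eigenvector
# gives the criterion weights. The comparison values are hypothetical.

# criteria: (0) revenue potential, (1) development cost, (2) social value
P = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

vals, vecs = np.linalg.eig(P)
i = int(np.argmax(vals.real))
w = np.abs(vecs[:, i].real)
w /= w.sum()
print("criterion weights:", np.round(w, 3))

# consistency index vs. Saaty's random index (RI = 0.58 for 3x3 matrices)
n = P.shape[0]
CI = (vals.real[i] - n) / (n - 1)
print("consistency ratio:", round(CI / 0.58, 3))   # < 0.1 is acceptable
```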
NASA Astrophysics Data System (ADS)
Shishlov, A. V.; Sagatelyan, G. R.; Shashurin, V. D.
2017-12-01
A mathematical model is proposed to calculate the growth rate of the thin-film coating thickness at various points on a flat substrate surface during planetary motion of the substrate, which makes it possible to calculate the expected coating thickness distribution. A corresponding software package has been developed. The coefficients used for the computer simulation are determined experimentally.
Viscous wing theory development. Volume 1: Analysis, method and results
NASA Technical Reports Server (NTRS)
Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.
1986-01-01
Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.
A new shielding calculation method for X-ray computed tomography regarding scattered radiation.
Watanabe, Hiroshi; Noto, Kimiya; Shohji, Tomokazu; Ogawa, Yasuyoshi; Fujibuchi, Toshioh; Yamaguchi, Ichiro; Hiraki, Hitoshi; Kida, Tetsuo; Sasanuma, Kazutoshi; Katsunuma, Yasushi; Nakano, Takurou; Horitsugi, Genki; Hosono, Makoto
2017-06-01
The goal of this study is to develop a more appropriate shielding calculation method for computed tomography (CT) in comparison with the Japanese conventional (JC) method and the National Council on Radiation Protection and Measurements (NCRP)-dose length product (DLP) method. Scattered dose distributions were measured in CT rooms with 18 scanners (16 scanners in the case of the JC method) for one week during routine clinical use. The radiation doses were calculated for the same period using the JC and NCRP-DLP methods. The mean (NCRP-DLP-calculated dose)/(measured dose) ratios in each direction ranged from 1.7 ± 0.6 to 55 ± 24 (mean ± standard deviation). The NCRP-DLP method underestimated the dose in 3.4% of the less-shielded directions, i.e., those not attenuated by the gantry or a subject, and the minimum (NCRP-DLP-calculated dose)/(measured dose) ratio was 0.6. The reduction factors were 0.036 ± 0.014 and 0.24 ± 0.061 for the gantry and couch directions, respectively. The (JC-calculated dose)/(measured dose) ratios ranged from 11 ± 8.7 to 404 ± 340. The air kerma scatter factor κ is expected to be twice as high as that calculated with the NCRP-DLP method, and the reduction factors are expected to be 0.1 and 0.4 for the gantry and couch directions, respectively. We therefore propose a more appropriate method, the Japanese-DLP method, which resolves the issues of possible underestimation of the scattered radiation and overestimation of the reduction factors in the gantry and couch directions.
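The arithmetic of a DLP-based barrier estimate is compact enough to show directly: scattered air kerma at 1 m is κ times the weekly dose-length product, it falls off as the inverse square of distance, and the design goal fixes the required barrier transmission. The κ, workload, and design-goal values below are illustrative, not the paper's measured factors.

```python
# Back-of-the-envelope barrier calculation in the style of a DLP-based method:
# scattered air kerma at 1 m is kappa times the dose-length product, falling
# off as 1/d^2; the required barrier transmission follows from the design goal.
# All numbers below are illustrative, not the paper's measured factors.

kappa = 1.0e-4          # air kerma at 1 m per unit DLP, mGy/(mGy*cm), illustrative
dlp_per_patient = 1000  # mGy*cm, typical body scan (illustrative)
patients_per_week = 150
d = 3.0                 # m, distance from isocenter to occupied area
design_goal = 0.02      # mGy/week (controlled-area value used for illustration)

weekly_dlp = dlp_per_patient * patients_per_week
unshielded = kappa * weekly_dlp / d**2          # mGy/week without a barrier
transmission = design_goal / unshielded         # required barrier transmission

print(f"unshielded weekly kerma: {unshielded:.3f} mGy")
print(f"required barrier transmission: {transmission:.3f}")
```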
Thermodynamic evaluation of transonic compressor rotors using the finite volume approach
NASA Technical Reports Server (NTRS)
Nicholson, S.; Moore, J.
1986-01-01
A method was developed which calculates two-dimensional, transonic, viscous flow in ducts. The finite volume, time marching formulation is used to obtain steady flow solutions of the Reynolds-averaged form of the Navier-Stokes equations. The entire calculation is performed in the physical domain. The method is currently limited to the calculation of attached flows. The features of the current method can be summarized as follows. Control volumes are chosen so that smoothing of flow properties, typically required for stability, is not needed. Different time steps are used in the different governing equations to improve the convergence speed of the viscous calculations. A new pressure interpolation scheme is introduced which improves the shock capturing ability of the method. A multi-volume method for pressure changes in the boundary layer allows calculations which use very long and thin control volumes. A special discretization technique is also used to stabilize these calculations. A special formulation of the energy equation is used to provide improved transient behavior of solutions which use the full energy equation. The method is then compared with a wide variety of test cases. The freestream Mach numbers range from 0.075 to 2.8 in the calculations. Transonic viscous flow in a converging-diverging nozzle is calculated with the method; the Mach number upstream of the shock is approximately 1.25. The agreement between the calculated and measured shock strength and total pressure losses is good. Essentially incompressible turbulent boundary layer flow in an adverse pressure gradient is calculated, and the computed distributions of mean velocity and shear stress are in good agreement with the measurements. At the other end of the Mach number range, a flat plate turbulent boundary layer with a freestream Mach number of 2.8 is calculated using the full energy equation; the computed total temperature distribution and recovery factor agree well with the measurements when a variable Prandtl number is used through the boundary layer.
High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME
NASA Astrophysics Data System (ADS)
Otis, Richard A.; Liu, Zi-Kui
2017-05-01
One foundational component of integrated computational materials engineering (ICME) and the Materials Genome Initiative is computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, we present our recent efforts to develop new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput first-principles calculations and the CALPHAD method, along with their potential propagation to downstream ICME modeling and simulations.
Development of DPD coarse-grained models: From bulk to interfacial properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solano Canchaya, José G.; Dequidt, Alain, E-mail: alain.dequidt@univ-bpclermont.fr; Goujon, Florent
2016-08-07
A new Bayesian method was recently introduced for developing coarse-grain (CG) force fields for molecular dynamics. The CG models designed for dissipative particle dynamics (DPD) are optimized based on trajectory matching. Here we extend this method to improve transferability across thermodynamic conditions. We demonstrate the capability of the method by developing a CG model of n-pentane from constant-NPT atomistic simulations of bulk liquid phases and we apply the CG-DPD model to the calculation of the surface tension of the liquid-vapor interface over a large range of temperatures. The coexisting densities, vapor pressures, and surface tensions calculated with different CG and atomistic models are compared to experiments. Depending on the database used for the development of the potentials, it is possible to build a CG model which performs very well in the reproduction of the surface tension on the orthobaric curve.
Li, Hongzhi; Yang, Wei
2007-03-21
An approach is developed in the replica exchange framework to enhance conformational sampling for quantum mechanical (QM) potential based molecular dynamics simulations. Importantly, with our enhanced sampling treatment, decent convergence of the electronic structure self-consistent-field calculation is robustly guaranteed, which is made possible in our replica exchange design by avoiding direct structure exchanges between the QM-related replicas and the activated (scaled by low scaling parameters or treated with high "effective temperatures") molecular mechanical (MM) replicas. Although the present approach represents one of the early efforts in enhanced sampling developments specifically for quantum mechanical potentials, QM-based simulations treated with the present technique can possess sampling efficiency similar to that of MM-based simulations treated with the Hamiltonian replica exchange method (HREM). In the present paper, by combining this sampling method with one of our recent developments (the dual-topology alchemical HREM approach), we also introduce a method for sampling-enhanced QM-based free energy calculations.
Calculation of Water Entry Problem for Free-falling Bodies Using a Developed Cartesian Cut Cell Mesh
NASA Astrophysics Data System (ADS)
Wenhua, Wang; Yanying, Wang
2010-05-01
This paper describes the development of a free surface capturing method on a Cartesian cut cell mesh for the water entry problem of free-falling bodies with body-fluid interaction. The incompressible Euler equations for a variable density fluid system are presented as governing equations, and the free surface is treated as a contact discontinuity by using a free surface capturing method. To conveniently deal with moving body boundaries, the Cartesian cut cell technique is adopted for generating a boundary-fitted mesh around the body edge by cutting solid regions out of a background Cartesian mesh. Based on this mesh system, the governing equations are discretized by the finite volume method, and at each cell edge the inviscid flux is evaluated by means of Roe's approximate Riemann solver. Furthermore, for unsteady calculation in the time domain, a time accurate solution is achieved by a dual time-stepping technique with the artificial compressibility method. For the body-fluid interaction, the projection method of the momentum equations and the exact Riemann solution are applied in the calculation of the fluid pressure on the solid boundary. Finally, the method is validated by test cases of water entry of free-falling bodies.
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1995-01-01
A method has been developed to accurately compute the viscous flow in three-dimensional (3-D) enclosures. This method is the 3-D extension of a two-dimensional (2-D) method developed for the calculation of flow over airfoils. The 2-D method has been tested extensively and has been shown to accurately reproduce experimental results. As in the 2-D method, the 3-D method provides for the non-iterative solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid methodology. In the staggered grid method, the three components of vorticity are defined at the centers of the computational cell sides, while the velocity components are defined as normal vectors at the centers of the computational cell faces. The staggered grid orientation provides for the accurate definition of the vorticity components at the vorticity locations, the divergence of vorticity at the mesh cell nodes, and the conservation of mass at the mesh cell centers. The solution is obtained by utilizing a fractional step solution technique in the three coordinate directions. The boundary conditions for the vorticity and velocity are calculated implicitly as part of the solution. The method provides for the non-iterative solution of the flow field and satisfies the conservation of mass and divergence of vorticity to machine zero at each time step. To test the method, simple driven cavity flows have been computed. The driven cavity flow is defined as the flow in an enclosure driven by a moving upper plate at the top of the enclosure. To demonstrate the ability of the method to predict the flow in arbitrary cavities, results are shown for both cubic and curved cavities.
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.
1996-01-01
The Integrated Force Method has been developed in recent years for the analysis of structural mechanics problems. This method treats all independent internal forces as unknown variables that can be calculated by simultaneously imposing equations of equilibrium and compatibility conditions. In this paper a finite element library for analyzing two-dimensional problems by the Integrated Force Method is presented. Triangular- and quadrilateral-shaped elements capable of modeling arbitrary domain configurations are presented. The element equilibrium and flexibility matrices are derived by discretizing the expressions for potential and complementary energies, respectively. The displacement and stress fields within the finite elements are independently approximated. The displacement field is interpolated as it is in the standard displacement method, and the stress field is approximated by using complete polynomials of the correct order. A procedure that uses the definitions of stress components in terms of an Airy stress function is developed to derive the stress interpolation polynomials. Such derived stress fields identically satisfy the equations of equilibrium. Moreover, the resulting element matrices are insensitive to the orientation of local coordinate systems. A method is devised to calculate the number of rigid body modes, and the present elements are shown to be free of spurious zero-energy modes. A number of example problems are solved by using the present library, and the results are compared with corresponding analytical solutions and with results from the standard displacement finite element method. The Integrated Force Method not only gives results that agree well with analytical and displacement method results but also outperforms the displacement method in stress calculations.
Two innovative pore pressure calculation methods for shallow deep-water formations
NASA Astrophysics Data System (ADS)
Deng, Song; Fan, Honghai; Liu, Yuhan; He, Yanfeng; Zhang, Shifeng; Yang, Jing; Fu, Lipei
2017-11-01
There are many geological hazards in shallow formations associated with oil and gas exploration and development in deep-water settings. Abnormal pore pressure can lead to shallow water flow and accumulations of gas and gas hydrates, which may affect drilling safety. Therefore, it is of great importance to accurately predict pore pressure in shallow deep-water formations. Experience over previous decades has shown, however, that there are no appropriate pressure calculation methods for these shallow formations. Pore pressure change is reflected closely in log data, particularly for mudstone formations. In this paper, pore pressure calculations for shallow formations are highlighted, and two concrete methods using log data are presented. The first method is modified from an E. Philips test, in which a linear-exponential overburden pressure model is used. The second method is a new pore pressure method based on P-wave velocity that accounts for the effect of shallow gas and shallow water flow. The two methods are then validated using case studies from two wells in the Yingqiong basin. Calculated results are compared with those obtained by the Eaton method, which demonstrates that the multi-regression method is more suitable for quick prediction of geological hazards in shallow layers.
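As a point of reference for the Eaton comparison mentioned above, the velocity form of Eaton's relation is compact enough to sketch directly; the exponent and the input numbers below are conventional illustrative choices, not values from the paper.

```python
def eaton_pore_pressure(s_v, p_hydro, v_obs, v_normal, n=3.0):
    """Eaton's sonic/velocity relation:
    Pp = Sv - (Sv - Phydro) * (Vobs / Vnormal)**n,
    where Sv is the overburden stress and Vnormal the normal-compaction velocity."""
    return s_v - (s_v - p_hydro) * (v_obs / v_normal) ** n

# Illustrative shallow mudstone interval (assumed numbers, MPa and m/s)
print(eaton_pore_pressure(s_v=12.0, p_hydro=5.0, v_obs=1800.0, v_normal=2100.0))
```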
Modeling of a multileaf collimator
NASA Astrophysics Data System (ADS)
Kim, Siyong
A comprehensive physics model of a multileaf collimator (MLC) field for treatment planning was developed. Specifically, an MLC user interface module that includes a geometric optimization tool and a general method of in-air output factor calculation were developed. An automatic tool for optimization of MLC conformation is needed to realize the potential benefits of MLC. It is also necessary that a radiation therapy treatment planning (RTTP) system be capable of modeling the MLC completely. An MLC geometric optimization and user interface module was developed. The planning time has been reduced significantly by incorporating the MLC module into the main RTTP system, the Radiation Oncology Computer System (ROCS). The dosimetric parameter that has the most profound effect on the accuracy of the dose delivered with an MLC is the change in the in-air output factor that occurs with field shaping. It has been reported that the conventional method of calculating an in-air output factor cannot be used accurately for MLC-shaped fields. Therefore, it is necessary to develop algorithms that allow accurate calculation of the in-air output factor. A generalized solution for in-air output factor calculation was developed. Three major contributors of scatter to the in-air output (the flattening filter, wedge, and tertiary collimator) were considered separately. By virtue of a field mapping method, in which a source plane field determined by the detector's eye view is mapped into a detector plane field, no dosimetric data acquisition beyond the standard data set for a range of square fields is required for the calculation of head scatter. Comparisons of in-air output factors between calculated and measured values show good agreement for both open and wedge fields. For rectangular fields, a simple equivalent square formula was derived based on the configuration of a linear accelerator treatment head. This method predicts in-air output to within 1% accuracy. A two-effective-source algorithm was developed to account for the effect of source-to-detector distance on in-air output for wedge fields. Two effective sources, one for head scatter and the other for wedge scatter, were treated independently. Calculated in-air output factors differed from measurements by less than 1%. This approach offers the best comprehensive accuracy in radiation delivery with field shapes defined using the MLC. This generalized model works equally well with fields shaped by any type of tertiary collimator and has the necessary framework to extend its application to intensity modulated radiation therapy.
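The dissertation derives its own equivalent square formula from the treatment head geometry, which is not reproduced here; for contrast, the conventional area-to-perimeter rule that such formulas refine can be sketched in two lines.

```python
def equivalent_square_side(a_cm, b_cm):
    """Conventional area-to-perimeter rule for a rectangular field:
    s = 2ab / (a + b). A generic rule of thumb, not the dissertation's
    head-geometry-specific formula."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

print(equivalent_square_side(10.0, 20.0))  # -> 13.33... cm
```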
Performance estimates for the Space Station power system Brayton Cycle compressor and turbine
NASA Technical Reports Server (NTRS)
Cummings, Robert L.
1989-01-01
The methods which have been used by the NASA Lewis Research Center for predicting Brayton Cycle compressor and turbine performance for different gases and flow rates are described. These methods were developed by NASA Lewis during the early days of Brayton cycle component development, and they can now be applied to the task of predicting the performance of the Closed Brayton Cycle (CBC) Space Station Freedom power system. Computer programs are given for performing these calculations, and data from previous NASA Lewis Brayton compressor and turbine tests are used to make accurate estimates of the compressor and turbine performance for the CBC power system. Results of these calculations are also given. In general, the calculations confirm that the CBC Brayton Cycle contractor has made realistic compressor and turbine performance estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, S. A., E-mail: volkoff-sergey@mail.ru
2016-06-15
A new subtractive procedure for canceling ultraviolet and infrared divergences in Feynman integrals, described here, is developed for calculating QED corrections to the electron anomalous magnetic moment. The procedure, formulated in the form of a forest expression with linear operators applied to Feynman amplitudes of UV-diverging subgraphs, makes it possible to represent the contribution of each Feynman graph containing only electron and photon propagators in the form of a converging integral with respect to Feynman parameters. The application of the developed method to the numerical calculation of two- and three-loop contributions is described.
Postimplant dosimetry using a Monte Carlo dose calculation engine: a new clinical standard.
Carrier, Jean-François; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, André-Guy; Vigneault, Eric; Beaulieu, Luc
2007-07-15
To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry, and to compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are specifically investigated. An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. For the clinical target volume (CTV) D(90) parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation comes mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered for establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future.
Methods and codes for neutronic calculations of the MARIA research reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrzejewski, K.; Kulikowska, T.; Bretscher, M. M.
2002-02-18
The core of the MARIA high flux multipurpose research reactor is highly heterogeneous. It consists of beryllium blocks arranged in a 6 x 8 matrix, tubular fuel assemblies, control rods, and irradiation channels. The reflector is also heterogeneous and consists of graphite blocks clad with aluminum. Its structure is perturbed by the experimental beam tubes. This paper presents the methods and codes used to calculate the MARIA reactor neutronics characteristics and the experience gained thus far at IAE and ANL. At ANL, the methods of MARIA calculations were developed in connection with the RERTR program. At IAE, the package of programs was developed to help its operator in optimization of fuel utilization.
Prediction of distribution coefficient from structure. 1. Estimation method.
Csizmadia, F; Tsantili-Kakoulidou, A; Panderi, I; Darvas, F
1997-07-01
A method has been developed for the estimation of the distribution coefficient (D), which considers the microspecies of a compound. D is calculated from the microscopic dissociation constants (microconstants), the partition coefficients of the microspecies, and the counterion concentration. A general equation for the calculation of D at a given pH is presented. The microconstants are calculated from the structure using Hammett and Taft equations. The partition coefficients of the ionic microspecies are predicted by empirical equations using the dissociation constants and the partition coefficient of the uncharged species, which are estimated from the structure by a Linear Free Energy Relationship method. The algorithm is implemented in a program module called PrologD.
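For the simplest case of a monoprotic acid, the kind of expression the paper generalizes can be sketched as follows; the pKa and log P inputs are illustrative assumptions, and the counterion term of the full equation is omitted.

```python
import math

def log_d(ph, pka, log_p_neutral, log_p_ion):
    """logD of a monoprotic acid from its two microspecies:
    D = (P_N + P_I * 10**(pH - pKa)) / (1 + 10**(pH - pKa)).
    The paper's general equation sums over all microspecies and also
    includes the counterion concentration, omitted here."""
    r = 10.0 ** (ph - pka)  # ionized/neutral ratio
    d = (10.0 ** log_p_neutral + 10.0 ** log_p_ion * r) / (1.0 + r)
    return math.log10(d)

print(log_d(ph=7.4, pka=4.8, log_p_neutral=2.0, log_p_ion=-1.5))  # illustrative inputs
```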
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708
We developed a new method to calculate atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated over all directions. The integration over orientations and the QM linear response calculations together make the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that the QM calculation is needed only once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since the ESP is directly fitted, atomic polarizabilities obtained from our method are expected to better reproduce the electrostatic interactions. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
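The overall finite-difference technique described above amounts to repeating the transient analysis at a perturbed design and differencing the responses; a toy sketch with a single-degree-of-freedom "analysis" (an assumed stand-in for the reduced finite element model) follows.

```python
import numpy as np

def overall_fd_sensitivity(response_fn, x, h=1e-6):
    """Overall finite-difference design sensitivity: rerun the analysis
    at the perturbed design and difference the peak responses."""
    return (response_fn(x + h) - response_fn(x)) / h

def peak_displacement(k, m=1.0, f0=1.0, t_end=10.0, n=2000):
    """Toy 'transient analysis': undamped SDOF step response u(t)."""
    t = np.linspace(0.0, t_end, n)
    wn = np.sqrt(k / m)
    u = (f0 / k) * (1.0 - np.cos(wn * t))
    return np.max(np.abs(u))

# Analytic peak is 2*f0/k, so d(peak)/dk = -2*f0/k**2, about -0.125 at k = 4
print(overall_fd_sensitivity(peak_displacement, x=4.0))
```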
NASA Astrophysics Data System (ADS)
Dias, L. G.; Shimizu, K.; Farah, J. P. S.; Chaimovich, H.
2002-09-01
We propose and demonstrate the usefulness of a method, defined as the generalized Born electronegativity equalization method (GBEEM), to estimate solvent-induced charge redistribution. The charges obtained by GBEEM, for a representative series of small organic molecules, were compared to PM3-CM1 charges in vacuum and in water. Linear regressions with appropriate correlation coefficients and standard deviations between the GBEEM and PM3-CM1 methods were obtained (R = 0.94, SD = 0.15, F-test = 234, N = 32 in vacuum; R = 0.94, SD = 0.16, F-test = 218, N = 29 in water). In order to test the GBEEM response when intermolecular interactions are involved, we calculated a water dimer in dielectric water using both GBEEM and PM3-CM1, and the results were similar. Hence, the method developed here is comparable to established calculation methods.
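The solvent (generalized Born) term of GBEEM is not reproduced here, but the electronegativity-equalization core it builds on is a small linear solve; the sketch below uses made-up parameters for a toy three-atom molecule.

```python
import numpy as np

def eem_charges(A, B, coords, kappa=1.0, total_charge=0.0):
    """Vacuum EEM core: solve, for charges q_i and the equalized chi_bar,
        A_i + B_i*q_i + kappa * sum_{j != i} q_j / R_ij = chi_bar,
        sum_i q_i = Q.
    A, B, and kappa are fitted EEM parameters (made-up values below)."""
    n = len(A)
    R = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    M = np.zeros((n + 1, n + 1))
    off = ~np.eye(n, dtype=bool)
    M[:n, :n][off] = kappa / R[off]   # Coulomb coupling between atoms
    M[:n, :n][~off] = B               # atomic hardness on the diagonal
    M[:n, n] = -1.0                   # common electronegativity chi_bar
    M[n, :n] = 1.0                    # total charge constraint
    rhs = np.concatenate([-np.asarray(A, float), [total_charge]])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]

A = np.array([8.5, 4.4, 4.4])         # illustrative, not fitted, parameters
B = np.array([11.1, 13.8, 13.8])
xyz = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
q, chi_bar = eem_charges(A, B, xyz)
print(q, chi_bar)
```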
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in variational calculations of H3, where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm^-1) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the steep dose gradients and therapeutically important dose ranges that arise. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
NASA Technical Reports Server (NTRS)
Moore, E. N.; Altick, P. L.
1972-01-01
The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.
Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS
NASA Astrophysics Data System (ADS)
Templin, R. J.
1985-03-01
Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double multiple stream tube theory and the uncertainties that remain in developing adequate methods are considered. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of models of dynamic stall, are underlined. Wind tunnel tests of blade airbrake configurations are summarized.
Numerical calculation of the internal flow field in a centrifugal compressor impeller
NASA Technical Reports Server (NTRS)
Walitt, L.; Harp, J. L., Jr.; Liu, C. Y.
1975-01-01
An iterative numerical method has been developed for the calculation of steady, three-dimensional, viscous, compressible flow fields in centrifugal compressor impellers. The computer code, which embodies the method, solves the steady three dimensional, compressible Navier-Stokes equations in rotating, curvilinear coordinates. The solution takes place on blade-to-blade surfaces of revolution which move from the hub to the shroud during each iteration.
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous nonunit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
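The primary component of a pixel-based calculation of this kind follows from a radiological-path ray trace through the CT densities; the sketch below shows only that first-order primary attenuation with an assumed attenuation coefficient and geometry, leaving out the first-scatter and residual multiple-scatter terms the paper adds.

```python
import numpy as np

def primary_dose_along_ray(mu_water_per_cm, rel_densities, voxel_cm, d0=1.0):
    """First-order primary attenuation along one ray of CT voxels:
    scale the geometric depth by relative density (radiological path)
    and attenuate exponentially. Scatter terms are omitted."""
    radiological_depth = np.cumsum(rel_densities) * voxel_cm
    return d0 * np.exp(-mu_water_per_cm * radiological_depth)

# Water / lung-like / water slab geometry (assumed densities), 5 mm voxels
densities = np.array([1.0] * 10 + [0.25] * 8 + [1.0] * 12)
print(primary_dose_along_ray(0.05, densities, 0.5))
```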
Rapid calculation of genomic evaluations for new animals
USDA-ARS?s Scientific Manuscript database
A method was developed to calculate preliminary genomic evaluations daily or weekly before the release of official monthly evaluations by processing only newly genotyped animals using estimates of SNP effects from the previous official evaluation. To minimize computing time, reliabilities and genomi...
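The core of such a preliminary evaluation is a single pass over the new animal's genotype with the stored SNP effects; the sketch below shows that dot product with assumed genotype coding and allele-frequency centering (the official pipeline's coding and adjustments are not reproduced here).

```python
import numpy as np

def direct_genomic_value(genotype_counts, snp_effects, allele_freq):
    """Preliminary genomic evaluation for a newly genotyped animal:
    center allele counts by 2p and multiply by the SNP effects from the
    previous official evaluation. Coding and centering are assumptions."""
    z = np.asarray(genotype_counts, float) - 2.0 * np.asarray(allele_freq)
    return z @ np.asarray(snp_effects)

geno = np.array([0, 1, 2, 1, 0])                # reference-allele counts
u = np.array([0.10, -0.05, 0.20, 0.00, 0.07])   # stored SNP effects
p = np.array([0.5, 0.3, 0.6, 0.5, 0.2])         # allele frequencies
print(direct_genomic_value(geno, u, p))
```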
A simplified method for calculating temperature time histories in cryogenic wind tunnels
NASA Technical Reports Server (NTRS)
Stallings, R. L., Jr.; Lamb, M.
1976-01-01
A method for calculating average temperature time histories of the test medium and tunnel walls in cryogenic wind tunnels has been developed. Results are in general agreement with limited preliminary experimental measurements obtained in a 13.5-inch pilot cryogenic wind tunnel.
James E. Smith; Linda S. Heath; Kenneth E. Skog; Richard A. Birdsey
2006-01-01
This study presents techniques for calculating average net annual additions to carbon in forests and in forest products. Forest ecosystem carbon yield tables, representing stand-level merchantable volume and carbon pools as a function of stand age, were developed for 51 forest types within 10 regions of the United States. Separate tables were developed for...
Detection and Tracking of Moving Targets Behind Cluttered Environments Using Compressive Sensing
NASA Astrophysics Data System (ADS)
Dang, Vinh Quang
Detection and tracking of moving targets (target motion, vibration, etc.) in cluttered environments have been receiving much attention in numerous applications, such as disaster search-and-rescue, law enforcement, and urban warfare. One popular technique is the use of stepped frequency continuous wave radar, due to its low cost and complexity. However, stepped frequency radar suffers from long data acquisition times. This dissertation focuses on detection and tracking of moving targets and vibration rates of stationary targets behind a cluttered medium such as a wall, using stepped frequency radar enhanced by compressive sensing. The application of compressive sensing enables the reconstruction of the target space using fewer random frequencies, which decreases the acquisition time. Hardware-accelerated parallelization on GPU is investigated for the Orthogonal Matching Pursuit reconstruction algorithm. For simulation purposes, two hybrid methods have been developed to calculate the scattered fields from the targets through the wall approaching the antenna system, and to convert the incoming fields into voltage signals at the terminals of the receive antenna. The first method is developed based on the plane wave spectrum approach for calculating the scattered fields of targets behind the wall. The method uses the Fast Multipole Method (FMM) to calculate scattered fields on a particular source plane, decomposes them into plane wave components, and propagates the plane wave spectrum through the wall by integrating wall transmission coefficients before constructing the fields on a desired observation plane. The second method allows one to calculate the complex output voltage at the terminals of a receiving antenna, fully taking into account the antenna effects. This method adopts the concept of the complex antenna factor from the Electromagnetic Compatibility (EMC) community for its calculation.
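Since the reconstruction algorithm above is Orthogonal Matching Pursuit, a small reference implementation may be useful; this is the textbook greedy loop on a toy random sensing problem, not the GPU-parallelized version the dissertation develops.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: repeatedly pick the column most
    correlated with the residual, then least-squares re-fit on the
    selected support."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm sensing columns
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.nonzero(omp(A, y, 3))[0])      # expect indices 5, 37, 80
```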
Evaluation of the photoionization probability of H2+ by the trajectory semiclassical method
NASA Astrophysics Data System (ADS)
Arkhipov, D. N.; Astashkevich, S. A.; Mityureva, A. A.; Smirnov, V. V.
2018-07-01
The trajectory-based method for calculating the probabilities of transitions in a quantum system, developed in our previous works and tested for atoms, is applied to calculating the photoionization probability for the simplest molecule, the hydrogen molecular ion. In a weak field, good agreement is established between our photoionization cross section and the data obtained by other theoretical methods for photon energies in the range from the one-photon ionization threshold up to 25 a.u. The photoionization cross section in the range 25 < ω ≤ 100 a.u. was calculated, to the best of our knowledge, for the first time. It is also confirmed that the trajectory method works over a wide range of field magnitudes, including superatomic values up to relativistic intensity.
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
Rapid calculation method for Frenkel-type two-exciton states in one to three dimensions
NASA Astrophysics Data System (ADS)
Ajiki, Hiroshi
2014-07-01
Biexciton and two-exciton dissociated states of Frenkel-type excitons are well described by a tight-binding model with a nearest-neighbor approximation. Such two-exciton states in a finite-size lattice are usually calculated by numerical diagonalization of the Hamiltonian, which requires an increasing amount of computational time and memory as the lattice size increases. I develop here a rapid, memory-saving method to calculate the energies and wave functions of two-exciton states by employing a bisection method. In addition, an attractive interaction between two excitons in the tight-binding model can be obtained directly so that the biexciton energy agrees with the observed energy, without the need for the trial-and-error procedure implemented in the numerical diagonalization method.
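In one dimension the nearest-neighbor two-exciton Hamiltonian is (block) tridiagonal, and the bisection idea rests on Sturm sequence counting; the sketch below shows that generic machinery on a plain tight-binding chain with assumed parameters, not the paper's full two-exciton model.

```python
import numpy as np

def sturm_count(d, e, x):
    """Number of eigenvalues below x for the symmetric tridiagonal matrix
    with diagonal d and off-diagonal e (Sturm sequence / LDL^T pivots)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 if i > 0 else 0.0
        if q == 0.0:
            q = 1e-300          # avoid division by an exact-zero pivot
        q = d[i] - x - off / q
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-10):
    """Bisection for the k-th (0-based, ascending) eigenvalue in [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Nearest-neighbor chain, illustrative parameters: the exact lowest
# eigenvalue is 2*t*cos(pi/(n+1))
n, t = 50, -1.0
d, e = np.zeros(n), np.full(n - 1, t)
print(kth_eigenvalue(d, e, 0, lo=-3.0, hi=3.0), 2 * t * np.cos(np.pi / (n + 1)))
```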
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoogcarspel, S J; Kontaxis, C; Velden, J M van der
2014-06-01
Purpose: To develop an MR accelerator-enabled online planning-to-delivery technique for stereotactic palliative radiotherapy treatment of spinal metastases. The technical challenges include automated stereotactic treatment planning, online MR-based dose calculation, and MR guidance during treatment. Methods: Using the CT data of 20 patients previously treated at our institution, a class solution for automated treatment planning for spinal bone metastases was created. For accurate dose simulation right before treatment, we fused geometrically correct online MR data with pretreatment CT data of the target volume (TV). For target tracking during treatment, a dynamic T2-weighted TSE MR sequence was developed. An in-house developed GPU-based IMRT optimization and dose calculation algorithm was used for fast treatment planning and simulation. An automatically generated treatment plan developed with this treatment planning system was irradiated on a clinical 6 MV linear accelerator and evaluated using a Delta4 dosimeter. Results: The automated treatment planning method yielded clinically viable plans for all patients. The MR-CT fusion based dose calculation accuracy was within 2% as compared to calculations performed with original CT data. The dynamic T2-weighted TSE MR sequence was able to provide an update of the anatomical location of the TV every 10 seconds. Dose calculation and optimization of the automatically generated treatment plans using only one GPU took on average 8 minutes. The Delta4 measurement of the irradiated plan agreed with the dose calculation with a 3%/3mm gamma pass rate of 86.4%. Conclusions: The development of an MR accelerator-enabled planning-to-delivery technique for stereotactic palliative radiotherapy treatment of spinal metastases was presented. Future work will involve developing an intrafraction motion adaptation strategy, MR-only dose calculation, radiotherapy quality assurance in a magnetic field, and streamlining the entire treatment process on an MR accelerator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. A GPU-memory writing conflict occurring at small beam sizes also reduces computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from the different spots altogether with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5-6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow was developed. The high efficiency makes it attractive for clinical usage.
Electromagnetic Scattering from Realistic Targets
NASA Technical Reports Server (NTRS)
Lee, Shung- Wu; Jin, Jian-Ming
1997-01-01
The general goal of the project is to develop computational tools for calculating radar signature of realistic targets. A hybrid technique that combines the shooting-and-bouncing-ray (SBR) method and the finite-element method (FEM) for the radiation characterization of microstrip patch antennas in a complex geometry was developed. In addition, a hybridization procedure to combine moment method (MoM) solution and the SBR method to treat the scattering of waveguide slot arrays on an aircraft was developed. A list of journal articles and conference papers is included.
Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav
2009-05-01
The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i), B(i), and adjusting factor kappa are obtained, this approach can be used for calculation of the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology which was recently successfully applied to EEM parameterization to calculate HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, which had not been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges.
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
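The integration step at the heart of the Adaptive Integration Method can be sketched in a few lines: average the derivative of the Hamiltonian at each grid value of the parameter and integrate. The toy data below stand in for simulation samples; the actual method updates this estimate on the fly and uses the negative of the free energy as the biasing potential.

```python
import numpy as np

def free_energy_profile(dh_dlam_samples, lam_grid):
    """Thermodynamic-integration core: F(lambda) = integral of <dH/dlambda>,
    trapezoid rule over per-window sample means on the lambda grid."""
    means = np.array([np.mean(s) for s in dh_dlam_samples])
    increments = 0.5 * (means[1:] + means[:-1]) * np.diff(lam_grid)
    return np.concatenate([[0.0], np.cumsum(increments)])

lam = np.linspace(0.0, 1.0, 11)
rng = np.random.default_rng(1)
# Toy samples with <dH/dlambda> = 3*lambda**2, so F(1) should be close to 1.0
samples = [3.0 * l ** 2 + 0.1 * rng.standard_normal(500) for l in lam]
print(free_energy_profile(samples, lam)[-1])
```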
Development of Quantum Chemical Method to Calculate Half Maximal Inhibitory Concentration (IC50 ).
Bag, Arijit; Ghorai, Pradip Kr
2016-05-01
To date, theoretical calculation of the half maximal inhibitory concentration (IC50) of a compound has been based on different Quantitative Structure-Activity Relationship (QSAR) models, which are empirical methods. By using the Cheng-Prusoff equation it may be possible to compute IC50, but this would be computationally very expensive, as it requires explicit calculation of the binding free energy of an inhibitor with the respective protein or enzyme. In this article, for the first time, we report an ab initio method to compute the IC50 of a compound based only on the inhibitor itself, where the effect of the protein is reflected through a proportionality constant. By using basic enzyme inhibition kinetics and thermodynamic relations, we derive an expression for IC50 in terms of the hydrophobicity, electric dipole moment (μ), and reactivity descriptor (ω) of an inhibitor. We implement this theory to compute the IC50 of 15 HIV-1 capsid inhibitors and compare them with experimental results and other available QSAR-based empirical results. Values calculated using our method are in very good agreement with the experimental values compared to those calculated using other methods.
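The Cheng-Prusoff relation the article starts from is itself a one-liner; the sketch below shows the competitive-inhibition form with assumed kinetic constants (the article's contribution is precisely to avoid computing Ki explicitly).

```python
def ic50_cheng_prusoff(k_i, substrate_conc, k_m):
    """Cheng-Prusoff relation for a competitive inhibitor:
    IC50 = Ki * (1 + [S]/Km). All inputs below are assumed values."""
    return k_i * (1.0 + substrate_conc / k_m)

# Ki = 50 nM, [S] = 10 uM, Km = 5 uM  ->  IC50 = 150 nM
print(ic50_cheng_prusoff(k_i=50e-9, substrate_conc=10e-6, k_m=5e-6))
```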
Calculation of unsteady airfoil loads with and without flap deflection at -90 degrees incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1991-01-01
A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This unique method provides for the direct solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid method. The vorticity is determined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and for the conservation of mass at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the conservation of mass to machine zero at each time-step. The results of the present analysis and experimental results obtained for a XV-15 airfoil are compared. The comparisons indicate that the calculated drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results. Comparisons of the numerical results of the present method for several airfoils demonstrate the significant influence of airfoil curvature and flap deflection on the predicted download.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance with respect to both convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
How to determine spiral bevel gear tooth geometry for finite element analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.; Litvin, Faydor L.
1991-01-01
An analytical method was developed to determine gear tooth surface coordinates of face milled spiral bevel gears. The method combines the basic gear design parameters with the kinematical aspects for spiral bevel gear manufacturing. A computer program was developed to calculate the surface coordinates. From this data a 3-D model for finite element analysis can be determined. Development of the modeling method and an example case are presented.
Identification of fracture zones and its application in automatic bone fracture reduction.
Paulano-Godino, Félix; Jiménez-Delgado, Juan J
2017-04-01
The preoperative planning of bone fractures using information from CT scans increases the probability of obtaining satisfactory results, since specialists are provided with additional information before surgery. The reduction of complex bone fractures requires solving a 3D puzzle in order to place each fragment into its correct position. Computer-assisted solutions may aid in this process by identifying the number of fragments and their location, by calculating the fracture zones, or even by computing the correct position of each fragment. The main goal of this paper is the development of an automatic method to calculate contact zones between fragments and thus to ease the computation of bone fracture reduction. In this paper, an automatic method to calculate the contact zone between two bone fragments is presented. In a previous step, bone fragments are segmented and labelled from CT images and a point cloud is generated for each bone fragment. The calculated contact zones enable the automatic reduction of complex fractures. To that end, an automatic method to match bone fragments in complex fractures is also presented. The proposed method has been successfully applied in the calculation of the contact zones of 4 different bones from the ankle area. The calculated fracture zones enabled the reduction of all the tested cases using the presented matching algorithm. The performed tests show that the reduction of these fractures using the proposed methods led to only a small overlap between fragments. The presented method makes the application of puzzle-solving strategies easier, since it does not obtain the entire fracture zone but the contact area between each pair of fragments. Therefore, it is not necessary to find correspondences between fracture zones, and fragments may be aligned two by two. The developed algorithms have been successfully applied in different fracture cases in the ankle area, and the small overlap error obtained in the performed tests demonstrates the absence of visible overlap between fragments.
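One simple way to realize the pairwise contact-zone idea is a mutual nearest-distance test between the two fragment point clouds; the sketch below uses a k-d tree and an assumed distance threshold, and is a plausible reading of the approach rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_zone(points_a, points_b, max_dist=1.0):
    """Contact zone between two fragment point clouds: the points of each
    cloud lying within `max_dist` (assumed units, e.g. mm) of the other."""
    da, _ = cKDTree(points_b).query(points_a)  # each a-point -> nearest b-point
    db, _ = cKDTree(points_a).query(points_b)
    return points_a[da < max_dist], points_b[db < max_dist]

rng = np.random.default_rng(2)
a = rng.uniform(0.0, 10.0, size=(500, 3))
b = rng.uniform(9.0, 19.0, size=(500, 3))      # clouds meet near coordinate 9-10
zone_a, zone_b = contact_zone(a, b)
print(len(zone_a), len(zone_b))
```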
New wrinkles on black hole perturbations: Numerical treatment of acoustic and gravitational waves
NASA Astrophysics Data System (ADS)
Tenyotkin, Valery
2009-06-01
This thesis develops two main topics. A fully relativistic calculation of the quasinormal modes of an acoustic black hole is carried out. The acoustic black hole is formed by a perfect, inviscid, relativistic, ideal gas that is spherically accreting onto a Schwarzschild black hole. The second major part is the calculation of sourceless vector (electromagnetic) and tensor (gravitational) covariant field evolution equations for perturbations on a Schwarzschild background using a relatively recent decomposition method. Scattering calculations are carried out in Schwarzschild coordinates for the electromagnetic and gravitational cases as validation of the method and the derived equations.
NASA Astrophysics Data System (ADS)
Zhumagulov, Yaroslav V.; Krasavin, Andrey V.; Kashurnikov, Vladimir A.
2018-05-01
A method is developed for calculating the electronic properties of an ensemble of metal nanoclusters with the use of cluster perturbation theory. This method is applied to a system of gold nanoclusters. The Green's function of a single nanocluster is obtained by ab initio calculations within the framework of density functional theory, and is then used in the Dyson equation to group nanoclusters together and to compute the Green's function, as well as the electron density of states, of the whole ensemble. The transition from the insulator state of a single nanocluster to the metallic state of bulk gold is observed.
NASA Technical Reports Server (NTRS)
Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.
1959-01-01
A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag are evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers are comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method requires only up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution, and less time for a larger number of bodies.
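The quantity both methods evaluate is the slender-body wave-drag double integral over the second derivative of the equivalent-body area distribution; the sketch below evaluates D/q = -(1/(2π)) ∬ S″(x₁) S″(x₂) ln|x₁ - x₂| dx₁ dx₂ by crude midpoint quadrature on an assumed smooth area distribution, skipping the logarithmic diagonal rather than treating it as carefully as either historical method does.

```python
import numpy as np

def wave_drag_over_q(x, s_area):
    """Slender-body zero-lift wave drag from an area distribution S(x):
    D/q = -(1/(2*pi)) * double integral of S''(x1) S''(x2) ln|x1 - x2|.
    Midpoint quadrature; the singular diagonal is simply skipped, so this
    is only a rough numerical sketch."""
    h = x[1] - x[0]
    spp = np.gradient(np.gradient(s_area, x), x)      # S''(x)
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    kernel = np.zeros_like(x1)
    off_diag = x1 != x2
    kernel[off_diag] = np.log(np.abs(x1[off_diag] - x2[off_diag]))
    return -np.sum(spp[:, None] * spp[None, :] * kernel) * h * h / (2.0 * np.pi)

x = np.linspace(0.0, 1.0, 201)
S = 0.01 * np.sin(np.pi * x) ** 2    # assumed smooth equivalent-body area
print(wave_drag_over_q(x, S))
```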
NASA Astrophysics Data System (ADS)
Ahmad, Zeeshan; Viswanathan, Venkatasubramanian
2016-08-01
Computationally-guided material discovery is being increasingly employed using descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built into several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify the uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of the parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid-state Li-ion and Na-ion conductors.
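The post-processing step described above can be sketched directly: fit an energy-strain parabola to every member of the functional's energy ensemble and read the property spread off the distribution of curvatures. The synthetic ensemble and unit handling below are assumptions for illustration, not output of an actual DFT code.

```python
import numpy as np

def curvature_ensemble(strains, energy_ensemble):
    """Fit E = a*eps**2 + b*eps + c to each ensemble member; the spread of
    the curvature a (proportional to an elastic modulus after unit and
    volume factors) quantifies the uncertainty."""
    coeffs = np.polyfit(strains, np.asarray(energy_ensemble).T, deg=2)
    a = coeffs[0]                      # quadratic coefficient per member
    return a.mean(), a.std()

rng = np.random.default_rng(3)
strains = np.linspace(-0.02, 0.02, 9)
true_curve = 100.0 * strains ** 2      # toy energy-strain curve
ensemble = true_curve + 0.002 * rng.standard_normal((2000, strains.size))
mean_a, std_a = curvature_ensemble(strains, ensemble)
print(mean_a, std_a)                   # mean near 100, spread set by the noise
```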
Shiraogawa, Takafumi; Ehara, Masahiro; Jurinovich, Sandro; Cupellini, Lorenzo; Mennucci, Benedetta
2018-06-15
Recently, a method to calculate absorption and circular dichroism (CD) spectra based on exciton coupling has been developed. In this work, the method was used to decompose the CD and circularly polarized luminescence (CPL) spectra of a multichromophoric system into chromophore contributions for recently developed through-space conjugated oligomers. The method, which is implemented using the rotatory strength in the velocity form and is therefore gauge-invariant, enables us to evaluate the contribution from each chromophoric unit and locally excited state to the CD and CPL spectra of the total system. The excitonic calculations suitably reproduce the full calculations of the system, as well as the experimental results. We demonstrate that the interactions between electric transition dipole moments of adjacent chromophoric units are crucial in the CD and CPL spectra of the multichromophoric systems, while the interactions between electric and magnetic transition dipole moments are not negligible. © 2018 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Brown, Andrew M.
2014-01-01
Numerical and analytical methods were developed to determine damage accumulation in specific engine components when speed variation is included. The Dither Life Ratio (DLR) was shown to be well over a factor of 2 for a specific example. The steady-state assumption was shown to be accurate for most turbopump cases, allowing rapid calculation of the DLR. If hot-fire speed data are unknown, a Monte Carlo method was developed that uses speed statistics for similar engines. Application of these techniques allows the analyst to reduce both uncertainty and excess conservatism, and high values of DLR could allow a previously unacceptable part to pass high-cycle fatigue (HCF) criteria without redesign. Given the benefit and ease of implementation, it is recommended that any finite-life turbomachine component analysis adopt these techniques. Probability values were calculated, compared, and evaluated for several industry-proposed methods for combining random and harmonic loads. Two new Excel macros were written to calculate the combined load for any specific probability level, and closed-form curve fits were generated for the widely used 3-sigma and 2-sigma probability levels. For the design of lightweight aerospace components, obtaining an accurate, reproducible, and statistically meaningful answer is critical.
NASA Astrophysics Data System (ADS)
Tarumi, Moto; Nakai, Hiromi
2018-05-01
This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.
CREME96 and Related Error Rate Prediction Methods
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic rectangular parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (linear energy transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics) codes. The Single Event Figure of Merit method was also revised to use the solar minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently a series of commercial codes was developed by TRAD (Test & Radiations), including the OMERE code, which calculates single event effects. There are other error rate prediction methods which use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.
NASA Langley developments in response calculations needed for failure and life prediction
NASA Technical Reports Server (NTRS)
Housner, Jerrold M.
1993-01-01
NASA Langley developments in response calculations needed for failure and life predictions are discussed. Topics covered include: structural failure analysis in concurrent engineering; accuracy of independent regional modeling demonstrated on a classical example; a functional interface method that accurately joins incompatible finite element models; the interface method for insertion of local detail modeling extended to a curved pressurized fuselage window panel; an interface concept for joining structural regions; motivation for coupled 2D-3D analysis; a compression panel with a discontinuous stiffener, its coupled 2D-3D model, and axial surface strains at the middle of the hat stiffener; use of adaptive refinement with multiple methods; adaptive mesh refinement; and studies on the effect of bow-type initial imperfections on the reliability of stiffened panels.
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method in which the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
Development of MCAERO wing design panel method with interactive graphics module
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
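FDPS itself is a C++ template framework, so the sketch below is not its API; it is a minimal Python stand-in for the user-supplied physics, the direct O(N²) gravity kernel that a simple, sequential, unoptimized program would contain before FDPS parallelizes the interaction calculation. The softening parameter and particle counts are hypothetical:

    import numpy as np

    def gravity_direct(pos, mass, eps=1e-3):
        """Direct-summation O(N^2) accelerations, Plummer-softened, G = 1."""
        dr = pos[None, :, :] - pos[:, None, :]        # r_j - r_i, shape (N, N, 3)
        inv_r3 = ((dr**2).sum(axis=-1) + eps**2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                 # exclude self-interaction
        return (dr * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)

    rng = np.random.default_rng(1)
    n = 256
    pos = rng.standard_normal((n, 3))                 # hypothetical positions
    mass = np.full(n, 1.0 / n)                        # equal-mass particles
    print(gravity_direct(pos, mass).shape)            # (256, 3)

A framework like the one described replaces this all-pairs loop with tree or neighbor-list algorithms and handles the domain decomposition and particle exchange around it.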
Electron tunneling in proteins program.
Hagras, Muhammad A; Stuchebrukhov, Alexei A
2016-06-05
We developed a unique integrated software package (called the Electron Tunneling in Proteins Program, or ETP) which provides an environment with capabilities such as tunneling current calculation, semi-empirical quantum mechanical calculation, and molecular modeling simulation for the calculation and analysis of electron transfer reactions in proteins. The ETP program is developed as a cross-platform client-server program in which all calculations are conducted on the server side, while the client terminal displays the resulting outputs in the different supported representations. The ETP program is integrated with a set of well-known computational software packages including Gaussian, BALLVIEW, Dowser, pKip, and APBS. In addition, the ETP program supports various visualization methods for the tunneling calculation results that assist in a more comprehensive understanding of the tunneling process. © 2016 Wiley Periodicals, Inc.
Calculation of recoil implantation profiles using known range statistics
NASA Technical Reports Server (NTRS)
Fung, C. D.; Avila, R. E.
1985-01-01
A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first-order recoils, considering the projected-range straggle and lateral straggle of the recoils but neglecting the lateral straggle of the projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation, and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggle. From this discussion, a range of surface layer thicknesses for which the method applies is recommended.
Sensitivity analysis of discrete structural systems: A survey
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.
1984-01-01
Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
NASA Astrophysics Data System (ADS)
Tikhomirov, Georgy; Bahdanovich, Rynat; Pham, Phu
2017-09-01
Precise calculation of the energy release in a nuclear reactor is necessary to obtain the correct spatial power distribution and to predict the characteristics of burned nuclear fuel. In this work, a previously developed method for calculating the contribution of neutron-capture reactions (the capture component) to the effective energy release in a reactor core is discussed. The method was improved and applied to different models of the VVER-1000 reactor developed for the MCU 5 and MCNP 4 computer codes. Different models of an equivalent cell and a fuel assembly at the beginning of the fuel cycle were calculated. These models differ in geometry, fuel enrichment, and the presence of burnable absorbers. It is shown that the capture component depends on the fuel enrichment and on the presence of burnable absorbers; its value varies for different types of hot fuel assemblies from 3.35% to 3.85% of the effective energy release. The average capture-component contribution to the effective energy release for typical serial fresh fuel of the VVER-1000 is 3.5%, which is about 7 MeV per fission. The method will be used in the future to estimate the dependence of the capture energy on fuel density, burn-up, etc.
Uranium phase diagram from first principles
NASA Astrophysics Data System (ADS)
Yanilkin, Alexey; Kruglov, Ivan; Migdal, Kirill; Oganov, Artem; Pokatashkin, Pavel; Sergeev, Oleg
2017-06-01
The work is devoted to the investigation of the uranium phase diagram up to a pressure of 1 TPa and a temperature of 15 kK based on density functional theory. First, a comparison of pseudopotential and full-potential calculations is carried out for different uranium phases. In the second step, the phase diagram at zero temperature is investigated by means of the USPEX code and pseudopotential calculations. Stable and metastable structures with close energies are selected. In order to obtain the phase diagram at finite temperatures, a preliminary selection of stable phases is made by free energy calculations based on the small-displacement method. For the remaining candidates, accurate values of the free energy are obtained by means of the thermodynamic integration method (TIM). For this purpose, quantum molecular dynamics simulations are carried out at different volumes and temperatures. Interatomic potentials based on machine learning are developed in order to treat the large systems and long times required by the TIM. The potentials reproduce the free energy with an accuracy of 1-5 meV/atom, which is sufficient for the prediction of phase transitions. The equilibrium curves of the different phases are obtained from the free energies. The melting curve is calculated by a modified Z-method with the developed potential.
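The thermodynamic integration step can be illustrated compactly. Assuming ensemble averages of dU/dλ are already available from molecular dynamics at a set of coupling values (the numbers below are invented for illustration, not taken from the paper), the free energy difference follows by quadrature along the switching path:

    import numpy as np

    # Hypothetical ensemble averages <dU/dlambda> (eV/atom) from MD runs at
    # each coupling value, switching from the reference to the target system.
    lam = np.linspace(0.0, 1.0, 11)
    dudl = 0.3 - 0.5 * lam + 0.2 * lam**2

    # Free energy difference by trapezoidal quadrature along the path:
    # dF = integral from 0 to 1 of <dU/dlambda> dlambda
    delta_f = float(np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam)))
    print(f"Delta F = {delta_f:.4f} eV/atom")

The expensive part in practice is generating converged ⟨dU/dλ⟩ values at each point, which is where the machine-learned potentials mentioned above pay off.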
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages, and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have substantially improved in recent years, the need has emerged for a new, rigorous, robust, accurate, and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. This high-accuracy method is therefore proposed for efficiency calculations related to turbocompressors and their compression units, especially when they operate at high power levels, for example in jet engines and high-power plants.
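The paper's method rests on real-gas thermodynamics, which is not reproduced here. As a hedged illustration only, the sketch below evaluates the closed-form ideal-gas polytropic efficiency from the same inputs the method takes (suction and discharge pressures and temperatures); the function name and the numerical conditions are hypothetical:

    import math

    def polytropic_efficiency_ideal(p1, t1, p2, t2, gamma=1.4):
        """Polytropic compression efficiency for an ideal gas:
        eta_p = ((gamma - 1) / gamma) * ln(p2/p1) / ln(T2/T1)."""
        return (gamma - 1.0) / gamma * math.log(p2 / p1) / math.log(t2 / t1)

    # Hypothetical suction and discharge conditions (Pa, K)
    print(polytropic_efficiency_ideal(1.0e5, 288.0, 4.0e5, 470.0))  # ~0.81

For a real gas the simple logarithm ratio no longer holds, which is exactly why a rigorous path-based computation of the kind the paper proposes is needed.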
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1984-01-01
Methods for measuring the lateral forces, occurring as a result of asymmetric nozzle flow separation, are discussed. The effect of some parameters on the side load is explained. A new method was developed for calculation of the side load. The values calculated are compared with side load data of the J-2 engine. Results are used for predicting side loads of the space shuttle main engine.
Pseudopotential for ab initio calculations of uranium compounds
NASA Astrophysics Data System (ADS)
Smirnov, G. S.; Pisarev, V. V.; Stegailov, V. V.
2018-01-01
The density functional theory (DFT) is a research tool of the highest importance for electronic structure calculations. It is often the only affordable method for ab initio calculations of complex materials. The pseudopotential approach allows reducing the total number of electrons in the model that speeds up calculations. However, there is a lack of pseudopotentials for heavy elements suitable for condensed matter DFT models. In this work, we present a pseudopotential for uranium developed in the Goedecker-Teter-Hutter form. Its accuracy is illustrated using several molecular and solid-state calculations.
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure, the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
Bartolino, James R.
2007-01-01
A numerical flow model of the Spokane Valley-Rathdrum Prairie aquifer currently (2007) being developed requires the input of values for areally distributed recharge, a parameter that is often the most uncertain component of water budgets and ground-water flow models because it is virtually impossible to measure over large areas. Data from six active weather stations in and near the study area were used in four recharge-calculation techniques or approaches: the Langbein method, in which recharge is estimated on the basis of empirical data from other basins; a method developed by the U.S. Department of Agriculture (USDA), in which crop consumptive use and effective precipitation are first calculated and then subtracted from actual precipitation to yield an estimate of recharge; an approach developed as part of the Eastern Snake Plain Aquifer Model (ESPAM) Enhancement Project, in which recharge is calculated on the basis of precipitation-recharge relations from other basins; and an approach in which reference evapotranspiration is calculated by the Food and Agriculture Organization (FAO) Penman-Monteith equation, crop consumptive use is determined (using a single- or dual-coefficient approach), and recharge is calculated. Annual recharge calculated by the Langbein method for the six weather stations was 4 percent of annual mean precipitation, yielding the lowest values of the methods discussed in this report; however, the Langbein method can be applied only to annual time periods. Mean monthly recharge calculated by the USDA method ranged from 53 to 73 percent of mean monthly precipitation, and mean annual recharge ranged from 64 to 69 percent of mean annual precipitation. Separate mean monthly recharge calculations were made with the ESPAM method using initial input parameters to represent thin-soil, thick-soil, and lava-rock conditions. The lava-rock parameters yielded the highest recharge values and the thick-soil parameters the lowest. For thin-soil parameters, calculated monthly recharge ranged from 10 to 29 percent of mean monthly precipitation and annual recharge ranged from 16 to 23 percent of mean annual precipitation. For thick-soil parameters, calculated monthly recharge ranged from 1 to 5 percent of mean monthly precipitation and mean annual recharge ranged from 2 to 4 percent of mean annual precipitation. For lava-rock parameters, calculated mean monthly recharge ranged from 37 to 57 percent of mean monthly precipitation and mean annual recharge ranged from 45 to 52 percent of mean annual precipitation. Single-coefficient (crop coefficient) FAO Penman-Monteith mean monthly recharge values were calculated for Spokane Weather Service Office (WSO) Airport, the only station for which the necessary meteorological data were available. Grass-referenced values of mean monthly recharge ranged from 0 to 81 percent of mean monthly precipitation, and mean annual recharge was 21 percent of mean annual precipitation; alfalfa-referenced values of mean monthly recharge ranged from 0 to 85 percent of mean monthly precipitation, and mean annual recharge was 24 percent of mean annual precipitation. Single-coefficient FAO Penman-Monteith calculations yielded a mean monthly recharge of zero during the eight warmest and driest months of the year (March-October).
In order to refine the mean monthly recharge estimates, dual-coefficient (basal crop and soil evaporation coefficients) FAO Penman-Monteith dual-crop evapotranspiration and deep-percolation calculations were applied to daily values from the Spokane WSO Airport for January 1990 through December 2005. The resultant monthly totals display a temporal variability that is absent from the mean monthly values and demonstrate that the daily amount and timing of precipitation dramatically affect calculated recharge. The dual-coefficient FAO Penman-Monteith calculations were made for the remaining five stations using wind-speed values for Spokane WSO Airport and other assumptions regarding
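The dual-coefficient calculation described above is, at its core, a daily soil-water balance. The following minimal sketch is a one-bucket simplification, not the FAO-56 procedure used in the report; the coefficients, storage capacity, and input values are all hypothetical. It shows how daily precipitation, reference ET, and a root-zone store combine to produce deep percolation, i.e., recharge:

    def daily_recharge(precip, et0, kcb=0.4, ke=0.2, storage_max=50.0):
        """One-bucket daily water balance (all depths in mm).
        Dual-coefficient crop ET: ETc = (Kcb + Ke) * ET0.
        Water beyond the root-zone capacity percolates to recharge."""
        storage = 0.0
        recharge = []
        for p, et in zip(precip, et0):
            etc = (kcb + ke) * et
            storage = max(storage + p - etc, 0.0)
            deep = max(storage - storage_max, 0.0)   # deep percolation
            storage -= deep
            recharge.append(deep)
        return recharge

    # Hypothetical week of daily precipitation and reference ET (mm)
    print(daily_recharge([0, 12, 30, 45, 0, 2, 25], [3, 2, 1, 1, 4, 4, 2]))

Because percolation occurs only when the store overflows, the daily amount and timing of precipitation control the result, which is the temporal variability the report's monthly totals display.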
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
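The weight-compensation idea at the heart of such biasing can be sketched in a few lines. The code below is a hedged, simplified illustration, not the implementation from the paper: sites in high-power regions are stored with lower probability, and each stored site's weight is divided by that probability so the expected weighted source is unchanged:

    import random

    random.seed(0)

    def sample_fission_sites(sites, region_power, target_per_region):
        """Store fission sites with probability inversely related to the
        region's power and divide each stored site's weight by that
        probability, leaving the expected weighted source unbiased.
        'sites' is a list of (region, weight) pairs; all inputs are
        hypothetical stand-ins for quantities tracked by a real code."""
        stored = []
        for region, weight in sites:
            p = min(1.0, target_per_region / region_power[region])
            if random.random() < p:
                stored.append((region, weight / p))   # weight compensation
        return stored

    power = {"core": 8.0, "reflector": 0.5}            # relative region powers
    sites = [("core", 1.0)] * 8 + [("reflector", 1.0)]
    print(sample_fission_sites(sites, power, target_per_region=2.0))

Thinning with probability p while scaling weights by 1/p preserves the expected source in each region, which is the sense in which the density is biased but the solution is not.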
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
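Stripped of the aerodynamics, one iteration cycle reduces to linear algebra on the derivative matrix. The sketch below is a generic stand-in, with hypothetical matrix sizes and synthetic data, and with potential standing in for pressure: solve for the geometry perturbation in the least-squares sense, then extrapolate the potential linearly from the baseline solution:

    import numpy as np

    def design_iteration(d_matrix, phi_base, phi_target):
        """One cycle of the perturbation design scheme (illustrative only):
        solve D @ dg ~= phi_target - phi_base in the least-squares sense,
        then linearly extrapolate the potential on the perturbed geometry."""
        residual = phi_target - phi_base
        dg, *_ = np.linalg.lstsq(d_matrix, residual, rcond=None)
        phi_new = phi_base + d_matrix @ dg          # linear extrapolation
        return dg, phi_new

    rng = np.random.default_rng(2)
    D = rng.standard_normal((40, 10))    # 40 control points, 10 geometry params
    phi0 = rng.standard_normal(40)
    target = phi0 + D @ (0.1 * np.ones(10))
    dg, phi1 = design_iteration(D, phi0, target)
    print(np.round(dg, 3))               # recovers ~0.1 for each parameter

The economy of the real method comes from computing the derivative matrix once and reusing it every cycle, exactly as the abstract describes.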
The first principle calculation of two-dimensional Dirac materials
NASA Astrophysics Data System (ADS)
Lu, Jin
2017-12-01
As integrated devices become increasingly small, the semiconductor industry, since the last century, has faced the enormous challenge of sustaining Moore's law. Developments in computation, communication, and automatic control have raised expectations for new materials in semiconductor technology and science. Beyond silicon devices, the search for alternative materials with outstanding electronic properties has always been a focus of research. Since the discovery of graphene, research on two-dimensional Dirac materials has shown new vitality. This essay reviews developments in the calculation of the mobility of 2D materials and introduces some of the approximation methods used in first-principles calculations.
Recycling of car tires by means of Waterjet technologies
NASA Astrophysics Data System (ADS)
Holka, Henryk; Jarzyna, Tomasz
2017-03-01
An increasing number of used car tires poses a threat to the environment, and they therefore need to be recycled. In this work, a decomposition method that involves applying a stream of water at very high pressure (up to 600 MPa) is presented. The method is based on the authors' own patent from 2010, and the results come from two years of tests and calculations. This study includes many diagrams, images, and calculations that were used to develop the discussed method, which is competitive with those currently in use.
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding trapping in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, J.R.; Minor, J.E.; Mehta, K.C.
1975-06-01
In order to evaluate the ability of critical facilities at the Nevada Test Site to withstand the possible damaging effects of extreme winds and tornadoes, parameters for the effects of tornadoes and extreme winds and structural design criteria for the design and evaluation of structures were developed. The meteorological investigations conducted are summarized, and techniques used for developing the combined tornado and extreme wind risk model are discussed. The guidelines for structural design include methods for calculating pressure distributions on walls and roofs of structures and methods for accommodating impact loads from wind-driven missiles. Calculations for determining the design loads for an example structure are included. (LCL)
Channel flow analysis. [velocity distribution throughout blade flow field
NASA Technical Reports Server (NTRS)
Katsanis, T.
1973-01-01
The design of a proper blade profile requires calculation of the blade-row flow field in order to determine the velocities on the blade surfaces. The theory underlying several methods used for this calculation is presented, and the associated computer programs that were developed are discussed.
Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation
NASA Technical Reports Server (NTRS)
Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)
2000-01-01
The Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification included manual digitization of synthetic images using both Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
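The back end of such a pipeline is easy to sketch with standard tools. The following Python fragment is an illustration, not the authors' algorithm; it assumes particle centroids and solid areas have already been extracted from the image. It builds the Voronoi diagram with scipy, measures each bounded cell with the shoelace formula, and reports a local void ratio per particle:

    import numpy as np
    from scipy.spatial import Voronoi

    def polygon_area(xy):
        """Shoelace formula; vertices sorted by angle (Voronoi cells are convex)."""
        c = xy.mean(axis=0)
        order = np.argsort(np.arctan2(xy[:, 1] - c[1], xy[:, 0] - c[0]))
        x, y = xy[order, 0], xy[order, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    def local_void_ratios(centroids, particle_areas):
        """Local void ratio per particle: (cell area - solid area) / solid area.
        Unbounded cells on the hull are skipped."""
        vor = Voronoi(centroids)
        ratios = {}
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if not region or -1 in region:        # open cell on the boundary
                continue
            cell = polygon_area(vor.vertices[region])
            ratios[i] = (cell - particle_areas[i]) / particle_areas[i]
        return ratios

    rng = np.random.default_rng(3)
    pts = rng.uniform(0.0, 100.0, size=(60, 2))   # hypothetical centroids
    areas = np.full(60, 20.0)                     # hypothetical solid areas
    print(list(local_void_ratios(pts, areas).items())[:5])

Each Voronoi cell supplies the well-defined tributary area around a particle, which is exactly the polygon-generation criterion the abstract credits the method with.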
Quasi solution of radiation transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
There is uncertainty in experimental data as well as in the input data of theoretical calculations. The neutron distribution is obtained from a variational principle that takes into account both theoretical and experimental data, in order to increase the accuracy and speed of neutronic calculations. The neutron imbalance in mesh cells and the discrepancy between experimentally measured and calculated functionals of the neutron distribution are simultaneously minimized. A fast-working and simply programmed iteration method is developed to minimize the objective functional. The method can be used in core monitoring and control systems for (a) power distribution calculations, (b) in- and ex-core detector calibration, (c) correction of macroscopic cross sections or isotope distributions by experimental data, and (d) core and detector diagnostics.
Turbulent boundary layer on the surface of a sea geophysical antenna
NASA Astrophysics Data System (ADS)
Smol'Yakov, A. V.
2010-11-01
A theory is constructed that makes it possible to calculate the initial parameters necessary for calculating hydrodynamic (turbulent) noise, which interferes with the operation of sea geophysical antennas. Algorithms are created for calculating the mean-velocity profile and velocity defect, the displacement thickness, momentum thickness, and friction resistance in a turbulent boundary layer on a cylinder in axial flow. Results of calculations using the developed theory are compared to experimental data. As the diameter of the cylinder tends to infinity, all relations of the theory pass to the known relations for the boundary layer on a flat plate. The developed theory represents the initial stage of creating a method to calculate the hydrodynamic noise that interferes with the operation of sea geophysical antennas.
Aerodynamics Characteristics of Multi-Element Airfoils at -90 Degrees Incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.; Schmitz, Fredric H. (Technical Monitor)
1994-01-01
A previously developed method has been applied to accurately calculate the viscous flow about airfoils normal to the free-stream flow. This method has special application to the analysis of tilt rotor aircraft in the evaluation of download. In particular, the flow about an XV-15 airfoil with and without deflected leading and trailing edge flaps at -90 degrees incidence is evaluated. The multi-element aspect of the method provides for the evaluation of slotted flap configurations, which may lead to decreased drag. The method solves for turbulent flow at flight Reynolds numbers. The flow about the XV-15 airfoil with and without flap deflections has been calculated and compared with experimental data at a Reynolds number of one million. The comparison between the calculated and measured pressure distributions is very good, thereby verifying the method. An aerodynamic evaluation of multi-element airfoils will be conducted to determine airfoil/flap configurations for reduced airfoil drag. Comparisons between the calculated lift, drag, and pitching moment on the airfoil and the airfoil surface pressure will also be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashjaee, M.; Roomina, M.R.; Ghafouri-Azar, R.
1993-05-01
Two computational methods for calculating hourly, daily, and monthly average values of direct, diffuse, and global solar radiation on horizontal collectors are presented in this article for locations with different latitudes, altitudes, and atmospheric conditions in Iran. These methods were developed using two independent sets of measured data from the Iranian Meteorological Organization (IMO) for two cities in Iran (Tehran and Isfahan), covering 14 years of measurement for Tehran and 4 years for Isfahan. Comparison of the calculated monthly average global solar radiation for Tehran and Isfahan with the measured IMO data indicates good agreement between them. The developed methods were then extended to another location (the city of Bandar-Abbas), where measured data are not available but the work of Daneshyar predicts the monthly global radiation. A maximum discrepancy of 7% between the developed models and the work of Daneshyar was observed.
Pilot-in-the-Loop CFD Method Development
2015-10-31
Progress Report (CDRL A001) under Contract N00014-14-C-0020, for the reporting period August 1, 2015 to October 30, 2015. For example, ship airwakes are calculated using CFD solutions without the presence of the helicopter main rotor. The gusts from the turbulent
Performance Test Data Analysis of Scintillation Cameras
NASA Astrophysics Data System (ADS)
Demirkaya, Omer; Mazrou, Refaat Al
2007-10-01
In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association (NEMA) NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy. They have been extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient, and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study have been partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.
An inverse method for the aerodynamic design of three-dimensional aircraft engine nacelles
NASA Technical Reports Server (NTRS)
Bell, R. A.; Cedar, R. D.
1991-01-01
A fast, efficient and user friendly inverse design system for 3-D nacelles was developed. The system is a product of a 2-D inverse design method originally developed at NASA-Langley and the CFL3D analysis code which was also developed at NASA-Langley and modified for nacelle analysis. The design system uses a predictor/corrector design approach in which an analysis code is used to calculate the flow field for an initial geometry, the geometry is then modified based on the difference between the calculated and target pressures. A detailed discussion of the design method, the process of linking it to the modified CFL3D solver and its extension to 3-D is presented. This is followed by a number of examples of the use of the design system for the design of both axisymmetric and 3-D nacelles.
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
NASA Astrophysics Data System (ADS)
Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.
2018-03-01
The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.
Cosmic strings and the microwave sky. I - Anisotropy from moving strings
NASA Technical Reports Server (NTRS)
Stebbins, Albert
1988-01-01
A method is developed for calculating the component of the microwave anisotropy around cosmic string loops due to their rapidly changing gravitational fields. The method is valid only for impact parameters from the string much smaller than the horizon size at the time the photon passes the string. The method makes it possible to calculate the temperature pattern around arbitrary string configurations numerically in terms of one-dimensional integrals. This method is applied to the temperature jump across a string, confirming and extending previous work. It is also applied to cusps and kinks on strings, and to determining the temperature pattern far from a string loop. The temperature pattern around a few loop configurations is explicitly calculated. Comparisons with the work of Brandenberger et al. (1986) indicate that they have overestimated the MBR anisotropy from gravitational radiation emitted from loops.
Theoretical research program to study chemical reactions in AOTV bow shock tubes
NASA Technical Reports Server (NTRS)
Taylor, P.
1986-01-01
Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (method of statistical trials), as applied to the processing of meteor observations, was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data (the equatorial coordinates of the meteor head in a sequence of TV frames) in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematic parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter, the geocentric velocity of a meteor, which has the greatest influence on the precision of the calculated elements of the meteor's heliocentric orbit. In the classical approach, the velocity vector is calculated in two stages: first, its direction is found as the cross product of the pole vectors of the meteor-trajectory great circles determined from the two observing sites; then the absolute value of the velocity is calculated independently from each site, and one of the values is selected, for some reason, as final. In the present method, we propose to obtain the statistical distribution of the velocity magnitude as the intersection of the two distributions corresponding to the velocity values obtained from the different sites. We expect such an approach to substantially increase the precision of the meteor velocity calculation and to remove subjective inaccuracies.
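The proposal can be sketched numerically. In the hedged toy example below, the track geometry is deliberately collapsed to a fixed range and all noise levels and rates are invented: per-frame angular positions are perturbed according to their error distributions, a speed distribution is built for each of the two stations, and the two are combined by multiplying the histogram densities, i.e., the "intersection" of the distributions:

    import numpy as np

    rng = np.random.default_rng(4)

    def velocity_samples(angles, times, noise_deg, range_km, n=4000):
        """Monte Carlo speed samples from one station: perturb the measured
        angular track, fit the angular rate, convert to linear speed (all
        geometry collapsed into a fixed range -- a deliberate simplification)."""
        out = np.empty(n)
        for k in range(n):
            noisy = angles + rng.normal(scale=noise_deg, size=angles.size)
            rate_deg_s = np.polyfit(times, noisy, 1)[0]
            out[k] = np.radians(rate_deg_s) * range_km
        return out

    t = np.linspace(0.0, 0.2, 9)                       # 9 TV frames, hypothetical
    v1 = velocity_samples(t * 10.0, t, 0.02, 120.0)    # station 1
    v2 = velocity_samples(t * 10.0, t, 0.03, 118.0)    # station 2

    # "Intersection" of the two distributions: product of histogram densities.
    bins = np.linspace(min(v1.min(), v2.min()), max(v1.max(), v2.max()), 200)
    p1, _ = np.histogram(v1, bins=bins, density=True)
    p2, _ = np.histogram(v2, bins=bins, density=True)
    joint = p1 * p2
    joint /= joint.sum() * np.diff(bins)[0]            # renormalize the density
    centers = 0.5 * (bins[1:] + bins[:-1])
    print("combined velocity:", (centers * joint * np.diff(bins)[0]).sum())

Multiplying the densities weights each candidate speed by its likelihood at both stations, rather than discarding one station's measurement as the classical approach does.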
Development and Validation of a New Fallout Transport Method Using Variable Spectral Winds
NASA Astrophysics Data System (ADS)
Hopkins, Arthur Thomas
A new method has been developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds, to the ground. The connection of particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud. Further validation was performed by comparing computed and actual trajectories of a high explosive dust cloud (DIRECT COURSE). Using an error propagation formula, it was determined that uncertainties in spectral wind components produce less than four percent of the total dose rate variance. In summary, this research demonstrated the feasibility of using spectral coefficients for fallout transport calculations, developed a two-step smearing model to treat variable winds, and showed that uncertainties in spectral winds do not contribute significantly to the error in computed dose rate.
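The first step, locating the hotline, amounts to integrating particle trajectories through a height-dependent wind field. The sketch below is a hedged illustration: the analytic wind function stands in for winds evaluated from spectral coefficients, and all magnitudes are invented. Each fall speed yields a landing point, and connecting the landing points over the particle-size classes traces the hotline:

    import numpy as np

    def wind(z_km):
        """Stand-in for winds evaluated from spectral coefficients:
        (u, v) components in km/h as a function of altitude."""
        return np.array([40.0 + 2.0 * z_km, 10.0 - 0.5 * z_km])

    def landing_point(x0_km, z0_km, fall_speed_kmh, dt_h=0.01):
        """Integrate a particle's horizontal drift from release altitude to
        the ground while it settles at a constant fall speed."""
        pos = np.array(x0_km, dtype=float)
        z = z0_km
        while z > 0.0:
            pos += wind(z) * dt_h       # horizontal advection at this level
            z -= fall_speed_kmh * dt_h  # settling
        return pos

    for v_fall in (5.0, 10.0, 20.0):    # km/h, hypothetical particle classes
        print(v_fall, landing_point([0.0, 0.0], 12.0, v_fall))

Slower-falling particles sample more of the wind profile and land farther downwind, which is why the landing points fan out into a line rather than a single spot.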
Orbital dependent functionals: An atom projector augmented wave method implementation
NASA Astrophysics Data System (ADS)
Xu, Xiao
This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These orbital-dependent exchange-correlation functionals have recently received renewed attention as a means to improve the physical representation of electron interactions within electronic structure calculations. In particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. In this thesis, the Fock exchange term is used as a simple well-defined example of an orbital-dependent functional. The projector augmented wave (PAW) method developed by P. E. Blöchl has proven to be accurate and efficient for electronic structure calculations with local and semi-local functionals because of its accurate evaluation of interaction integrals by controlling multipole moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core-electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane wave basis and implemented the PWPAW-KLI code, which will appear in future publications.
Nielsen, Jens E.; Gunner, M. R.; Bertrand García-Moreno, E.
2012-01-01
The pKa Cooperative http://www.pkacoop.org was organized to advance development of accurate and useful computational methods for structure-based calculation of pKa values and electrostatic energy in proteins. The Cooperative brings together laboratories with expertise and interest in theoretical, computational and experimental studies of protein electrostatics. To improve structure-based energy calculations it is necessary to better understand the physical character and molecular determinants of electrostatic effects. The Cooperative thus intends to foment experimental research into fundamental aspects of proteins that depend on electrostatic interactions. It will maintain a depository for experimental data useful for critical assessment of methods for structure-based electrostatics calculations. To help guide the development of computational methods the Cooperative will organize blind prediction exercises. As a first step, computational laboratories were invited to reproduce an unpublished set of experimental pKa values of acidic and basic residues introduced in the interior of staphylococcal nuclease by site-directed mutagenesis. The pKa values of these groups are unique and challenging to simulate owing to the large magnitude of their shifts relative to normal pKa values in water. Many computational methods were tested in this 1st Blind Prediction Challenge and critical assessment exercise. A workshop was organized in the Telluride Science Research Center to assess objectively the performance of many computational methods tested on this one extensive dataset. This volume of PROTEINS: Structure, Function, and Bioinformatics introduces the pKa Cooperative, presents reports submitted by participants in the blind prediction challenge, and highlights some of the problems in structure-based calculations identified during this exercise. PMID:22002877
Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W
2017-01-01
Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. 
The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
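The prediction step is simple enough to state in code. The sketch below is illustrative only; it uses the widely cited Pace-style chromophore coefficients rather than the model-compound values measured in this study, and the residue counts are hypothetical. It computes the calculated unfolded-protein coefficient at 280 nm and applies the study's 1.05 native-state factor:

    def predicted_extinction_280(n_trp, n_tyr, n_cystine, native_factor=1.05):
        """Molar extinction coefficient at 280 nm (M^-1 cm^-1).
        Unfolded-protein value from chromophore counts using Pace-style
        coefficients (Trp 5500, Tyr 1490, cystine 125), then scaled by
        1.05 for the native protein, per the study's finding."""
        unfolded = 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine
        return native_factor * unfolded

    # Hypothetical antibody-like composition
    eps = predicted_extinction_280(n_trp=22, n_tyr=50, n_cystine=16)
    print(f"predicted epsilon(280 nm) = {eps:.0f} M^-1 cm^-1")

With the Beer-Lambert law, concentration then follows directly as c = A / (epsilon * l) for a measured absorbance A and path length l.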
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Calculation of parameters of combined frame and roof bolting
NASA Astrophysics Data System (ADS)
Ivanov, S. I.; Titov, N. V.; Privalov, A. A.; Trunov, I. T.; Sarychev, V. I.
2017-10-01
The paper presents a method for calculating combined frame and roof bolting. Recommendations are given for ensuring the joint operation of roof bolting with steel support frames. Graphs were developed for determining standard rock movement, as well as for defining the proof load on the yielding support.
New applications of renormalization group methods in nuclear physics.
Furnstahl, R J; Hebeler, K
2013-12-01
We review recent developments in the use of renormalization group (RG) methods in low-energy nuclear physics. These advances include enhanced RG technology, particularly for three-nucleon forces, which greatly extends the reach and accuracy of microscopic calculations. We discuss new results for the nucleonic equation of state with applications to astrophysical systems such as neutron stars, new calculations of the structure and reactions of finite nuclei, and new explorations of correlations in nuclear systems.
NASA Astrophysics Data System (ADS)
Belov, A. V.; Kurkov, Andrei S.; Chikolini, A. V.
1989-02-01
A method was developed for calculating the effective cutoff wavelength, the mode-spot size, and the chromatic dispersion of single-mode fiber waveguides with a depressed cladding from the refractive-index profile measured at the preform stage. The results of such calculations are shown to agree with the results of measurements of these quantities.
Advanced Computational Methods for Monte Carlo Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R
2012-10-09
One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett acceptance ratio (BAR) and enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
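Since the abstract leans on BAR, a minimal self-consistent BAR estimator may help orient readers; this is my sketch with synthetic Gaussian work values (the SGLD reweighting described above is not reproduced here):

```python
import numpy as np
from scipy.special import expit   # expit(-x) = 1/(1 + exp(x)), the Fermi function
from scipy.optimize import brentq

def bar_free_energy(w_f, w_r, beta=1.0):
    """Bennett acceptance ratio (equal sample sizes): find dF such that
    sum_i fermi(beta*(w_f[i] - dF)) == sum_j fermi(beta*(w_r[j] + dF)),
    where w_f are forward work values (0 -> 1) and w_r reverse works (1 -> 0)."""
    def imbalance(dF):
        return expit(-beta * (w_f - dF)).sum() - expit(-beta * (w_r + dF)).sum()
    return brentq(imbalance, -50.0, 50.0)

# Synthetic check: Gaussian work distributions consistent with dF = 2 kT
rng = np.random.default_rng(0)
var = 2.0
w_f = rng.normal(2.0 + var / 2, np.sqrt(var), 20000)
w_r = rng.normal(-2.0 + var / 2, np.sqrt(var), 20000)
print(bar_free_energy(w_f, w_r))   # ~2.0
```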
Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres
NASA Astrophysics Data System (ADS)
Liu, Quanhua; Weng, Fuzhong
2006-12-01
The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century, in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to its tremendous demand on computational resources. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks in part to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity of the forward and Jacobian computation codes is very useful for operational applications and for consistency between the forward and adjoint calculations in satellite data assimilation.
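The doubling principle itself is compact in its simplest scalar form; the sketch below is a single-angle toy version of my own (the ADA thermal-source analytics are not reproduced):

```python
def double_layer(r, t, n_doublings):
    """Start from the reflection r and transmission t of an optically thin
    homogeneous layer and repeatedly combine the layer with itself; doubling
    n times yields a layer 2**n times thicker. The 1/(1 - r*r) factor sums
    the geometric series of inter-layer bounces."""
    for _ in range(n_doublings):
        denom = 1.0 - r * r
        r, t = r + t * r * t / denom, t * t / denom
    return r, t

# Thin starting layer: single-scattering estimates at optical depth 2**-10
tau0, omega = 2.0**-10, 0.9           # initial optical depth, single-scatter albedo
r0 = omega * tau0 / 2                 # toy isotropic backscatter fraction
t0 = 1.0 - tau0 + omega * tau0 / 2    # direct plus forward-scattered
print(double_layer(r0, t0, 10))       # reflectance/transmittance at optical depth 1
```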
Pair production in low-energy collisions of uranium nuclei beyond the monopole approximation
NASA Astrophysics Data System (ADS)
Maltsev, I. A.; Shabaev, V. M.; Tupitsyn, I. I.; Kozhedub, Y. S.; Plunien, G.; Stöhlker, Th.
2017-10-01
A method for calculating electron-positron pair production in low-energy heavy-ion collisions beyond the monopole approximation is presented. The method is based on numerical solution of the time-dependent Dirac equation with the full two-center potential. The one-electron wave functions are expanded in a finite basis set constructed on a two-dimensional spatial grid. Employing the developed approach, the probabilities of bound-free pair production are calculated for collisions of bare uranium nuclei at energies near the Coulomb barrier. The obtained results are compared with the corresponding values calculated in the monopole approximation.
Design Criteria for Low Profile Flange Calculations
NASA Technical Reports Server (NTRS)
Leimbach, K. R.
1973-01-01
An analytical method and a design procedure to develop flanged separable pipe connectors are discussed. A previously established algorithm is the basis for calculating low profile flanges. The characteristics and advantages of the low profile flange are analyzed. The use of aluminum, titanium, and plastics for flange materials is described. Mathematical models are developed to show the mechanical properties of various flange configurations. A computer program for determining the structural stability of the flanges is described.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1976-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.
Nuclear shape evolution based on microscopic level densities
Ward, D. E.; Carlsson, B. G.; Døssing, T.; ...
2017-02-27
Here, by combining microscopically calculated level densities with the Metropolis walk method, we develop a consistent framework for treating the energy and angular-momentum dependence of the nuclear shape evolution in the fission process. For each nucleus under consideration, the level density is calculated microscopically for each of more than five million shapes with a recently developed combinatorial method. The method employs the same single-particle levels as those used for the extraction of the pairing and shell contributions to the macroscopic-microscopic deformation-energy surface. Containing no new parameters, the treatment is suitable for elucidating the energy dependence of the dynamics of warm nuclei on pairing and shell effects. It is illustrated for the fission fragment mass distribution for several uranium and plutonium isotopes of particular interest.
Integrated optics to improve resolution on multiple configuration
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei
2015-04-01
To reveal how structure can improve imaging resolution, further technical requirements are proposed in some areas concerning the function and influence of multiple-configuration development. To break through the diffraction limit, smart structures are recommended as the most efficient and economical method, used to improve system performance, especially signal-to-noise ratio and resolution. Integrated optics were considered in the selection, combined with a typical multiple configuration, using the method of simulation experiment. This methodology can change the traditional design concept and broaden the application space. Our calculations using the multiple matrix transfer method, together with the correlative algorithm and full calculations, show the expected beam shaping through the system; in particular, the experimental results support our argument and will be reported in the presentation.
A method for determining spiral-bevel gear tooth geometry for finite element analysis
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.; Litvin, Faydor L.
1991-01-01
An analytical method was developed to determine gear tooth surface coordinates of face-milled spiral bevel gears. The method uses the basic gear design parameters in conjunction with the kinematical aspects of spiral bevel gear manufacturing machinery. A computer program, SURFACE, was developed. The computer program calculates the surface coordinates and outputs 3-D model data that can be used for finite element analysis. Development of the modeling method and an example case are presented. This analysis method could also find application for gear inspection and near-net-shape gear forging die design.
Using MCBEND for neutron or gamma-ray deterministic calculations
NASA Astrophysics Data System (ADS)
Dobson, Geoff; Bird, Adam; Tollit, Brendan; Smith, Paul
2017-09-01
MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with splitting/Russian roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and the new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
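The acceleration scheme that such an importance map drives can be summarized in a few lines; a generic splitting/Russian-roulette sketch (my illustration, not MCBEND's implementation):

```python
import random

def apply_weight_window(weight, w_low, survival_ratio=3.0):
    """Return the list of particle weights after a weight-window check.
    Below the window: Russian roulette (kill, or promote to the survival
    weight so the game is unbiased). Well above it: split into comparable
    tracks. w_low comes from the importance (adjoint) map for this cell."""
    w_survive = w_low * survival_ratio
    if weight < w_low:
        if random.random() < weight / w_survive:
            return [w_survive]        # survives with boosted weight
        return []                     # killed
    n = int(weight / w_survive)
    if n >= 2:
        return [weight / n] * n       # split
    return [weight]                   # inside the window: unchanged
```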
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and growth of calculation time with increasing size of the continuous optimization problem remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on large-sized ELD problems (Economic Load Dispatch problems in electric power supply scheduling) are also described as a practical industrial application.
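The fix-and-release idea can be illustrated without a full interior point implementation; below is a projected-gradient sketch for a box-constrained QP in which variables are pinned to ("fixed" at) and freed from ("released" from) their bounds automatically by the projection (my cartoon of the strategy, not the proposed algorithm):

```python
import numpy as np

def box_qp_projected_gradient(Q, c, lo, hi, alpha=0.01, iters=5000):
    """Minimize 0.5 x^T Q x + c^T x subject to lo <= x <= hi.
    The clip step pins variables that push past a bound and releases
    them as soon as the gradient points back into the box."""
    x = 0.5 * (lo + hi)
    for _ in range(iters):
        g = Q @ x + c
        x = np.clip(x - alpha * g, lo, hi)   # projection fixes/releases bounds
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-4.0, 1.0])
print(box_qp_projected_gradient(Q, c, lo=np.zeros(2), hi=np.ones(2)))  # -> [1, 0]
```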
Drama in Dynamics: Boom, Splash, and Speed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Netzloff, Heather Marie
2004-12-19
The full nature of chemistry and physics cannot be captured by static calculations alone. Dynamics calculations allow the simulation of time-dependent phenomena. This facilitates both comparisons with experimental data and the prediction and interpretation of details not easily obtainable from experiments. Simulations thus provide a direct link between theory and experiment, between microscopic details of a system and macroscopic observed properties. Many types of dynamics calculations exist. The most important distinction between the methods and the decision of which method to use can be described in terms of the size and type of molecule/reaction under consideration and the type and level of accuracy required in the final properties of interest. These considerations must be balanced with available computational codes and resources, as simulations to mimic "real life" may require many time steps. As indicated in the title, the theme of this thesis is dynamics. The goal is to utilize the best type of dynamics for the system under study while trying to perform dynamics in the most accurate way possible. For a quantum chemist, this involves some level of first-principles calculations by default. Very accurate calculations of small molecules and molecular systems are now possible with relatively high-level ab initio quantum chemistry. For example, a quantum chemical potential energy surface (PES) can be developed "on the fly" with dynamic reaction path (DRP) methods. In this way a classical trajectory is developed without prior knowledge of the PES. In order to treat solvation processes and the condensed phase, large numbers of molecules are required, especially in predicting bulk behavior. The Effective Fragment Potential (EFP) method for solvation decreases the cost of a fully quantum mechanical calculation by dividing a chemical system into an ab initio region that contains the solute and an "effective fragment" region that contains the remaining solvent molecules. But, despite the reduced cost relative to fully QM calculations, the EFP method, due to its complex, QM-based potential, does require more computation time than simple interaction potentials, especially when the method is used for large-scale molecular dynamics simulations. Thus, the EFP method was parallelized to facilitate these calculations within the quantum chemistry program GAMESS. The EFP method provides relative energies and structures that are in excellent agreement with the analogous fully quantum results for small water clusters. The ability of the method to predict bulk water properties with comparable accuracy is assessed by performing EFP molecular dynamics simulations. Molecular dynamics simulations can provide properties that are directly comparable with experimental results, for example radial distribution functions. The molecular PES is a fundamental starting point for chemical reaction dynamics. Many methods can be used to obtain a PES; for example, assuming a global functional form for the PES or, as mentioned above, performing "on-the-fly" dynamics with ab initio or semi-empirical calculations at every molecular configuration. But as the size of the system grows, using electronic structure theory to build a PES and, therefore, study reaction dynamics becomes virtually impossible. The program Grow builds a PES as an interpolation of ab initio data; the goal is to attempt to produce an accurate PES with the smallest number of ab initio calculations.
The Grow-GAMESS interface was developed to obtain the ab initio data from GAMESS. Classical or quantum dynamics can be performed on the resulting surface. The interface includes the novel capability to build multi-reference PESs; these types of calculations are applicable to problems ranging from atmospheric chemistry to photochemical reaction mechanisms in organic and inorganic chemistry to fundamental biological phenomena such as photosynthesis.
NASA Astrophysics Data System (ADS)
Arce, Julio Cesar
1992-01-01
This work focuses on time-dependent quantum theory and methods for the study of the spectra and dynamics of atomic and molecular systems. Specifically, we have addressed the following two problems: (i) Development of a time-dependent spectral method for the construction of spectra of simple quantum systems. This includes the calculation of eigenenergies, the construction of bound and continuum eigenfunctions, and the calculation of photo cross-sections. Computational applications include the quadrupole photoabsorption spectra and dissociation cross-sections of molecular hydrogen from various vibrational states in its ground electronic potential-energy curve. This method is seen to provide an advantageous alternative, both from the computational and the conceptual point of view, to existing standard methods. (ii) Explicit time-dependent formulation of photoabsorption processes. Analytical solutions of the time-dependent Schrödinger equation are constructed and employed for the calculation of probability densities, momentum distributions, fluxes, transition rates, expectation values and correlation functions. These quantities are seen to establish the link between the dynamics and the calculated, or measured, spectra and cross-sections, and to clarify the dynamical nature of the excitation, transition and ejection processes. Numerical calculations on atomic and molecular hydrogen corroborate and complement the previous results, allowing the identification of different regimes during the photoabsorption process.
Inelastic transport theory from first principles: Methodology and application to nanoscale devices
NASA Astrophysics Data System (ADS)
Frederiksen, Thomas; Paulsson, Magnus; Brandbyge, Mads; Jauho, Antti-Pekka
2007-05-01
We describe a first-principles method for calculating electronic structure, vibrational modes and frequencies, electron-phonon couplings, and inelastic electron transport properties of an atomic-scale device bridging two metallic contacts under nonequilibrium conditions. The method extends the density-functional codes SIESTA and TRANSIESTA that use atomic basis sets. The inelastic conductance characteristics are calculated using the nonequilibrium Green’s function formalism, and the electron-phonon interaction is addressed with perturbation theory up to the level of the self-consistent Born approximation. While these calculations often are computationally demanding, we show how they can be approximated by a simple and efficient lowest order expansion. Our method also addresses effects of energy dissipation and local heating of the junction via detailed calculations of the power flow. We demonstrate the developed procedures by considering inelastic transport through atomic gold wires of various lengths, thereby extending the results presented in Frederiksen [Phys. Rev. Lett. 93, 256601 (2004)]. To illustrate that the method applies more generally to molecular devices, we also calculate the inelastic current through different hydrocarbon molecules between gold electrodes. Both for the wires and the molecules our theory is in quantitative agreement with experiments, and characterizes the system-specific mode selectivity and local heating.
Density Functional O(N) Calculations
NASA Astrophysics Data System (ADS)
Ordejón, Pablo
1998-03-01
We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling (P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)). The method uses arbitrarily flexible and complete Atomic Orbitals (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code (D. Sanchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)), in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant-temperature and constant-pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.
NASA Astrophysics Data System (ADS)
Suponenkovs, Artjoms; Glazs, Aleksandrs; Platkajis, Ardis
2017-03-01
The aim of this paper is to describe new methods for analyzing knee articular cartilage degeneration. The most important aspects of magnetic resonance imaging, knee joint anatomy, the stages of knee osteoarthritis, medical image segmentation and relaxation time calculation are reviewed. This paper proposes new methods for relaxation time calculation and medical image segmentation. The experimental part describes the analysis of changes in articular cartilage relaxation times. It contains experimental results showing the codependence between relaxation times and organic structure. These experimental results and the proposed methods can be helpful for early osteoarthritis diagnostics.
NASA Astrophysics Data System (ADS)
Dementjev, Aleksandr S.; Jovaisa, A.; Silko, Galina; Ciegis, Raimondas
2005-11-01
Based on the developed efficient numerical methods for calculating the propagation of light beams, the alternative methods for measuring the beam radius and propagation ratio proposed in the international standard ISO 11146 are analysed. Specific calculations of the alternative beam propagation ratios M_i^2 performed for a number of test beams with a complicated spatial structure showed that the correlation coefficients c_i used in the international standard do not establish a universal one-to-one relation between the alternative propagation ratios M_i^2 and the invariant propagation ratios M_σ^2 found by the method of moments.
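For orientation, the invariant quantities rest on second moments of the measured intensity profile; a sketch of the d4σ width computation for a sampled 1D profile (my illustration; the full ISO 11146 procedure also prescribes background handling and a caustic fit along the propagation axis, from which M_σ^2 follows):

```python
import numpy as np

def d4sigma_width(x, intensity):
    """Second-moment (d4sigma) beam width of a 1D intensity profile:
    w = 4 * sqrt(<(x - <x>)^2>), intensity-weighted."""
    p = intensity / intensity.sum()
    mean = (x * p).sum()
    var = ((x - mean) ** 2 * p).sum()
    return 4.0 * np.sqrt(var)

x = np.linspace(-5, 5, 2001)
gauss = np.exp(-2 * x**2 / 1.0**2)      # TEM00 intensity with waist radius w0 = 1
print(d4sigma_width(x, gauss))          # -> ~2.0, i.e. the 2*w0 diameter
```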
Relative Displacement Method for Track-Structure Interaction
Ramos, Óscar Ramón; Pantaleón, Marcos J.
2014-01-01
The track-structure interaction effects are usually analysed with conventional FEM programs, where it is difficult to implement the complex track-structure connection behaviour, which is nonlinear, elastic-plastic and depends on the vertical load. The authors developed an alternative analysis method, which they call the relative displacement method. It is based on the calculation of deformation states in single DOF element models that satisfy the boundary conditions. For its solution, an iterative optimisation algorithm is used. This method can be implemented in any programming language or analysis software. A comparison with ABAQUS calculations shows a very good result correlation and compliance with the standard's specifications. PMID:24634610
Ramjan, Lucie M; Stewart, Lyn; Salamonson, Yenna; Morris, Maureen M; Armstrong, Lyn; Sanchez, Paula; Flannery, Liz
2014-03-01
It remains a grave concern that many nursing students within tertiary institutions continue to experience difficulties with achieving medication calculation competency. In addition, universities have a moral responsibility to prepare proficient clinicians for graduate practice. This requires risk management strategies to reduce adverse medication errors post registration. The aim was to identify strategies and potential predictors that may assist nurse academics to tailor their drug calculation teaching and assessment methods. This project builds on previous experience and explores students' perceptions of newly implemented interventions designed to increase confidence and competence in medication calculation. This mixed-methods study surveyed students (n=405) enrolled in their final semester of study at a large, metropolitan university in Sydney, Australia. Tailored, contextualised interventions included online practice quizzes, simulated medication calculation scenarios developed for clinical practice classes, contextualised 'pen and paper' tests, visually enhanced didactic remediation and 'hands-on' contextualised workshops. Surveys were administered to students to determine their perceptions of the interventions and to identify whether these interventions assisted with calculation competence. Test scores were analysed using SPSS v. 20 for correlations between students' perceptions and actual performance. Qualitative open-ended survey questions were analysed manually and thematically. The study reinforced that nursing students preferred a 'hands-on', contextualised approach to learning that was 'authentic' and aligned with clinical practice. Our interventions assisted with supporting students' learning and improving calculation confidence. Qualitative data provided further insight into students' awareness of their calculation errors and preferred learning styles. Some of the strongest predictors of numeracy skill performance included (1) being an international student, (2) completion of an online practice quiz with a score of 59% or above and (3) students' self-reported confidence. A paradigm shift from traditional testing methods to the implementation of intensive, contextualised numeracy teaching and assessment within tertiary institutions will enhance learning and promote best teaching practices.
NASA Astrophysics Data System (ADS)
Kyllmar, K.; Mårtensson, K.; Johnsson, H.
2005-03-01
A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha⁻¹ yr⁻¹, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
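The bookkeeping behind the field-NLC table is a plain conditional average over the simulated years and crop sequences; schematically (hypothetical record fields, my sketch):

```python
from collections import defaultdict

def leaching_coefficients(simulations):
    """Average simulated annual N leaching (kg/ha/yr) over the climate series
    for each (region, soil, crop, following_crop, fertilisation) combination,
    i.e. the field-NLC lookup table."""
    sums, counts = defaultdict(float), defaultdict(int)
    for rec in simulations:   # rec: dict with the keys below plus 'n_leaching'
        key = (rec['region'], rec['soil'], rec['crop'],
               rec['following_crop'], rec['fertilisation'])
        sums[key] += rec['n_leaching']
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def field_leaching(fields, nlc):
    """Area-weighted field-scale leaching from matching NLCs.
    fields: iterable of (combination_key, area_ha) pairs."""
    total_area = sum(a for _, a in fields)
    return sum(a * nlc[key] for key, a in fields) / total_area
```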
New method for estimating arterial pulse wave velocity at single site.
Abdessalem, Khaled Ben; Flaud, Patrice; Zobaidi, Samir
2018-01-01
The clinical importance of measuring local pulse wave velocity (PWV) has encouraged researchers to develop several local methods to estimate it. In this work, we propose a new method, the sum-of-squares method [Formula: see text], which allows estimation of PWV from simultaneous measurements of blood pressure (P) and arterial diameter (D) at a single location. Pulse waveforms generated by (1) two-dimensional (2D) fluid-structure interaction (FSI) simulation in a compliant tube, (2) a one-dimensional (1D) model of the 55 larger human systemic arteries and (3) experimental data were used to validate the new formula and evaluate several classical methods. The performance of the proposed method was assessed by comparing its results to the theoretical PWV calculated from the parameters of the model and/or to PWV estimated by several classical methods. It was found that values of PWV obtained by the developed method [Formula: see text] are in good agreement with theoretical ones and with those calculated by the PA-loop and D²P-loop methods. The difference between the PWV calculated by [Formula: see text] and the PA-loop does not exceed 1% when data from simulations are used, 3% when in vitro data are used and 5% when in vivo data are used. In addition, this study suggests that PWV estimated from arterial pressure and diameter waveforms is correct, while methods that require flow rate (Q) and velocity (U) overestimate or underestimate PWV.
NASA Technical Reports Server (NTRS)
Walitt, L.
1982-01-01
The VANS successive approximation numerical method was extended to the computation of three-dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic, centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was generated on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed, locally mass-flux-conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple-point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass-flux-conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.
Semiclassical Calculation of Reaction Rate Constants for Homolytical Dissociations
NASA Technical Reports Server (NTRS)
Cardelino, Beatriz H.
2002-01-01
There is growing interest in extending organometallic chemical vapor deposition (OMCVD) to III-V materials that exhibit large thermal decomposition at their optimum growth temperature, such as indium nitride. The group III nitrides are candidate materials for light-emitting diodes and semiconductor lasers operating into the blue and ultraviolet regions. To overcome decomposition of the deposited compound, the reaction must be conducted at high pressures, which causes problems of uniformity. Microgravity may provide the venue for maintaining conditions of laminar flow under high pressure. Since the selection of optimized parameters becomes crucial when performing experiments in microgravity, efforts are presently geared to the development of computational OMCVD models that will couple the reactor fluid dynamics with its chemical kinetics. In the present study, we developed a method to calculate reaction rate constants for the homolytic dissociation of III-V compounds for modeling OMCVD. The method is validated by comparing calculations with experimental reaction rate constants.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic databases and calculation of local equilibrium conditions can be time-intensive. The extrapolation methods, which are derived from Taylor expansion, can provide approximate results with high computational efficiency, and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrates the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.
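The underlying extrapolation is ordinary Taylor expansion of the driving force about a reference composition; schematically, for a multicomponent alloy (a generic sketch, not the paper's exact expressions or the M-slope step):

```python
import numpy as np

def extrapolate_driving_force(c, c0, g0, grad, hess=None):
    """Taylor extrapolation of a driving force about composition c0.
    c, c0 and grad are composition-space vectors; hess is the Hessian.
    First order:  g0 + grad . (c - c0)
    Second order: adds 0.5 * (c - c0) . hess . (c - c0), the
    higher-precision variant discussed above."""
    dc = np.asarray(c) - np.asarray(c0)
    g = g0 + np.dot(grad, dc)
    if hess is not None:
        g += 0.5 * dc @ np.asarray(hess) @ dc
    return g
```

The payoff is that the expensive thermodynamic-database evaluation is needed only at the reference state; every grid point then reuses the stored derivatives.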
Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M
2016-10-01
To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses.
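The contrast between the two calculations can be made concrete with a toy example; the 0.511 g ethanol per g glucose theoretical yield is standard, while the by-product bookkeeping below is deliberately schematic (the paper's full indirect method involves more determinations):

```python
THEORETICAL_YIELD = 0.511   # g ethanol per g fermentable sugar (glucose basis)

def direct_efficiency(ethanol_g, sugar_consumed_g):
    """Traditional method: measured ethanol over the theoretical maximum.
    Any error in either measurement propagates directly into the result."""
    return ethanol_g / (sugar_consumed_g * THEORETICAL_YIELD)

def indirect_efficiency(sugar_consumed_g, byproduct_sugar_equiv_g):
    """Indirect method (schematic): sugar diverted to metabolic by-products
    (glycerol, organic acids, biomass, ...) is quantified and subtracted;
    the remainder is credited to ethanol formation."""
    return (sugar_consumed_g - byproduct_sugar_equiv_g) / sugar_consumed_g

print(direct_efficiency(38.0, 100.0))    # ~0.744
print(indirect_efficiency(100.0, 25.0))  # 0.75
```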
Weeks, Keith W; Clochesy, John M; Hutton, B Meriel; Moseley, Laurie
2013-03-01
Advancing the art and science of education practice requires a robust evaluation of the relationship between students' exposure to learning and assessment environments and the development of their cognitive competence (knowing that and why) and functional competence (know-how and skills). Healthcare education translation research requires specific education technology assessments and evaluations that consist of quantitative analyses of empirical data and qualitative evaluations of the lived student experience of the education journey and schemata construction (Weeks et al., 2013a). This paper focuses on the outcomes of UK PhD and USA post-doctorate experimental research. We evaluated the relationship between exposure to traditional didactic methods of education, prototypes of an authentic medication dosage calculation problem-solving (MDC-PS) environment and nursing students' construction of conceptual and calculation competence in medication dosage calculation problem-solving skills. Empirical outcomes from both the UK and USA programmes of research identified highly significant differences in the construction of conceptual and calculation competence in MDC-PS following exposure to the authentic learning environment compared with exposure to traditional didactic transmission methods of education (p < 0.001). This research highlighted that for many students exposure to authentic learning environments is an essential first step in the development of conceptual and calculation competence and relevant schemata construction (internal representations of the relationship between the features of authentic dosage problems and calculation functions), and that authentic environments support all cognitive (learning) styles in mathematics more effectively than traditional didactic methods of education. Functional competence evaluations are addressed in Macdonald et al. (2013) and Weeks et al. (2013e).
Strong-coupling Bose polarons out of equilibrium: Dynamical renormalization-group approach
NASA Astrophysics Data System (ADS)
Grusdt, Fabian; Seetharam, Kushal; Shchadilova, Yulia; Demler, Eugene
2018-03-01
When a mobile impurity interacts with a surrounding bath of bosons, it forms a polaron. Numerous methods have been developed to calculate how the energy and the effective mass of the polaron are renormalized by the medium for equilibrium situations. Here, we address the much less studied nonequilibrium regime and investigate how polarons form dynamically in time. To this end, we develop a time-dependent renormalization-group approach which allows calculations of all dynamical properties of the system and takes into account the effects of quantum fluctuations in the polaron cloud. We apply this method to calculate trajectories of polarons following a sudden quench of the impurity-boson interaction strength, revealing how the polaronic cloud around the impurity forms in time. Such trajectories provide additional information about the polaron's properties which are challenging to extract directly from the spectral function measured experimentally using ultracold atoms. At strong couplings, our calculations predict the appearance of trajectories where the impurity wavers back at intermediate times as a result of quantum fluctuations. Our method is applicable to a broader class of nonequilibrium problems. As a check, we also apply it to calculate the spectral function and find good agreement with experimental results. At very strong couplings, we predict that quantum fluctuations lead to the appearance of a dark continuum with strongly suppressed spectral weight at low energies. While our calculations start from an effective Fröhlich Hamiltonian describing impurities in a three-dimensional Bose-Einstein condensate, we also calculate the effects of additional terms in the Hamiltonian beyond the Fröhlich paradigm. We demonstrate that the main effect of these additional terms on the attractive side of a Feshbach resonance is to renormalize the coupling strength of the effective Fröhlich model.
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficient are homogenised over several fibres, but the scattering phase function of a single one is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model concurs with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
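The dielectric model feeding the single-fibre Mie step has a standard closed form; a sketch (a generic Drude-Lorentz dielectric function; parameter values for the actual fibre material would come from fits to optical data):

```python
import numpy as np

def drude_lorentz_epsilon(omega, eps_inf, omega_p, gamma_d, oscillators):
    """Complex dielectric function: a Drude free-carrier term plus a sum of
    Lorentz oscillators (strength f_j, resonance w_j, damping g_j).
    All frequencies must share the same units."""
    eps = eps_inf - omega_p**2 / (omega * (omega + 1j * gamma_d))
    for f_j, w_j, g_j in oscillators:
        eps += f_j * w_j**2 / (w_j**2 - omega**2 - 1j * g_j * omega)
    return eps

def refractive_index(eps):
    """Complex index n + ik, the input to the Mie single-fibre calculation."""
    return np.sqrt(eps)
```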
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
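The central recipe, turning an adjoint (importance) solution into mesh-based weight windows and a biased source spectrum, can be sketched generically (a CADIS-style illustration of my own, not the MCNP5/PARTISN implementation):

```python
import numpy as np

def weight_windows_from_adjoint(adjoint_flux, source_cells, ratio=5.0):
    """Mesh weight-window lower bounds, w_low ~ const / adjoint flux,
    normalized so source particles of weight 1 are born inside the window.
    High-importance regions get low weights (more, lighter particles)."""
    phi = np.asarray(adjoint_flux, dtype=float)
    w_center = phi[source_cells].mean() / phi
    return 2.0 * w_center / (1.0 + ratio)   # window spans [w_low, ratio * w_low]

def biased_source_spectrum(spectrum, adjoint_at_source):
    """Energy biasing: sample source energies in proportion to q(E) * phi_adj(E)."""
    q = np.asarray(spectrum) * np.asarray(adjoint_at_source)
    return q / q.sum()
```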
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weights calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; this may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Higuchi Dimension of Digital Images
Ahammer, Helmut
2011-01-01
There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation or Fourier analysis can be employed. However, there appear to be some limitations. It is not possible to calculate only the fractal dimension of an irregular region of interest in an image or to perform the calculations in a particular direction along a line on an arbitrary angle through the image. The calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction-dependent as well as a direction-independent analysis. Actual values for the fractal dimensions are reliable and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
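Higuchi's algorithm itself is short; a sketch (the standard formulation for a 1D signal, as would be applied to the image-derived signals described above):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1D signal: slope of log L(k)
    versus log(1/k), where L(k) is the mean normalized curve length
    over subsampled series with lag k and offsets m = 0..k-1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # x[m], x[m+k], x[m+2k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            # Higuchi (1988) normalization, then divide by k once more
            lengths.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope

# Sanity check: white noise should give a dimension near 2
rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(1000)))
```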
A method for estimating mount isolations of powertrain mounting systems
NASA Astrophysics Data System (ADS)
Qin, Wu; Shangguan, Wen-Bin; Luo, Guohai; Xie, Zhengchao
2018-07-01
A method for calculating the isolation ratios of mounts in a powertrain mounting system (PMS) is proposed, treating the powertrain as a rigid body and using identified powertrain excitation forces together with the measured IPI (input point inertance) of mounting points on the body side. With measured accelerations of mounts on the powertrain and body sides of one vehicle (Vehicle A), the excitation forces of the powertrain are first identified using a conventional method. Another vehicle (Vehicle B) has the same powertrain as Vehicle A, but a different body and mount configuration. The accelerations of mounts on the powertrain side of the PMS of Vehicle B are calculated using the powertrain excitation forces identified from Vehicle A. The identified forces are validated by comparing the calculated and measured accelerations of mounts on the powertrain side for Vehicle B. A method for calculating the acceleration of a mounting point on the body side of Vehicle B is then presented, using the identified powertrain excitation forces and the measured IPI at the connecting point between the car body and the mount. Using the calculated accelerations of mounts on the powertrain and body sides in different directions, the isolation ratios of a mount are estimated. The isolation ratios are validated experimentally, which verifies the proposed methods. The developed method is beneficial for optimizing mount stiffness to meet mount isolation requirements before a prototype is built.
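Once accelerations on both sides of a mount are available, the per-direction isolation ratio is a one-line calculation; a sketch (assuming an RMS-based definition in dB; conventions vary):

```python
import numpy as np

def isolation_ratio_db(acc_powertrain_side, acc_body_side):
    """Mount isolation ratio in dB from accelerations measured (or, as above,
    calculated) on the active and passive sides of a mount, per direction.
    Larger values indicate better isolation."""
    a_active = np.sqrt(np.mean(np.square(acc_powertrain_side)))   # RMS, powertrain side
    a_passive = np.sqrt(np.mean(np.square(acc_body_side)))        # RMS, body side
    return 20.0 * np.log10(a_active / a_passive)
```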
USDA-ARS's Scientific Manuscript database
Kinetic energy of water droplets has a substantial effect on development of a soil surface seal and infiltration rate of bare soil. Methods for measuring sprinkler droplet size and velocity needed to calculate droplet kinetic energy have been developed and tested over the past 50 years, each with ad...
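The quantity those droplet measurements feed is straightforward to compute; a sketch (water density taken as 1000 kg/m³; the per-litre normalization is one common convention):

```python
import numpy as np

RHO_WATER = 1000.0   # kg/m^3

def droplet_kinetic_energy(d_mm, v_ms):
    """Kinetic energy (J) of one droplet of diameter d_mm falling at v_ms."""
    d = d_mm / 1000.0
    mass = RHO_WATER * np.pi * d**3 / 6.0
    return 0.5 * mass * v_ms**2

def specific_ke(d_mm, v_ms, counts):
    """Kinetic energy per litre of applied water (J/L) from a measured
    droplet size/velocity distribution (diameters mm, velocities m/s)."""
    d = np.asarray(d_mm) / 1000.0
    vol = np.pi * d**3 / 6.0 * np.asarray(counts)       # m^3 of water per bin
    ke = 0.5 * RHO_WATER * vol * np.asarray(v_ms)**2    # J per bin
    return ke.sum() / (vol.sum() * 1000.0)              # J per litre
```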
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With a little extra computation cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing some quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
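The travel-time ingredient can be illustrated with a generic first-arrival eikonal solver; the sketch below uses Godunov upwind updates with Gauss-Seidel sweeps (a fast-sweeping scheme in the same spirit as, but not identical to, the dissertation's upwind algorithm):

```python
import numpy as np

def fast_sweep_traveltimes(slowness, h, src, n_sweeps=4):
    """First-arrival travel times on a 2D grid from a point source,
    solving |grad t| = slowness with Godunov upwind updates."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:           # alternate sweep directions
            for i in ys:
                for j in xs:
                    if (i, j) == src:
                        continue
                    a = min(t[i - 1, j] if i > 0 else np.inf,
                            t[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(t[i, j - 1] if j > 0 else np.inf,
                            t[i, j + 1] if j < nx - 1 else np.inf)
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:     # causality: one upwind neighbor only
                        cand = min(a, b) + f
                    else:                   # two-directional Godunov update
                        cand = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    t[i, j] = min(t[i, j], cand)
    return t

s = np.ones((50, 50))                       # uniform slowness: t = distance
t = fast_sweep_traveltimes(s, h=1.0, src=(25, 25))
```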
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387
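Among the analytical equations reviewed, Plank-type forms are the archetype; a sketch for a slab thawed from both sides (shape factors P = 1/2 and R = 1/8 for an infinite slab; the property values below are merely illustrative, and real foods need empirical corrections):

```python
def plank_thawing_time(rho, latent_heat, t_medium, t_thaw, thickness, h, k,
                       P=0.5, R=0.125):
    """Plank-type thawing-time estimate (s) for a slab:
    t = rho*L/(T_medium - T_thaw) * (P*d/h + R*d^2/k),
    with k the thermal conductivity of the thawed layer, h the surface
    heat transfer coefficient and d the slab thickness."""
    return rho * latent_heat / (t_medium - t_thaw) * \
           (P * thickness / h + R * thickness**2 / k)

# Illustrative numbers: 5 cm slab, rho = 1050 kg/m3, L = 250 kJ/kg,
# medium at 15 C, initial freezing point -1 C, h = 20 W/m2K, k = 0.5 W/mK
t = plank_thawing_time(1050, 2.5e5, 15, -1, 0.05, 20, 0.5)
print(t / 3600, "h")   # on the order of 8-9 hours
```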
Calculation of far-field scattering from nonspherical particles using a geometrical optics approach
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1991-01-01
A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.
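The diffractive component for a particle symmetric about the optic axis is classically approximated by Fraunhofer diffraction from the particle's projected circular outline; a sketch of that standard component (my illustration, not the author's full geometrical-optics code):

```python
import numpy as np
from scipy.special import j1   # Bessel function of the first kind, order 1

def airy_intensity(theta_rad, radius_um, wavelength_um):
    """Fraunhofer (Airy) diffraction intensity of a circular aperture,
    normalized so I(0) = 1; this is the forward-lobe term combined with
    reflection and refraction in a geometrical-optics scattering model."""
    ka = 2 * np.pi * radius_um / wavelength_um
    x = ka * np.sin(np.asarray(theta_rad, dtype=float))
    x = np.where(x == 0, 1e-12, x)          # avoid 0/0 exactly forward
    return (2 * j1(x) / x) ** 2
```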
Development of a neural network technique for KSTAR Thomson scattering diagnostics.
Lee, Seung Hun; Lee, J H; Yamada, I; Park, Jae Sun
2016-11-01
Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. To control tokamak plasmas in real time, it is essential to measure the plasma parameters in situ. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
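A regressor of the size quoted above (one hidden layer of eight nodes, on the order of 10³ training cycles) is easy to prototype; a sketch with synthetic stand-in data (the real inputs would be the measured scattering signals, which are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in data: 4 spectral-channel signals -> a temperature proxy
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(5000, 4))
y = X @ np.array([1.5, -0.7, 0.9, 0.3]) + 0.05 * rng.standard_normal(5000)

net = MLPRegressor(hidden_layer_sizes=(8,),   # eight hidden nodes, as in the study
                   max_iter=1000,             # ~10^3 training cycles
                   activation='tanh', solver='adam', random_state=0)
net.fit(X[:4000], y[:4000])
print("held-out R^2:", net.score(X[4000:], y[4000:]))
```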
EVALUATION OF THE CARBON FOOTPRINT OF AN INNOVATIVE SEWER REHABILITATION METHOD
A benefit of trenchless methods touted by many practitioners when compared to open cut construction is lower carbon dioxide emissions. In an attempt to verify these claims, tools have been developed that calculate the environmental impact of traditional open cut methods and commo...
New method: calculation of magnification factor from an intracardiac marker.
Cha, S D; Incarvito, J; Maranhao, V
1983-01-01
In order to calculate a magnification factor (MF), an intracardiac marker (a pigtail catheter with markers) was evaluated using a new formula and correlated with the conventional grid method. By applying the Pythagorean theorem and trigonometry, a new formula was developed, which is (formula; see text). In an experimental study, the MF by the intracardiac markers was 0.71 +/- 0.15 (mean +/- SD) and that by the grid method was 0.72 +/- 0.15, with a correlation coefficient of 0.96. In the patient study, the MF by the intracardiac markers was 0.77 +/- 0.06 and that by the grid method was 0.77 +/- 0.05. We conclude that this new method is simple and its results are comparable to the conventional grid method at mid-chest level.
Average value of the shape and direction factor in the equation of refractive index
NASA Astrophysics Data System (ADS)
Zhang, Tao
2017-10-01
The theoretical calculation of refractive indices is of great significance for the development of new optical materials. The calculation method of refractive index, which was deduced from the electron-cloud-conductor model, contains the shape and direction factor 〈g〉. 〈g〉 affects the electromagnetic-induction energy absorbed by the electron clouds, thereby influencing the refractive indices. It has not been known how to calculate the 〈g〉 value of non-spherical electron clouds. In this paper, the 〈g〉 value is derived by imaginatively dividing the electron cloud into numerous small volume elements and then regrouping them. This paper proves that 〈g〉 = 2/3 when molecules' spatial orientations are randomly distributed. Calculations of the refractive indices of several substances validate this equation. This result will help promote the application of this calculation method of refractive index.
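The 2/3 value is what one expects from averaging a sin²-type projection factor over uniformly random orientations; a quick numerical check (my illustration only; the paper's 〈g〉 is defined through the electron-cloud-conductor model, which is not reproduced here):

```python
import numpy as np

# Draw uniformly distributed directions on the unit sphere by normalizing
# isotropic Gaussian vectors, then average 1 - z^2 = sin^2(theta).
rng = np.random.default_rng(0)
v = rng.standard_normal((1_000_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(np.mean(1.0 - v[:, 2] ** 2))   # -> ~0.6667, i.e. 2/3
```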
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³-10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Miksys, N; Xu, C; Beaulieu, L; Thomson, R M
2015-08-07
This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent-implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts in post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS, ranging from the AAPM-ESTRO-ABG TG-186 basic approach of assigning uniform-density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed-spot artifacts, and the image-based methods mitigate artifacts comparably to the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low-CT-number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower than for the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models that include calcifications are generally a few percent lower than for prostate-only models. Generally, metrics from any MAR method and any TAS that considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies of various permanent-implant brachytherapy treatments.
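The target metrics compared above (D90, V100, V150) follow standard dose-volume definitions and can be computed directly from a voxelized dose array. A minimal sketch, assuming equal-volume target voxels and a hypothetical prescription dose:

```python
import numpy as np

def dvh_metrics(dose, prescription):
    """D90: minimum dose received by the hottest 90% of the volume;
    V100/V150: fraction of volume receiving >= 100%/150% of prescription.
    Assumes `dose` holds one value per equal-volume target voxel."""
    d = np.sort(np.ravel(dose))[::-1]          # descending
    d90 = d[int(0.9 * d.size) - 1]             # dose covering 90% of volume
    v100 = np.mean(dose >= 1.0 * prescription)
    v150 = np.mean(dose >= 1.5 * prescription)
    return d90, v100, v150

# Hypothetical example with a 145 Gy prescription and invented voxel doses.
dose = np.random.default_rng(1).normal(160.0, 25.0, size=10_000)
print(dvh_metrics(dose, 145.0))
```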
NASA Astrophysics Data System (ADS)
Lee, Choonik; Jung, Jae Won; Pelletier, Christopher; Pyakuryal, Anil; Lamart, Stephanie; Kim, Jong Oh; Lee, Choonsik
2015-03-01
Organ dose estimation for retrospective epidemiological studies of late effects in radiotherapy patients involves two challenges: radiological images representing patient anatomy are not usually available for patient cohorts treated years ago, and efficient dose reconstruction methods for large-scale patient cohorts are not well established. In the current study, we developed methods to reconstruct organ doses for radiotherapy patients by using a series of computational human phantoms coupled with a commercial treatment planning system (TPS) and a radiotherapy-dedicated Monte Carlo transport code, and performed illustrative dose calculations. First, we developed methods to convert the anatomy and organ contours of the pediatric and adult hybrid computational phantom series to Digital Imaging and Communications in Medicine (DICOM)-image and DICOM-structure files, respectively. The resulting DICOM files were imported into a commercial TPS for simulating radiotherapy and calculating dose to in-field organs. The conversion process was validated by comparing electron densities relative to water and organ volumes between the hybrid phantoms and the DICOM files imported into the TPS, which agreed within 0.1% and 2%, respectively. Second, we developed a procedure to transfer DICOM-RT files generated from the TPS directly to a Monte Carlo transport code, x-ray Voxel Monte Carlo (XVMC), for more accurate dose calculations. Third, to illustrate the performance of the established methods, we simulated a whole-brain treatment for the 10-year-old male phantom and a prostate treatment for the adult male phantom. Radiation doses to selected organs were calculated using the TPS and XVMC and compared to each other. Organ average doses from the two methods matched within 7%, whereas maximum and minimum point doses differed by up to 45%. The dosimetry methods and procedures established in this study will be useful for reconstructing organ dose to support retrospective epidemiological studies of late effects in radiotherapy patients.
Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran
2014-03-01
Calculation of the burden of diseases and risk factors is crucial for setting priorities in health care systems. Nevertheless, reliable measurement of mortality rates is the main barrier to reaching this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of the NASBOD project, which is currently being conducted in Iran. In this study, we aim to calculate the incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. To estimate mortality rates, we first identify all possible data sources. Then, we calculate the incompleteness of child and adult mortality separately. For incompleteness of child mortality, we analyze summary birth history data using the maternal age cohort and maternal age period methods, and then combine the two methods using LOESS regression. However, the resulting estimates are not plausible for some provinces, so we use additional information from covariates such as the wealth index and years of schooling to make predictions for these provinces using a spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression, which covers both sampling and non-sampling errors within its uncertainty intervals. By comparing the resulting estimates with mortality rates from the DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, the Generalized Growth Balance and Synthetic Extinct Generation methods and a hybrid of the two are used. Afterwards, we combine the incompleteness estimates of the three methods using GPR and apply the result to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges to accurate measurement of mortality rates. The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of the health status of a population.
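The final smoothing step, Gaussian process regression with combined sampling and non-sampling error, can be sketched with a standard RBF-kernel GP. The kernel parameters, years, rates, and error variances below are illustrative placeholders, not values from the study:

```python
import numpy as np

def gp_posterior(x_train, y_train, noise_var, x_test, amp=1.0, length=5.0):
    """Posterior mean/variance of a GP with an RBF kernel; `noise_var` holds
    per-observation variances (sampling + non-sampling error combined)."""
    def k(a, b):
        return amp * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(x_train, x_train) + np.diag(noise_var)
    Ks = k(x_train, x_test)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y_train
    var = amp - np.einsum('ij,ij->j', Ks, sol)
    return mean, var

years = np.array([1995., 2000., 2005., 2010.])   # illustrative observation years
log_rate = np.array([3.6, 3.4, 3.1, 2.9])        # hypothetical log mortality rates
err = np.array([0.02, 0.02, 0.01, 0.01])         # combined error variances (assumed)
grid = np.arange(1995., 2013.)
mean, var = gp_posterior(years, log_rate, err, grid)  # yearly estimates + uncertainty
```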
Evaluation of various thrust calculation techniques on an F404 engine
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1990-01-01
In support of performance testing of the X-29A aircraft at NASA-Ames, various thrust calculation techniques were developed and evaluated for use on the F404-GE-400 engine. The engine was thrust calibrated at NASA-Lewis. Results from these tests were used to correct the manufacturer's in-flight thrust program to more accurately calculate thrust for the specific test engine. Data from these tests were also used to develop an independent, simplified thrust calculation technique for real-time thrust calculation. Comparisons were also made to thrust values predicted by the engine specification model. Results indicate uninstalled gross thrust accuracies on the order of 1 to 4 percent for the various in-flight thrust methods. The various thrust calculations are described and their usage, uncertainty, and measured accuracies are explained. In addition, the advantages of a real-time thrust algorithm for flight test use and the importance of an accurate thrust calculation to the aircraft performance analysis are described. Finally, actual data obtained from flight test are presented.
Chamberlain, Patricia; Snowden, Lonnie R; Padgett, Courtenay; Saldana, Lisa; Roles, Jennifer; Holmes, Lisa; Ward, Harriet; Soper, Jean; Reid, John; Landsverk, John
2011-01-01
In decisions to adopt and implement new practices or innovations in child welfare, costs are often a bottom-line consideration. The cost calculator, a method developed in England that can be used to calculate unit costs of core case work activities and associated administrative costs, is described as a potentially helpful tool for assisting child welfare administrators to evaluate the costs of current practices relative to their outcomes and could impact decisions about whether to implement new practices. The process by which the cost calculator is being adapted for use in US child welfare systems in two states is described and an illustration of using the method to compare two intervention approaches is provided.
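The unit-cost logic the cost calculator applies, time spent on core case-work activities priced at a loaded rate plus a share of administrative overhead, can be sketched as follows; the activity names, hours, rates, and overhead fraction are hypothetical:

```python
# Minimal sketch of a unit-cost calculation in the spirit of the cost
# calculator: unit cost = (case-worker time on an activity) x (hourly rate),
# plus a share of administrative overhead. All values below are hypothetical.
HOURLY_RATE = 42.0        # fully loaded case-worker cost per hour (assumed)
ADMIN_OVERHEAD = 0.15     # administrative costs as a fraction of direct cost (assumed)

activities = {             # hours of case work per child per process (assumed)
    "deciding_child_needs_care": 6.5,
    "care_planning": 4.0,
    "maintaining_the_placement": 10.0,
    "review": 3.5,
}

def unit_cost(hours: float) -> float:
    direct = hours * HOURLY_RATE
    return direct * (1.0 + ADMIN_OVERHEAD)

for name, h in activities.items():
    print(f"{name}: {unit_cost(h):.2f}")
print(f"total per child: {sum(unit_cost(h) for h in activities.values()):.2f}")
```

Comparing two intervention approaches then amounts to running the same tally with each approach's activity-time profile.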
NASA Technical Reports Server (NTRS)
Shu, J. Y.
1983-01-01
Two different singularity methods have been utilized to calculate the potential flow past a three-dimensional non-lifting body. Two separate FORTRAN computer programs were developed to implement these theoretical models, which will in the future allow inclusion of the fuselage effect in a pair of existing subcritical wing design computer programs. The first method uses higher-order axial singularity distributions to model axisymmetric bodies of revolution in an axial or inclined uniform potential flow. Insetting the singularity line away from the body for blunt noses and using cosine-type element distributions were applied to obtain optimal results. Excellent agreement, to five significant figures, with the exact pressure coefficient values was found for a series of ellipsoids at different angles of attack. Solutions obtained for other axisymmetric bodies compare well with available experimental data. The second method utilizes distributions of singularities on the body surface in the form of a discrete vortex lattice. This program is capable of modeling arbitrary three-dimensional non-lifting bodies. Much effort was devoted to finding the optimal method of calculating the tangential velocity on the body surface, extending techniques previously developed by other workers.
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper-atmosphere description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on the fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
NASA Astrophysics Data System (ADS)
Weinheimer, Oliver; Wielpütz, Mark O.; Konietzke, Philip; Heussel, Claus P.; Kauczor, Hans-Ulrich; Brochhausen, Christoph; Hollemann, David; Savage, Dasha; Galbán, Craig J.; Robinson, Terry E.
2017-02-01
Cystic fibrosis (CF) results in severe bronchiectasis in nearly all cases. Bronchiectasis is a disease in which parts of the airways are permanently dilated. The development and progression of bronchiectasis are not evenly distributed over the lungs; rather, individual functional units are affected differently. We developed a fully automated method for the precise calculation of lobe-based airway taper indices. Calculating taper indices requires some preparatory algorithms: the airway tree is segmented, skeletonized and transformed into a rooted acyclic graph, and this graph is used to label the airways. A modified version of the previously validated integral-based method (IBM) for airway geometry determination is then utilized. The rooted graph and the airway lumen and wall information are used to calculate the airway taper indices. Using a computer-generated phantom simulating 10 airway cross sections, we present results showing the high accuracy of the modified IBM. The new taper index calculation method was applied to 144 volumetric inspiratory low-dose MDCT scans acquired from 36 children with mild CF at four time points (baseline, 3 months, 1 year, 2 years). We found a moderate correlation with the visual lobar Brody bronchiectasis scores of three raters (r^2 = 0.36, p < 0.0001). The taper index has the potential to be a precise imaging biomarker, but further improvements are needed. In combination with other imaging biomarkers, taper index calculation can be an important tool for monitoring the progression of bronchiectasis and the individual treatment of patients.
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
NASA Astrophysics Data System (ADS)
Zhang, H.; Thurber, C.; Wang, W.; Roecker, S. W.
2008-12-01
We extended our recent development of double-difference seismic tomography [Zhang and Thurber, BSSA, 2003] to use station-pair residual differences in addition to event-pair residual differences. Tomography using station-pair residual differences is somewhat akin to teleseismic tomography, but with the sources contained within the model region. Synthetic tests show that the inversion using both event- and station-pair residual differences has advantages in terms of more accurately recovering higher-resolution structure in both the source and receiver regions. We used the Spherical-Earth Finite-Difference (SEFD) travel time calculation method in the tomographic system. The basic concept is the extension of a standard Cartesian FD travel time algorithm [Vidale, 1990] to the spherical case by developing a mesh in radius, co-latitude, and longitude, expressing the FD derivatives in a form appropriate to the spherical mesh, and constructing a "stencil" to calculate extrapolated travel times. The SEFD travel time calculation method handles the heterogeneity and sphericity of the Earth better than the simple Earth-flattening transformation and the "sphere-in-a-box" approach [Flanagan et al., 2007]. We applied this method to the Sichuan, China data set for the period 2001 to 2004. The Vp, Vs and Vp/Vs models show a clear contrast across the Longmenshan Fault, where the 2008 M8 Wenchuan earthquake initiated.
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Taiji, Makoto; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro
1996-09-01
We have developed a parallel, pipelined special-purpose computer for N-body simulations, MD-GRAPE (for "GRAvity PipE"). In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE is specialized hardware that calculates these interactions. It is used with a general-purpose front-end computer that performs all calculations other than the force calculation. MD-GRAPE is the first parallel GRAPE that can calculate an arbitrary central force. A force different from a pure 1/r potential is necessary for N-body simulations with periodic boundary conditions using the Ewald or particle-particle/particle-mesh (P^3M) method. MD-GRAPE accelerates the calculation of the particle-particle force for these algorithms. An MD-GRAPE board has four MD chips and its peak performance is 4.2 GFLOPS. On an MD-GRAPE board, a cosmological N-body simulation takes 600(N/10^6)^(3/2) s per step for the Ewald method, where N is the number of particles, and would take 240(N/10^6) s per step for the P^3M method, in a uniform distribution of particles.
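The quoted scalings make per-step cost easy to estimate for any particle count; a small sketch evaluating both formulas:

```python
# Per-step wall-clock estimates from the scalings quoted above (one MD-GRAPE
# board): Ewald 600*(N/1e6)^(3/2) s, P3M 240*(N/1e6) s.
def step_time_ewald(n: float) -> float:
    return 600.0 * (n / 1e6) ** 1.5

def step_time_p3m(n: float) -> float:
    return 240.0 * (n / 1e6)

for n in (1e5, 1e6, 1e7):
    print(f"N={n:.0e}: Ewald {step_time_ewald(n):8.1f} s, P3M {step_time_p3m(n):8.1f} s")
```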
The development of android - based children's nutritional status monitoring system
NASA Astrophysics Data System (ADS)
Suryanto, Agus; Paramita, Octavianti; Pribadi, Feddy Setio
2017-03-01
The calculation of BMI (Body Mass Index) is one of the methods used to assess a person's nutritional status, but the BMI calculation is not yet widely understood and known by the public. In addition, the monthly progress of a child's nutritional development is important to monitor. Therefore, an Android-based application to determine the nutritional status of children was developed in this study. The study restricted the calculation to children aged 0-60 months. The application can run on a smartphone or tablet PC with the Android operating system, which many people own and use. The aim of this study was to produce an Android app to calculate the nutritional status of children. This study followed the Research and Development (R&D) approach, with a design based on experimental studies. The steps in this study included analyzing the Body Mass Index (BMI) formula and developing the initial application, including the design and implementation of the user interface using the Eclipse software. This study resulted in an Android application that can be used to calculate the nutritional status of children aged 0-60 months. The MSE of the error analysis against the BMI formula was 0, and the MAPE was 0%. This shows that there is no error in the application's calculation relative to the BMI formula; smaller MSE and MAPE values indicate higher accuracy.
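The underlying computation is the standard BMI formula, weight divided by height squared; classifying a child aged 0-60 months then requires comparison against age- and sex-specific growth references (e.g., WHO z-scores), which this sketch only stubs out with invented cutoffs:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float, age_months: int, sex: str) -> str:
    """Placeholder: a real app would look up age- and sex-specific growth
    references (e.g., WHO standards for 0-60 months); the fixed cutoffs here
    are invented, purely for illustration."""
    if bmi_value < 14.0:
        return "possibly undernourished"
    if bmi_value > 18.0:
        return "possibly overweight"
    return "likely normal"

print(classify(bmi(11.0, 0.82), age_months=18, sex="M"))  # BMI ~16.4
```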
Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrix, C.E.
Comparisons between RELAP4/MOD6, Update 4 code-calculated and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.
NASA Astrophysics Data System (ADS)
Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.
2017-11-01
We discuss world-class Russian achievements in the theory of radiation transfer, taking into account polarization in natural media, and the scientific potential currently developing in Russia, which provides an adequate methodological basis for theoretical and computational research on radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, in which deterministic and stochastic methods can be combined.
Apnea Detection Method for Cheyne-Stokes Respiration Analysis on Newborn
NASA Astrophysics Data System (ADS)
Niimi, Taiga; Itoh, Yushi; Natori, Michiya; Aoki, Yoshimitsu
2013-04-01
Cheyne-Stokes respiration is especially prevalent in preterm newborns, but its severity may not be recognized. It is characterized by apnea and cyclical weakening and strengthening of breathing. We developed a method for detecting apnea and this abnormal respiration and for estimating its severity. Apnea was detected based on a "difference" feature (calculated from wavelet coefficients) and a modified maximum displacement feature (related to the respiratory waveform shape). The waveform is calculated from the vertical motion of the thoracic and abdominal region during respiration, measured with a vision sensor. The proposed method detects apnea effectively (sensitivity 88.4%, specificity 99.7%).
NASA Astrophysics Data System (ADS)
Tang, Hong; Lin, Jian-Zhong
2013-01-01
An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and ADA theory, and the method provides a more general expression for calculating the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. The visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is then surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectra with more significant features can be selected as input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
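The regularized inversion at the heart of such retrievals can be illustrated with the classical (non-iterative) Tikhonov solution; the forward kernel below is a generic smoothing matrix standing in for the spheroid extinction matrix, and the authors' iterative improvement is not reproduced:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam ||x||^2,
    i.e. x = (A^T A + lam I)^(-1) A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(3)
n = 60
x_grid = np.linspace(0.1, 10.0, n)                   # particle sizes (um), illustrative
# Generic ill-conditioned kernel standing in for the spectral extinction matrix
A = np.exp(-0.3 * np.abs(x_grid[:, None] - x_grid[None, :]))
x_true = np.exp(-0.5 * ((x_grid - 3.0) / 0.8) ** 2)  # assumed size distribution
b = A @ x_true + 0.01 * rng.normal(size=n)           # noisy extinction data
x_rec = tikhonov(A, b, lam=1e-2)                     # retrieved distribution
```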
Bluszcz, Anna
Methods of measuring and assessing the level of sustainable development at the international, national and regional levels are a current research problem that requires multi-dimensional analysis. The aim of the study presented in the article was a relative assessment of the sustainability level of the European Union member states and a comparative analysis of the position of Poland relative to the other countries. EU member states were treated as objects in a multi-dimensional space. The dimensions of the space were specified by ten diagnostic variables describing the sustainability level of EU countries in three dimensions: social, economic and environmental. Because the compiled statistical data were expressed in different units of measure, taxonomic methods were used to build an aggregated measure for assessing the level of sustainable development of EU member states; normalisation of the variables enabled comparative analysis between countries. The methodology consisted of eight stages, which included, among others: defining the data matrices; calculating the variability coefficient for all variables and eliminating those whose variability coefficient was under 10%; dividing the variables into stimulants and destimulants; selecting the method of variable normalisation; developing matrices of normalised data; selecting the formula and calculating the aggregated indicator of the relative level of sustainable development of the EU countries; calculating partial development indicators for the three studied dimensions (social, economic and environmental); and classifying the EU countries according to their relative level of sustainable development. Statistical data were collected from publications of the Polish Central Statistical Office.
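The normalization-and-aggregation core of such taxonomic measures can be sketched as below. The variable values are invented, zero-unitarization is chosen as the normalization, and the aggregate is an unweighted mean, since the abstract does not fix these formulas:

```python
import numpy as np

# Rows: countries; columns: diagnostic variables (all values invented).
X = np.array([
    [71.2, 9.1, 0.45],
    [78.9, 4.2, 0.30],
    [74.5, 6.8, 0.38],
])
is_stimulant = np.array([True, False, False])  # higher-is-better vs lower-is-better

# Zero-unitarization: map every variable to [0, 1], flipping destimulants.
mn, mx = X.min(axis=0), X.max(axis=0)
Z = (X - mn) / (mx - mn)
Z[:, ~is_stimulant] = 1.0 - Z[:, ~is_stimulant]

aggregate = Z.mean(axis=1)        # unweighted aggregated indicator per country
ranking = np.argsort(-aggregate)  # classification by relative development level
print(aggregate, ranking)
```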
Transonic flow analysis for rotors. Part 3: Three-dimensional, quasi-steady, Euler calculation
NASA Technical Reports Server (NTRS)
Chang, I-Chung
1990-01-01
A new method is presented for calculating the quasi-steady transonic flow over a lifting or non-lifting rotor blade in both hover and forward flight by using Euler equations. The approach is to solve Euler equations in a rotor-fixed frame of reference using a finite volume method. A computer program was developed and was then verified by comparison with wind-tunnel data. In all cases considered, good agreement was found with published experimental data.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures in an enhanced version of the system are also discussed.
MIMO nonlinear ultrasonic tomography by propagation and backpropagation method.
Dong, Chengdong; Jin, Yuanwei
2013-03-01
This paper develops a fast ultrasonic tomographic imaging method in a multiple-input multiple-output (MIMO) configuration using the propagation and backpropagation (PBP) method. In this method, ultrasonic excitation signals from multiple sources are transmitted simultaneously to probe objects immersed in the medium, and the scattered signals are recorded by multiple receivers. Using the nonlinear ultrasonic wave propagation equation and the received time-domain scattered signals, the objects are reconstructed iteratively in three steps. First, the propagation step calculates the predicted acoustic potential data at the receivers using an initial guess. Second, the difference signal between the predicted values and the measured data is calculated. Third, the backpropagation step computes updated acoustic potential data by computationally backpropagating the difference signal into the same medium. Unlike the conventional PBP method for tomographic imaging, where each source takes its turn to excite the acoustic field until all sources have been used, the MIMO-PBP method achieves faster image reconstruction through simultaneous multiple-source excitation. Furthermore, we develop an orthogonal waveform signaling method using a waveform delay scheme to reduce the impact of speckle patterns in the reconstructed images. Numerical experiments demonstrate that the proposed MIMO-PBP tomographic imaging method converges faster and achieves superior imaging quality.
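For a linearized toy model, the three-step loop above reduces to a Landweber-style iteration: predict at the receivers, form the residual, backproject. This sketch deliberately replaces the nonlinear wave-equation propagation with a matrix product, so it illustrates the iteration structure only:

```python
import numpy as np

def pbp_reconstruct(A, y, step=None, n_iter=200):
    """Propagate-compare-backpropagate loop for a linear stand-in y = A x.
    A plays the role of simultaneous multi-source propagation; A.T plays the
    role of backpropagating the residual into the medium."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # step < 2/sigma_max^2 converges
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = y - A @ x            # steps 1-2: predict at receivers, form difference
        x = x + step * A.T @ r   # step 3: backpropagate the difference signal
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(120, 80))   # toy MIMO geometry: 120 measurements, 80 pixels
x_true = np.zeros(80); x_true[30:35] = 1.0
x_rec = pbp_reconstruct(A, A @ x_true)
```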
Wear Calculation Approach for Sliding - Friction Pairs
NASA Astrophysics Data System (ADS)
Springis, G.; Rudzitis, J.; Lungevics, J.; Berzins, K.
2017-05-01
Predicting the service life of different products is always closely connected with the choice of an adequate method. With the development of production technologies and ever more precise measuring devices, one can obtain the data needed for analytic calculations. Several theoretical wear calculation methods can be found historically, but there is still no exact wear calculation model applicable to all wear processes, because of difficulties connected with the variety of parameters involved in the wear of two or more contacting surfaces. Analysing the wear prediction theories, which can be classified into definite groups, one can state that each has shortcomings that might affect the results and undermine theoretical calculations. The offered wear calculation method is based on theories from different branches of science. It includes a description of 3D surface micro-topography using standardized roughness parameters, explains the regularities of particle separation from the material during wear using fatigue theory, and takes into account the material's physical and mechanical characteristics and the specific conditions of the product's operating life. The proposed wear calculation model could be of value for predicting the service life of sliding friction pairs, allowing the best technologies to be chosen for many mechanical components.
Calculation Of Pneumatic Attenuation In Pressure Sensors
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.
1991-01-01
Errors caused by attenuation of air-pressure waves in narrow tubes are calculated by a method based on the fundamental equations of flow. Changes in ambient pressure are transmitted along a narrow tube to the sensor. Attenuation of the high-frequency components of the pressure wave is calculated from a wave equation derived from the Navier-Stokes equations for viscous flow in the tube. The method was developed to understand and compensate for frictional attenuation in the narrow tubes used to connect aircraft pressure sensors with pressure taps on the surfaces of interest.
Electron transport in extended carbon-nanotube/metal contacts: Ab initio based Green function method
NASA Astrophysics Data System (ADS)
Fediai, Artem; Ryndyk, Dmitry A.; Cuniberti, Gianaurelio
2015-04-01
We have developed a new method that can predict the electrical properties of the source and drain contacts in realistic carbon nanotube field-effect transistors (CNTFETs). It is based on large-scale ab initio calculations combined with a Green function approach. For the first time, both the internal and external parts of a realistic CNT-metal contact are taken into account at the ab initio level. We have developed a procedure that allows direct calculation of the self-energy for an extended contact. Within the method, it is possible to calculate the transmission coefficient through a contact of either finite or infinite length; the local density of states can be determined in both free and embedded CNT segments. We found perfect agreement with the experimental data for Pd and Al contacts. We have explained why CNTFETs with Pd electrodes are p-type FETs with ohmic contacts, which can carry current close to the ballistic limit (provided the contact length is large enough), whereas in CNT-Al contacts transmission is suppressed to a significant extent, especially for holes.
Stresses and elastic constants of crystalline sodium, from molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiferl, S.K.
1985-02-01
The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages and to test for symmetry. 45 refs., 10 figs., 4 tabs.
Data inversion algorithm development for the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurements requires not only the ability to perform ideal radiometric calculations but also a detailed understanding of instrument characteristics. Therefore, a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time constants, and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
Sixth-order wave aberration theory of ultrawide-angle optical systems.
Lu, Lijun; Cao, Yiqing
2017-10-20
In this paper, we develop a sixth-order wave aberration theory for ultrawide-angle optical systems such as fisheye lenses. Based on the concepts and approach used to develop the wave aberration theory of plane-symmetric optical systems, we first derive the sixth-order intrinsic wave aberrations and the fifth-order ray aberrations; second, we present a method to calculate the pupil aberrations of this kind of optical system to develop the extrinsic aberrations; third, the relation of aperture-ray coordinates between adjacent optical surfaces is fitted with a second-order polynomial to improve the calculation accuracy of the wave aberrations of a fisheye lens with a large acceptance aperture. Finally, the resulting aberration expressions are applied to two design examples of fisheye lenses; the calculated results are compared with ray-tracing results from Zemax software to validate the aberration expressions.
Theoretical Grounds for the Propagation of Uncertainties in Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Saracco, Paolo; Pia, Maria Grazia; Batic, Matej
2014-04-01
We introduce a theoretical framework for the calculation of uncertainties affecting observables produced by Monte Carlo particle transport, which derive from uncertainties in physical parameters input into simulation. The theoretical developments are complemented by a heuristic application, which illustrates the method of calculation in a streamlined simulation environment.
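The idea, uncertain physics inputs propagated through a Monte Carlo transport calculation to the spread of an observable, can be illustrated on a toy slab-transmission problem; the geometry, cross-section, and 5% input uncertainty are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def transmission(sigma, thickness=2.0, n_hist=20_000):
    """Toy analog MC transport: fraction of particles crossing a purely
    absorbing slab, with free paths drawn from Exp(mean = 1/sigma)."""
    path = rng.exponential(1.0 / sigma, size=n_hist)
    return np.mean(path > thickness)

# Outer loop: sample the uncertain input parameter (total cross-section,
# assumed 1.0 +/- 5% here) and record the induced spread of the observable.
sigmas = rng.normal(1.0, 0.05, size=200)
obs = np.array([transmission(s) for s in sigmas])
print(f"T = {obs.mean():.4f} +/- {obs.std():.4f} (input-parameter induced)")
```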
The application of computational chemistry to lignin
Thomas Elder; Laura Berstis; Nele Sophie Zwirchmayr; Gregg T. Beckham; Michael F. Crowley
2017-01-01
Computational chemical methods have become an important technique in the examination of the structure and reactivity of lignin. The calculations can be based either on classical or quantum mechanics, with concomitant differences in computational intensity and size restrictions. The current paper will concentrate on results developed from the latter type of calculations...
Subplane-based Control Rod Decusping Techniques for the 2D/1D Method in MPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
2017-01-01
The MPACT transport code is being jointly developed by Oak Ridge National Laboratory and the University of Michigan to serve as the primary neutron transport code for the Virtual Environment for Reactor Applications Core Simulator. MPACT uses the 2D/1D method to solve the transport equation by decomposing the reactor model into a stack of 2D planes. A fine-mesh flux distribution is calculated in each 2D plane using the Method of Characteristics (MOC), and the planes are then coupled axially through a 1D NEM-P3 calculation. This iterative calculation is accelerated using the Coarse Mesh Finite Difference method. One problem that arises frequently when using the 2D/1D method is control rod cusping. This occurs when the tip of a control rod falls between the boundaries of an MOC plane, requiring that the rodded and unrodded regions be axially homogenized for the 2D MOC calculations. Performing a volume homogenization does not properly preserve the reaction rates, causing an error known as cusping. The most straightforward way of resolving this problem is to refine the axial mesh, but this can significantly increase the computational expense of the calculation. The other way of resolving the partially inserted rod is through the use of a decusping method. This paper presents new decusping methods implemented in MPACT that can dynamically correct the rod cusping behavior for a variety of problems.
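The cusping error comes from plain volume homogenization; a flux-weighted mixture, which decusping methods effectively approximate, preserves the reaction rate. A one-group illustration with invented numbers:

```python
# One-group illustration of why volume homogenization of a partially
# inserted rod causes cusping. All numbers are invented.
v_rod, v_unrod = 0.3, 0.7          # axial volume fractions within the MOC plane
sig_rod, sig_unrod = 0.30, 0.05    # absorption cross-sections (1/cm)
phi_rod, phi_unrod = 0.6, 1.2      # region-averaged fluxes (rod depresses flux)

# Volume weighting ignores the flux depression inside the rodded part:
sig_vol = (v_rod * sig_rod + v_unrod * sig_unrod) / (v_rod + v_unrod)

# Flux-volume weighting preserves the absorption reaction rate:
sig_flux = ((v_rod * phi_rod * sig_rod + v_unrod * phi_unrod * sig_unrod)
            / (v_rod * phi_rod + v_unrod * phi_unrod))

print(f"volume-homogenized: {sig_vol:.4f} 1/cm, flux-weighted: {sig_flux:.4f} 1/cm")
```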
Zhang, Rui; Taddei, Phillip J; Fitzek, Markus M; Newhauser, Wayne D
2010-05-07
Heavy charged particle beam radiotherapy for cancer is of increasing interest because it delivers a highly conformal radiation dose to the target volume. Accurate knowledge of the range of a heavy charged particle beam after it penetrates a patient's body or other materials in the beam line is very important and is usually stated in terms of the water equivalent thickness (WET). However, methods of calculating WET for heavy charged particle beams are lacking. Our objective was to test several simple analytical formulas previously developed for proton beams for their ability to calculate WET values for materials exposed to beams of protons, helium, carbon and iron ions. Experimentally measured heavy charged particle beam ranges and WET values from an iterative numerical method were compared with the WET values calculated by the analytical formulas. In most cases, the deviations were within 1 mm. We conclude that the analytical formulas originally developed for proton beams can also be used to calculate WET values for helium, carbon and iron ion beams with good accuracy.
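One simple analytical form of the kind tested here estimates WET from thickness, density, and the mean mass-stopping-power ratio; a sketch with illustrative values (this is only one of several such formulas, and the numbers below are assumptions):

```python
def wet(thickness_cm, rho_material, mass_sp_ratio, rho_water=1.0):
    """Simple WET estimate: t_w = t_m * (rho_m / rho_w) * (Sbar_m / Sbar_w),
    with Sbar the mean *mass* stopping power over the relevant energy range.
    One of several analytical forms of this kind; inputs are illustrative."""
    return thickness_cm * (rho_material / rho_water) * mass_sp_ratio

# Illustrative: 1 cm of aluminum, density 2.70 g/cm^3, assumed mass
# stopping-power ratio to water of ~0.83 for therapeutic beam energies.
print(f"WET ~ {wet(1.0, 2.70, 0.83):.2f} cm of water")
```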
Power flows and Mechanical Intensities in structural finite element analysis
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.
1989-01-01
The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations are derived for beam, plate, and solid element types. NASTRAN is used to calculate the velocity, force, and stress results of an analysis, from which a post-processor then calculates the power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view the analysis results easily and efficiently, applying shape optimization methods with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
Boiling process modelling peculiarities analysis of the vacuum boiler
NASA Astrophysics Data System (ADS)
Slobodina, E. N.; Mikhailov, A. G.
2017-06-01
An analysis of the development of low- and medium-power boiler equipment was carried out, and possible development directions for boiler units aimed at improving energy efficiency were identified. Engineering studies on the application of vacuum boilers are presented. Heat-exchange processes in a vacuum boiler, where boiling water is the working body, are considered, and a method of heat-exchange intensification under boiling at the maximum heat-transfer coefficient is examined. As a result of the calculation studies, curves of the heat-transfer coefficient as a function of pressure, computed by analytical and numerical methodologies, were obtained. It is concluded that the numerical method implemented in ANSYS CFX with the RPI boiling model can be applied to describe the boiling process in the boiler vacuum volume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneru, F; Gracia, M; Gallardo, N
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the prior treatment simulation were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with the local dose deposition kernel (S values) was implemented with in-house software written in Python. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition-method calculations (accepted as a good standard). Tumor dose was not evaluated because of its strong dependence on contouring: small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. The extra work includes exporting and importing images and other DICOM files, creating and calculating a dummy external-radiotherapy plan, performing the convolution, and evaluating the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The whole process is short enough to be carried out on the day of the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
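The core operation, convolving the cumulated-activity matrix with a voxel S-value kernel, is a standard 3-D convolution; since the abstract notes the in-house code was written in Python, an FFT-based sketch follows. The array sizes and the inverse-square kernel are placeholders, not real S-values:

```python
import numpy as np
from scipy.signal import fftconvolve

# activity: cumulated activity per voxel (e.g. from the quantified SPECT)
# kernel:   voxel S-values, absorbed dose per decay at each voxel offset
rng = np.random.default_rng(6)
activity = rng.random((64, 64, 64))

r = np.indices((11, 11, 11)) - 5            # voxel offsets around the source voxel
dist = np.sqrt((r ** 2).sum(axis=0)) + 0.5  # offset magnitude; avoids the 0 singularity
kernel = 1.0 / dist ** 2                    # placeholder kernel, NOT real S-values
kernel /= kernel.sum()

dose = fftconvolve(activity, kernel, mode="same")   # 3-D VSV dose distribution
```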
NASA Astrophysics Data System (ADS)
Waghorn, Ben J.; Shah, Amish P.; Ngwa, Wilfred; Meeks, Sanford L.; Moore, Joseph A.; Siebers, Jeffrey V.; Langen, Katja M.
2010-07-01
Intra-fraction organ motion during intensity-modulated radiation therapy (IMRT) treatment can cause differences between the planned and the delivered dose distribution. To investigate the extent of these dosimetric changes, a computational model was developed and validated. The computational method allows for calculation of the rigid motion perturbed three-dimensional dose distribution in the CT volume and therefore a dose volume histogram-based assessment of the dosimetric impact of intra-fraction motion on a rigidly moving body. The method was developed and validated for both step-and-shoot IMRT and solid compensator IMRT treatment plans. For each segment (or beam), fluence maps were exported from the treatment planning system. Fluence maps were shifted according to the target position deduced from a motion track. These shifted, motion-encoded fluence maps were then re-imported into the treatment planning system and were used to calculate the motion-encoded dose distribution. To validate the accuracy of the motion-encoded dose distribution the treatment plan was delivered to a moving cylindrical phantom using a programmed four-dimensional motion phantom. Extended dose response (EDR-2) film was used to measure a planar dose distribution for comparison with the calculated motion-encoded distribution using a gamma index analysis (3% dose difference, 3 mm distance-to-agreement). A series of motion tracks incorporating both inter-beam step-function shifts and continuous sinusoidal motion were tested. The method was shown to accurately predict the film's dose distribution for all of the tested motion tracks, both for the step-and-shoot IMRT and compensator plans. The average gamma analysis pass rate for the measured dose distribution with respect to the calculated motion-encoded distribution was 98.3 ± 0.7%. For static delivery the average film-to-calculation pass rate was 98.7 ± 0.2%. In summary, a computational technique has been developed to calculate the dosimetric effect of intra-fraction motion. This technique has the potential to evaluate a given plan's sensitivity to anticipated organ motion. With knowledge of the organ's motion it can also be used as a tool to assess the impact of measured intra-fraction motion after dose delivery.
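The motion-encoding step, shifting each segment's fluence map by the target displacement before recomputing dose, can be sketched with an interpolating image shift; the map, pixel spacing, and per-segment offsets are invented:

```python
import numpy as np
from scipy.ndimage import shift

def motion_encode(fluence, offset_mm, pixel_mm=1.0):
    """Shift a segment's 2-D fluence map by the target displacement during
    that segment (linear interpolation, zero fill outside the map)."""
    return shift(fluence, np.asarray(offset_mm) / pixel_mm, order=1, cval=0.0)

fluence = np.zeros((100, 100)); fluence[40:60, 40:60] = 1.0   # toy open segment
offsets = [(0.0, 0.0), (3.0, 0.0), (-2.0, 1.0)]               # per-segment motion (mm)
encoded = [motion_encode(fluence, o) for o in offsets]        # re-imported for dose calc
```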
Sajjad, Madiha; Khan, Rehan Ahmed; Yasmeen, Rahila
2018-01-01
To develop a tool to evaluate faculty perceptions of assessment quality in an undergraduate medical program. The Assessment Implementation Measure (AIM) tool was developed by a mixed-methods approach. A preliminary questionnaire developed through literature review was submitted to a panel of 10 medical education experts for a three-round modified Delphi technique. Panel agreement of > 75% was the criterion for inclusion of items in the questionnaire. Cognitive pre-testing was conducted with five faculty members. A pilot study was done with 30 randomly selected faculty members. The content validity index (CVI) was calculated for individual items (I-CVI) and for the composite scale (S-CVI). Cronbach's alpha was calculated to determine the internal consistency reliability of the tool. The final AIM tool had 30 items after the Delphi process. The S-CVI was 0.98 by the S-CVI/Avg method and 0.86 by the S-CVI/UA method, suggesting good content validity. An I-CVI below 0.9 was taken as the criterion for item deletion. Cognitive pre-testing revealed good item interpretation. Cronbach's alpha for the AIM was 0.9, whereas Cronbach's alpha for the four domains ranged from 0.67 to 0.80. AIM is a relevant and useful instrument with good content validity and reliability of results, and may be used to evaluate teachers' perceptions of assessment quality.
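The reported indices follow standard content-validity arithmetic: I-CVI is the proportion of experts rating an item relevant, S-CVI/Avg is the mean of the I-CVIs, and S-CVI/UA is the proportion of items with unanimous agreement. A sketch with an invented rating matrix:

```python
import numpy as np

# ratings[i, j] = 1 if expert j rated item i relevant (e.g. 3 or 4 on a
# 4-point scale), else 0. 5 items x 10 experts, invented for illustration.
ratings = np.array([
    [1,1,1,1,1,1,1,1,1,1],
    [1,1,1,1,1,1,1,1,1,0],
    [1,1,1,1,1,1,1,1,0,0],
    [1,1,1,1,1,1,1,1,1,1],
    [1,1,1,1,1,1,1,0,1,1],
])
i_cvi = ratings.mean(axis=1)            # per-item content validity index
s_cvi_avg = i_cvi.mean()                # averaging approach
s_cvi_ua = (i_cvi == 1.0).mean()        # universal agreement approach
print(i_cvi, s_cvi_avg, s_cvi_ua)       # items with I-CVI < 0.9 would be dropped
```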
MO-D-213-07: RadShield: Semi-Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a graphical user interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported into the RadShield software as images. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. Sub-GUIs allow the specification, per region and piece of equipment, of occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrate that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information, and selects the largest value needed to comply with the NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopy rooms.
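Per sample point, the NCRP Report 147 quantities reduce to an unshielded air kerma from workload and distance, the required broad-beam transmission B to meet the design goal, and an inverted Archer fit for thickness. The workload, distance, and fit constants below are placeholders, not values taken from NCRP 147:

```python
import numpy as np

# NCRP Report 147 style calculation for one sample point (illustrative values).
P = 0.02        # weekly design goal behind the barrier (mGy/wk), assumed
T = 1.0         # occupancy factor
N = 120         # patients per week
K1 = 5.2        # unshielded air kerma per patient at 1 m (mGy/patient), assumed
d = 3.0         # source-to-point distance (m)

K_unshielded = K1 * N / d**2            # weekly unshielded air-kerma rate
B = P / (T * K_unshielded)              # required broad-beam transmission

# Archer fit B(x) = [(1 + b/a) * exp(a*g*x) - b/a]^(-1/g), inverted for x;
# a, b, g are material- and spectrum-specific constants (placeholders here).
a, b, g = 2.35, 15.9, 0.56              # placeholder constants, per mm
x = np.log((B**(-g) + b / a) / (1 + b / a)) / (a * g)
print(f"B = {B:.3e}, required thickness ~ {x:.2f} mm")
```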
Ab initio R-matrix calculations of e+-molecule scattering
NASA Technical Reports Server (NTRS)
Danby, Grahame; Tennyson, Jonathan
1990-01-01
The adaptation of the molecular R-matrix method, originally developed for electron-molecule collision studies, to positron scattering is discussed. Ab initio R-matrix calculations are presented for collisions of low-energy positrons with a number of diatomic systems, including H2, HF and N2. Differential elastic cross sections for positron-H2 show a minimum at about 45 deg for collision energies between 0.3 and 0.5 Ryd. The calculations predict a bound state of positron-HF. Calculations on inelastic processes in N2 and O2 are also discussed.
El-Sayed, Adly H; Aly, A A; El-Sayed, N I; Mekawy, M M; El-Gendy, A A
2007-03-01
A high-quality heating device made of a ferromagnetic alloy (thermal seed) was developed for hyperthermia treatment of cancer. The device generates sufficient heat at room temperature and stops heating at the Curie temperature Tc. The power dissipated from each seed was calculated from the area enclosed by the hysteresis loop. A new mathematical formula for the calculation of heating power was derived and showed good agreement with the powers calculated from the hysteresis loop and by the calorimetric method.
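The hysteresis-loop estimate is the loop area (in H-B coordinates) times frequency and seed volume; a numerical sketch with a synthetic elliptical loop and invented seed parameters:

```python
import numpy as np

def loop_power(h, b, freq, volume_m3):
    """Heating power = frequency * volume * (area enclosed by the B-H loop),
    with the loop given as sampled points (h in A/m, b in T) traversed once;
    the enclosed area is the line integral of H dB around the loop."""
    return freq * volume_m3 * abs(np.trapz(h, b))

# Synthetic elliptical minor loop, purely illustrative.
t = np.linspace(0.0, 2.0 * np.pi, 400)
h = 4000.0 * np.cos(t)                 # field (A/m)
b = 0.05 * np.cos(t - 0.35)            # induction (T), lagging H
# Assumed seed volume ~8e-9 m^3 and 100 kHz drive; prints power in mW.
print(f"{loop_power(h, b, freq=100e3, volume_m3=8e-9) * 1e3:.1f} mW")
```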
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Wang, X; Li, H
Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because of the protons' finite, tissue-density-dependent range. The worst-case scenario (WCS) method originally proposed by Lomax has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method sufficiently accounts for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y, z directions were created, and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario result. Results: The distributions of the dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases show higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of the perturbed cases have lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for head-and-neck patients. The extensive-sampling approach using the fast approximate method could be used to evaluate the effects of different factors on the robustness of IMPT plans in the future.
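The worst-case composite described above is formed voxelwise from the perturbed dose arrays: the coldest dose inside the target, the hottest outside. A sketch over precomputed (here random) perturbed doses:

```python
import numpy as np

def worst_case(doses, target_mask):
    """Voxelwise worst-case dose from a stack of perturbed dose arrays
    (nominal plus shifted/range-modified scenarios): the minimum dose inside
    the target, the maximum dose outside it."""
    stack = np.stack(doses)
    return np.where(target_mask, stack.min(axis=0), stack.max(axis=0))

rng = np.random.default_rng(7)
nominal = rng.random((40, 40, 40))                       # placeholder dose grid
perturbed = [nominal + 0.02 * rng.normal(size=nominal.shape) for _ in range(8)]
mask = np.zeros_like(nominal, bool); mask[15:25, 15:25, 15:25] = True
wc_dose = worst_case([nominal] + perturbed, mask)        # evaluate D95/D5 on this
```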
An exploratory study of a finite difference method for calculating unsteady transonic potential flow
NASA Technical Reports Server (NTRS)
Bennett, R. M.; Bland, S. R.
1979-01-01
A method for calculating transonic flow over steady and oscillating airfoils was developed by Isogai. The full potential equation is solved with a semi-implicit, time-marching, finite difference technique. Steady flow solutions are obtained from time asymptotic solutions for a steady airfoil. Corresponding oscillatory solutions are obtained by initiating an oscillation and marching in time for several cycles until a converged periodic solution is achieved. The method is described in general terms and results for the case of an airfoil with an oscillating flap are presented for Mach numbers 0.500 and 0.875. Although satisfactory results are obtained for some reduced frequencies, it is found that the numerical technique generates spurious oscillations in the indicial response functions and in the variation of the aerodynamic coefficients with reduced frequency. These oscillations are examined with a dynamic data reduction method to evaluate their effects and trends with reduced frequency and Mach number. Further development of the numerical method is needed to eliminate these oscillations.
Ramlal, Patricia S.; Rudd, John W. M.; Hecky, Robert E.
1986-01-01
A method was developed to estimate specific rates of demethylation of methyl mercury in aquatic samples by measuring the volatile 14C end products of 14CH3HgI demethylation. This method was used in conjunction with a 203Hg2+ radiochemical method that determines specific rates of mercury methylation. Together, these methods enabled us to examine some factors controlling the net rate of mercury methylation. The methodologies were field tested using lake sediment samples from a recently flooded reservoir in the Southern Indian Lake system, which had developed a mercury contamination problem in fish. Ratios of the specific rates of methylation to demethylation were calculated, and the highest ratios occurred in the flooded shorelines of Southern Indian Lake. These results provide an explanation for the observed increases in methyl mercury concentrations in fish after flooding. PMID:16346959
The AB Initio Mia Method: Theoretical Development and Practical Applications
NASA Astrophysics Data System (ADS)
Peeters, Anik
The bottleneck in conventional ab initio Hartree-Fock calculations is the storage of the electron repulsion integrals, because their number increases with the fourth power of the number of basis functions. This problem can be solved by a combination of the multiplicative integral approximation (MIA) and the direct SCF method. The MIA approach was successfully applied in the geometry optimisation of some biologically interesting compounds like the neuroleptic haloperidol and two TIBO derivatives, inactivators of HIV-1. In this thesis the potency of the MIA method is shown by its application in the calculation of the forces on the nuclei. In addition, the MIA method enabled the development of a new model for performing crystal field studies: the supermolecule model. The results for this model are in better agreement with experimental data than the results for the point charge model. This is illustrated by the study of some small molecules in the solid state: 2,3-diketopiperazine, formamide oxime and two polymorphic forms of glycine, alpha-glycine and beta-glycine.
NASA Astrophysics Data System (ADS)
Yamamura, Hideho; Sato, Ryohei; Iwata, Yoshiharu
Global efforts toward energy conservation, the growth of data centers, and the increasing use of IT equipment are driving demand for reduced power consumption, and improving the power efficiency of power supply units is becoming a necessity. MOSFETs are widely used for their low ON-resistances. Power efficiency is designed using time-domain circuit simulators, except for transformer copper loss, whose frequency dependency is calculated separately using methods based on the skin and proximity effects. As semiconductor technology reduces the ON-resistance of MOSFETs, frequency dependency due to the skin or proximity effect is anticipated there as well. In this study, the ON-resistance of MOSFETs is measured and its frequency dependency is confirmed. The power loss against a rectangular current pulse is calculated. The calculation method for transformer copper loss is expanded to MOSFETs. A frequency function for the resistance model is newly developed, enabling parametric calculation. The calculation is accelerated by eliminating summation terms. Using this method, it is shown that the frequency-dependent component of the measured MOSFETs increases the dissipation from 11% to 32% at a switching frequency of 100 kHz. This paper thus points out the importance of the frequency dependency of MOSFETs' ON-resistance, provides a means of calculating its pulse losses, and improves the loss calculation accuracy of SMPSs.
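A sketch of the harmonic-summation loss calculation described above: the rectangular pulse is decomposed into Fourier harmonics and each is dissipated in a frequency-dependent resistance. The R(f) model below is an assumed skin-effect-like form, not the measured one:

```python
import numpy as np

def pulse_loss(I_peak, duty, f_sw, R_of_f, n_harmonics=200):
    """Conduction loss of a rectangular current pulse through a
    frequency-dependent resistance, summed harmonic by harmonic."""
    # Fourier series of a rectangular pulse of amplitude I_peak and duty D:
    # DC term I_peak*D; nth harmonic peak amplitude 2*I_peak*sin(pi*n*D)/(pi*n)
    P = (I_peak * duty) ** 2 * R_of_f(0.0)          # DC component
    for n in range(1, n_harmonics + 1):
        c_n = 2 * I_peak * np.sin(np.pi * n * duty) / (np.pi * n)
        P += 0.5 * c_n ** 2 * R_of_f(n * f_sw)      # RMS^2 = peak^2 / 2
    return P

# Illustrative skin-effect-like resistance model (assumed, not the paper's):
R_dc, f0 = 5e-3, 1e5                                # ohms, Hz
R_model = lambda f: R_dc * (1.0 + np.sqrt(f / f0))

print(pulse_loss(I_peak=10.0, duty=0.5, f_sw=100e3, R_of_f=R_model))
```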
NASA Astrophysics Data System (ADS)
Kartashov, Dmitry; Shurshakov, Vyacheslav
2018-03-01
A ray-tracing method to calculate radiation exposure levels of astronauts at different spacecraft shielding configurations has been developed. The method uses simplified shielding geometry models of the spacecraft compartments together with depth-dose curves. The depth-dose curves can be obtained with different space radiation environment models and radiation transport codes. The spacecraft shielding configurations are described by a set of geometry objects. To calculate the shielding probability functions for each object, its surface is composed of a set of disjoint adjacent triangles that fully cover the surface. Such a description can be applied to objects of any complex shape. The method is applied to the modeling conditions of the space experiment MATROSHKA-R, carried out onboard the ISS from 2004 to 2016. Dose measurements were realized in the ISS compartments with anthropomorphic and spherical phantoms, and with the protective curtain facility that provides additional shielding on the crew cabin wall. The space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms and for additional shielding installed in the compartment are calculated. There is agreement within an accuracy of about 15% between the data obtained in the experiment and the calculated ones. Thus the calculation method has been successfully verified with the MATROSHKA-R experiment data. The ray-tracing radiation dose calculation method can be recommended for estimating the dose distribution in the astronaut's body in different space station compartments and for estimating the additional shielding efficiency, especially when the exact compartment shielding geometry and the radiation environment for the planned mission are not known.
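A minimal sketch of the ray-tracing idea: count triangulated-shell crossings along each sampled direction, convert the count to a shielding depth, and average the depth-dose curve over directions. Assuming a uniform areal density per wall crossing is a simplification, and `depth_dose` stands for a curve taken from a transport code:

```python
import numpy as np

def ray_triangle_t(origin, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection: distance along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

def dose_at_point(point, directions, triangles, areal_density, depth_dose):
    """Average depth-dose over sampled ray directions; shielding depth
    along each ray = (number of shell crossings) x areal density."""
    total = 0.0
    for d in directions:
        crossings = sum(
            1 for (v0, v1, v2) in triangles
            if ray_triangle_t(point, d, v0, v1, v2) is not None
        )
        total += depth_dose(crossings * areal_density)
    return total / len(directions)

# Sanity check: a ray along +x hits a triangle one unit away.
tri = (np.array([1.0, -1, -1]), np.array([1.0, 1, -1]), np.array([1.0, 0, 1]))
print(ray_triangle_t(np.zeros(3), np.array([1.0, 0, 0]), *tri))  # ~1.0
```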
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and an M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
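The rainbow coloring step can be realized with a generic greedy graph coloring; a sketch, not necessarily the authors' exact ordering heuristic:

```python
from collections import defaultdict

def rainbow_coloring(neighbors):
    """Greedy graph coloring: neighboring points receive different colors,
    so each color group can be swept in parallel without data races.
    neighbors: dict mapping point id -> iterable of adjacent point ids."""
    color = {}
    for p in sorted(neighbors, key=lambda q: -len(neighbors[q])):
        used = {color[q] for q in neighbors[p] if q in color}
        c = 0
        while c in used:
            c += 1
        color[p] = c
    groups = defaultdict(list)
    for p, c in color.items():
        groups[c].append(p)
    return groups  # sweep the color-0 group, then the color-1 group, ...

# Example: a 1D chain of 6 points needs only 2 colors.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(rainbow_coloring(chain))
```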
Time-Domain Receiver Function Deconvolution using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moreira, L. P.
2017-12-01
Receiver Functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces, applying a deconvolution calculation to the radial and vertical seismogram components. A popular method uses a spectral division of both components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. One of the most used methods is an iterative procedure that estimates the RF peaks, convolves them with the vertical component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work a deconvolution algorithm is proposed that uses a Genetic Algorithm (GA) to estimate the RF peaks. This method is processed entirely in the time domain, avoiding the time-to-frequency calculations (and vice-versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, while there are fewer failures in the RF calculation of smaller events, increasing the overall performance for stations with a high number of events.
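A sketch of the GA objective implied above: a chromosome encodes candidate peak positions and amplitudes, and fitness measures how well the RF built from those peaks, convolved with the vertical component, reproduces the radial component. The peak encoding is an assumption:

```python
import numpy as np

def rf_from_peaks(peaks, n):
    """Build an RF trace from (sample index, amplitude) peak pairs."""
    rf = np.zeros(n)
    for idx, amp in peaks:
        rf[int(idx)] = amp
    return rf

def fitness(peaks, vertical, radial):
    """Misfit minimized by the GA: predicted radial = RF convolved with vertical."""
    rf = rf_from_peaks(peaks, len(radial))
    predicted = np.convolve(vertical, rf)[: len(radial)]
    return -np.sum((radial - predicted) ** 2)   # higher is better
```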
NASA Astrophysics Data System (ADS)
Chou, Tien-Yin; Lin, Wen-Tzu; Lin, Chao-Yuan; Chou, Wen-Chieh; Huang, Pi-Hui
2004-02-01
With the fast-growing progress of computer technologies, spatial information on watersheds such as flow direction, watershed boundaries and the drainage network can be automatically calculated or extracted from a digital elevation model (DEM). The stubborn problem of depressions in DEMs is frequently encountered while extracting such spatial information from terrain. Several filling-up methods have been proposed for resolving depressions; however, their suitability for large-scale flat areas is inadequate. This study proposes a depression watershed method coupled with the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) theory to determine the optimal outlet and calculate the flow direction in depressions. Three processing procedures are used to derive the depressionless flow direction: (1) calculating the incipient flow direction; (2) establishing the depression watershed by tracing the upstream drainage area and determining the depression outlet using PROMETHEE theory; (3) calculating the depressionless flow direction. The developed method was used to delineate the Shihmen Reservoir watershed located in northern Taiwan. The results show that the depression watershed method can effectively overcome shortcomings such as differentiating depression outlets and looped flow directions between depressions. The suitability of the proposed approach was verified.
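Step (1), the incipient flow direction, is commonly computed with a D8-style steepest-descent rule; a minimal sketch under that assumption, not the authors' exact implementation:

```python
import numpy as np

def d8_flow_direction(dem):
    """D8 incipient flow direction: each cell drains to its steepest
    downslope neighbor (codes 0-7); -1 marks pits/flats that need the
    depression-watershed treatment described above."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = dem.shape
    out = -np.ones((rows, cols), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, best_k = 0.0, -1
            for k, (dr, dc) in enumerate(offsets):
                dist = np.hypot(dr, dc)                       # diagonal = sqrt(2)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best:
                    best, best_k = slope, k
            out[r, c] = best_k
    return out
```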
Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review
Miao, Yinglong; McCammon, J. Andrew
2016-01-01
Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
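For instance, among the methods listed, accelerated molecular dynamics adds a boost potential to basins below a threshold energy E; a minimal sketch of the standard boost form (Hamelberg et al., 2004), with E and alpha chosen per system:

```python
def amd_boost(V, E, alpha):
    """Accelerated MD boost potential: raises basins below the threshold E
    to flatten the landscape, while leaving regions above E untouched."""
    if V >= E:
        return 0.0
    return (E - V) ** 2 / (alpha + E - V)

# The modified potential actually simulated is V*(r) = V(r) + amd_boost(V(r), E, alpha)
```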
Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors
Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka
2016-01-01
In this paper, we propose a blood pressure calculation and associated measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points at which the pulse can be measured on the surface of the human body, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by a partial least squares (PLS) regression analysis using a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the calculation based on the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the blood pressure calculated using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring the blood pressure was found to be suitable for use by many people. PMID:28036015
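A minimal sketch of the PLS calibration step using scikit-learn, with synthetic placeholder data standing in for measured pulse waves and reference blood pressures; the array shapes and component count are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: pulse wave signals (one row per heartbeat window), y: reference BP.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 250))     # 100 beats x 250 samples (assumed)
y_train = rng.normal(120, 10, size=100)   # reference systolic BP (mmHg)

pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)                 # builds the calibration curve

X_new = rng.normal(size=(1, 250))
print(pls.predict(X_new))                 # calculated blood pressure
```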
An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification of Maslen's asymmetric method. The present method provides a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, a paraboloid, and an elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and pressure profiles are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head
NASA Astrophysics Data System (ADS)
Xiong, Zhenyu; Vijayan, Sarath; Rana, Vijay; Jain, Amit; Rudin, Stephen; Bednarek, Daniel R.
2016-03-01
The aim of this work is to develop a method to calculate lens dose for fluoroscopically-guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the dose to the lens of the eye for the projection geometry and exposure parameters used in these procedures. This information is provided by a digital CAN bus on the Toshiba Infinix C-arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra on this machine were simulated using BEAMnrc. These spectra were compared to those determined by SpekCalc and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom and compared the surface dose distribution with that measured with Gafchromic film, and also for an SK150 head phantom and compared the lens dose with that measured with an ionization chamber. Both methods demonstrated good agreement. Organ dose calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with that calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp values. The total lens dose is then calculated by summing the values over all beam projections and can be included on the DTS report at the end of the procedure.
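A rough sketch of the summation over beam projections, assuming a precomputed library of normalized lens-dose values indexed by gantry angle and kVp; the grid values and the event-tuple format below are placeholders, not the authors' data:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Library of normalized lens-dose values (dose per unit output) on a grid
# of gantry angles and kVp; the numbers here are placeholders.
angles = np.array([-90, -45, 0, 45, 90])          # degrees
kvps = np.array([70, 80, 90, 100])                # kVp
norm_dose = np.random.default_rng(0).random((5, 4))

lookup = RegularGridInterpolator((angles, kvps), norm_dose)

def lens_dose(events):
    """Sum lens dose over all beam projections in the procedure log.
    events: list of (angle_deg, kvp, output) tuples, e.g. from the DTS log."""
    return sum(float(lookup([(a, k)])) * out for a, k, out in events)

print(lens_dose([(0, 80, 1.0), (45, 90, 0.5)]))
```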
Atomistic calculations of dislocation core energy in aluminium
Zhou, X. W.; Sills, R. B.; Ward, D. K.; ...
2017-02-16
A robust molecular dynamics simulation method for calculating dislocation core energies has been developed. This method has unique advantages: it does not require artificial boundary conditions, is applicable to mixed dislocations, and can yield highly converged results regardless of the atomistic system size. Utilizing a high-fidelity bond order potential, we have applied this method in aluminium to calculate the dislocation core energy as a function of the angle β between the dislocation line and Burgers vector. These calculations show that, for the face-centred-cubic aluminium explored, the dislocation core energy follows the same functional dependence on β as the dislocation elastic energy, Ec = A·sin²β + B·cos²β, and this dependence is independent of temperature between 100 and 300 K. By further analysing the energetics of an extended dislocation core, we elucidate the relationship between the core energy and radius of a perfect versus extended dislocation. With our methodology, the dislocation core energy can be accurately accounted for in models of plastic deformation.
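The quoted functional form can be fitted with a two-parameter least-squares regression; a minimal sketch with placeholder energies, not the paper's data:

```python
import numpy as np

# Fit Ec(beta) = A*sin^2(beta) + B*cos^2(beta) to simulated core energies.
beta = np.deg2rad([0, 30, 60, 90])        # 0 = screw, 90 = edge
Ec = np.array([0.20, 0.28, 0.41, 0.50])   # illustrative values only

M = np.column_stack([np.sin(beta) ** 2, np.cos(beta) ** 2])
(A, B), *_ = np.linalg.lstsq(M, Ec, rcond=None)
print(f"A (edge component) = {A:.3f}, B (screw component) = {B:.3f}")
```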
Characterizing property distributions of polymeric nanogels by size-exclusion chromatography.
Mourey, Thomas H; Leon, Jeffrey W; Bennett, James R; Bryan, Trevor G; Slater, Lisa A; Balke, Stephen T
2007-03-30
Nanogels are highly branched, swellable polymer structures with average diameters between 1 and 100 nm. Size-exclusion chromatography (SEC) fractionates materials in this size range, and it is commonly used to measure nanogel molar mass distributions. For many nanogel applications, it may be more important to calculate the particle size distribution from the SEC data than it is to calculate the molar mass distribution. Other useful nanogel property distributions include particle shape, area, and volume, as well as polymer volume fraction per particle. All can be obtained from multi-detector SEC data with proper calibration and data analysis methods. This work develops the basic equations for calculating several of these differential and cumulative property distributions and applies them to SEC data from the analysis of polymeric nanogels. The methods are analogous to those used to calculate the more familiar SEC molar mass distributions. Calibration methods and characteristics of the distributions are discussed, and the effects of detector noise and mismatched concentration and molar mass sensitive detector signals are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Krieger, J.B.; Norman, M.R.
1991-11-15
The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to the band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD---and it is believed to be the case even for the exact Kohn-Sham potential---both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
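A sketch of the lagged-product idea, assuming missing points are marked as NaN and lags are indexed by the average sampling interval; details such as windowing and the Blackman-Tukey mean-lagged-product variant are omitted:

```python
import numpy as np

def acf_missing(x, max_lag):
    """Autocorrelation from lagged products, skipping missing samples (NaNs)."""
    x = np.asarray(x, float)
    good = ~np.isnan(x)
    x0 = np.where(good, x - np.nanmean(x), 0.0)   # zero out missing points
    n = len(x)
    acf = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        pairs = good[: n - k] & good[k:]          # only count valid pairs
        acf[k] = np.sum(x0[: n - k] * x0[k:]) / max(pairs.sum(), 1)
    return acf

def correlation_power_spectrum(x, max_lag):
    """Power spectrum as the discrete Fourier transform of the ACF
    (even extension of the non-negative lags)."""
    acf = acf_missing(x, max_lag)
    return np.abs(np.fft.rfft(np.concatenate([acf, acf[-2:0:-1]])))
```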
Molecular dynamics calculation of rotational diffusion coefficient of a carbon nanotube in fluid.
Cao, Bing-Yang; Dong, Ruo-Yu
2014-01-21
Rotational diffusion processes are correlated with nanoparticle visualization and manipulation techniques, widely used in nanocomposites, nanofluids, bioscience, and so on. However, a systematical methodology of deriving this diffusivity is still lacking. In the current work, three molecular dynamics (MD) schemes, including equilibrium (Green-Kubo formula and Einstein relation) and nonequilibrium (Einstein-Smoluchowski relation) methods, are developed to calculate the rotational diffusion coefficient, taking a single rigid carbon nanotube in fluid argon as a case. We can conclude that the three methods produce same results on the basis of plenty of data with variation of the calculation parameters (tube length, diameter, fluid temperature, density, and viscosity), indicative of the validity and accuracy of the MD simulations. However, these results have a non-negligible deviation from the theoretical predictions of Tirado et al. [J. Chem. Phys. 81, 2047 (1984)], which may come from several unrevealed factors of the theory. The three MD methods proposed in this paper can also be applied to other situations of calculating rotational diffusion coefficient.
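Minimal sketches of the equilibrium estimators referred to above (Green-Kubo and Einstein relations), for rotation about a single transverse axis; the normalization conventions and input shapes are assumptions:

```python
import numpy as np

def D_rot_green_kubo(omega, dt, max_lag=None):
    """Green-Kubo estimate per rotation axis, analogous to the translational
    D = integral of <v(0) v(t)> dt: integrate the angular-velocity ACF.
    omega: 1D array of angular velocity about one transverse axis."""
    n = len(omega)
    max_lag = max_lag or n // 4
    acf = np.array([np.mean(omega[: n - k] * omega[k:]) for k in range(max_lag)])
    return np.trapz(acf, dx=dt)

def D_rot_einstein(phi, t):
    """Einstein-Smoluchowski estimate: <phi(t)^2> = 2 D_r t, so D_r is half
    the slope of the squared rotation angle's growth (single-trajectory
    illustration; production runs average over ensembles/time origins)."""
    return np.polyfit(t, phi ** 2, 1)[0] / 2.0
```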
Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano
2018-04-11
Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.
Mechanisms of interfacial reactivity in near surface and extreme environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ying; Balaska, Eric; Weare, John
The local water structure surrounding ions in aqueous solutions greatly affects their chemical properties such as reaction rates, ion association, and proton and electron transport. These properties determine the behavior of ions in natural aqueous environments. For example, ore transport is facilitated by chloride ion pair formation, and the reaction of ions at an interface is strongly dependent on the dehydration of the ion hydration shell. We are developing the use of high-resolution XAFS observations and 1st principles based MD-XAFS analysis (spectra simulated using 1st principle methods with no adjustable parameters, AIMD) to interpret the solution properties of strongly interacting aqueous solutes under arbitrary pressure and temperature conditions. In the 1st principle MD-XAFS method, density functional theory (DFT) based MD simulations (Car and Parrinello, 1985) are used to generate a large ensemble of structural snapshots of the hydration region. These are then used to generate scattering intensities. I emphasize three points about this novel approach to analyzing XAFS data. 1st: As illustrated in Figure 1, the level of agreement between the calculated and observed intensities is considerably higher than has been obtained by any XAFS analysis to date (note the 2nd shell region, R > 2 Å). 2nd: This result was obtained from a parameter-free simulation with no fitting of the interaction potentials to any data. This supports the use of these methods for more difficult environments and more complex solutes (polyions). 3rd: New information about the shell structure (Figure 1) is now available because of this more detailed agreement. We note also that both multiple scattering and second shell features are well represented in the analysis. As far as we know this is the 1st analysis of second shell structure and multiple scattering. Excellent agreement has been obtained for most of the third row metal ions: Ca2+, Zn2+, Cu2+, Ni2+, Co2+, Mn2+, Fe3+, Cr3+. Calculations on these systems are demanding because of their open electronic shells and high ionic charge. Principal Investigator: Professor John Weare (University of California, San Diego). The prediction of the interactions of geochemical fluids with minerals, nanoparticles, and colloids under extreme near surface conditions of temperature (T) and pressure (P) is a grand challenge research need in geosciences (U.S. DOE 2007, Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems). To evaluate the impact of these processes on energy production and management strategies it is necessary to have a high level of understanding of the interaction between complex natural fluids and mineral formations. This program emphasizes 1st principle parameter-free simulations of complex chemical processes in solutions, in the mineral phase, and at the interfaces between these phases. New computational tools (with emphasis on oxide materials and reaction dynamics), tailored to treat the wide range of conditions and time scales experienced in such geochemical applications, have been developed. Because of the sensitivity of the interactions in these systems to electronic structure and local bonding environments, and of the need to describe bond breaking/formation, our simulations are based on interactions calculated at the electronic structure level (ab-initio molecular dynamics, AIMD).
The progress in the computational aspects of the program may be summarized in terms of the following themes (objectives): Development of efficient parameter-free dynamical simulation technology based on 1st principles force and energy calculations, especially adapted for geochemical applications (e.g., minerals, interfaces and aqueous solutions) (continuing program); Calculation of the dynamics of water structure in the surface-water interface of transition metal oxides and oxyhydroxides; and Development of improved (beyond DFT+GGA) electronic structure calculations for minerals and the interface region that more accurately calculate electron correlation, spin density, and localization. The focus of the program is also on the iron oxide and oxyhydroxide minerals and Fe2+(aq)/Fe3+(cr) oxidation in the mineral-solution interface region. These methods included the development of model Hamiltonian methods that can be solved to near convergence for single-site models (DMFT) and many-body perturbation methods (MP2, GW); Development of time decomposition methods to extend the time scales of molecular dynamics (MD) simulations and support the use of high-complexity electronic structure calculations (MP2, CCSD(T)) of forces for use in dynamical simulations where very high chemical accuracy is required (microsolvated reactions in absorbed surface layers); and The development of a new linear-scaling finite element solver for the eigenvalue problem that supports the solution of quantum problems with unusual potential and boundary values. Application progress of the above new simulation technology to problems of geochemical interest includes: The prediction of metal oxide surface structure and the reduction/oxidation of Fe3+(cr)/Fe2+(aq) in metal oxide (hematite, goethite)/solution interfaces. Result: water interacts strongly with the 001 hematite surface; the interaction of water with the 100 goethite surface is weak; The study of ion solvation and the composition of ion hydration shells under extreme conditions (focus on Fe3+/2+, Al3+ and Mg2+ and their hydroxide speciation). Result: ion association in water solutions can be calculated from 1st principle methods; efficient sampling of the free energy requires more development; The continued development of new high-resolution analysis of XAFS scattering of disordered systems (particularly Al, Mg) and of XANES calculations for aqueous ions. Result: EXAFS spectra can be calculated to high accuracy with DFT-level dynamic simulations; The exploration of electron localization and electron transport in metal oxides (highly correlated materials). Result: proper description of electron localization requires levels of calculation beyond DFT; and Localization of electrons in DFT-type Hamiltonians was studied. Result: for very high Dirac exchange, new solutions (new unphysical bifurcations) to the eigenvalue problem are found. The program was highly collaborative, involving faculty and students in mathematics, physics and computer science departments as well as coworkers at the Pacific Northwest National Laboratory (PNNL). The students in this program had the opportunity to develop skills in the development of methods, the implementation of methods on high-performance parallel computers, and the application of these methods to problems in geochemical science. Much of the software that was developed was incorporated in the NWChem software package maintained by PNNL.
Wang, L; Lovelock, M; Chui, C S
1999-12-01
To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneity range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.
Satellite Articulation Characterization from an Image Trajectory Matrix Using Optimization
NASA Astrophysics Data System (ADS)
Curtis, D. H.; Cobb, R. G.
Autonomous on-orbit satellite servicing and inspection benefits from an inspector satellite that can autonomously gain as much information as possible about the primary satellite. This includes performance of articulated objects such as solar arrays, antennas, and sensors. This paper presents a method of characterizing the articulation of a satellite using resolved monocular imagery. A simulated point cloud representing a nominal satellite with articulating solar panels and a complex articulating appendage is developed and projected to the image coordinates that would be seen from an inspector following a given inspection route. A method is developed to analyze the resulting image trajectory matrix. The developed method takes advantage of the fact that the route of the inspector satellite is known to assist in the segmentation of the points into different rigid bodies, the creation of the 3D point cloud, and the identification of the articulation parameters. Once the point cloud and the articulation parameters are calculated, they can be compared to the known truth. The error in the calculated point cloud is determined as well as the difference between the true workspace of the satellite and the calculated workspace. These metrics can be used to compare the quality of various inspection routes for characterizing the satellite and its articulation.
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1982-01-01
Various problems which are confronted in the development of an unsteady finite difference potential code are reviewed mainly in the context of what is done for a typical small disturbance and full potential method. The issues discussed include choice of equations, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three dimensional rotor calculations, are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundberg, Kenneth Randall
1976-01-01
A method is developed to optimize the separated-pair independent particle (SPIP) wave function; it is a special case of the separated-pair theory obtained by using two-term natural expansions of the geminals. The orbitals are optimized by a theory based on the generalized Brillouin theorem and iterative configuration interaction (CI) calculations in the space of the SPIP function and its single excitations. The geminal expansion coefficients are optimized by serial 2 x 2 CI calculations. Formulas are derived for the matrix elements. An algorithm to implement the method is presented, and the work needed to evaluate the molecular integrals is discussed.
Determination of Vitamin E in Cereal Products and Biscuits by GC-FID.
Pasias, Ioannis N; Kiriakou, Ioannis K; Papakonstantinou, Lila; Proestos, Charalampos
2018-01-01
A rapid, precise and accurate method for the determination of vitamin E (α-tocopherol) in cereal products and biscuits has been developed. The uncertainty was calculated for the first time, and the methods were performed for different cereal products and biscuits, characterized as "superfoods". The limits of detection and quantification were calculated. The accuracy and precision were estimated using the certified reference material FAPAS T10112QC, and the determined values were in good accordance with the certified values. The health claims according to the daily reference values for vitamin E were calculated, and the results proved that the majority of the samples examined showed a percentage daily value higher than 15%.
NASA Astrophysics Data System (ADS)
Takeda, Kotaro; Honda, Kentaro; Takeya, Tsutomu; Okazaki, Kota; Hiraki, Tatsurou; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Fukuda, Hiroshi; Usui, Mitsuo; Nosaka, Hideyuki; Yamamoto, Tsuyoshi; Yamada, Koji
2015-01-01
We developed a design technique for a photonics-electronics convergence system by using an equivalent circuit of optical devices in an electrical circuit simulator. We used the transfer matrix method to calculate the response of an optical device. This method uses physical parameters and dimensions of optical devices as calculation parameters to design a device in the electrical circuit simulator. It also uses an intermediate frequency to express the wavelength dependence of optical devices. By using both techniques, we simulated bit error rates and eye diagrams of optical and electrical integrated circuits and calculated the influence of device structure changes and the wavelength shift penalty.
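The transfer matrix pattern, shown here for the generic case of a thin-film stack at normal incidence rather than the paper's waveguide devices, cascades a 2x2 matrix per layer and reads off the amplitude response:

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic 2x2 matrix of a dielectric layer at normal incidence
    (refractive index n, thickness d, vacuum wavelength lam)."""
    delta = 2 * np.pi * n * d / lam            # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_r_t(ns, ds, lam, n_in=1.0, n_out=1.5):
    """Cascade the layer matrices and return (r, t) amplitude responses."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(ns, ds):
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    t = 2 * n_in / (n_in * B + C)
    return r, t

# Quarter-wave layer of n = 1.38 on glass at 550 nm: reduced reflectance.
print(stack_r_t([1.38], [550e-9 / (4 * 1.38)], 550e-9))
```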
76 FR 77563 - Florida Power & Light Company; St. Lucie Plant, Unit No. 1; Exemption
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
....2, because the P-T limits developed for St. Lucie, Unit 1, use a finite element method to determine... Code for calculating K Im factors, and instead applies FEM [finite element modeling] methods for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurzeja, R.; Werth, D.; Buckley, R.
The Atmospheric Technology Group at SRNL developed a new method to detect signals from Weapons of Mass Destruction (WMD) activities in a time series of chemical measurements at a downwind location. This method was tested with radioxenon measured in Russia and Japan after the 2013 underground test in North Korea. This LDRD calculated the uncertainty in the method with the measured data and also for a case with the signal reduced to 1/10 of its measured value. The research showed that the uncertainty in the calculated probability of origin from the NK test site was small enough to confirm the test. The method was also well-behaved for small signal strengths.
Advanced numerical methods for three dimensional two-phase flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toumi, I.; Caruge, D.
1997-07-01
This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These are finite volume methods based on Approximate Riemann Solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. As far as the two-fluid model is hyperbolic, this numerical method is very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors, concerning upper plenum calculations and a steady-state flow in the core with rod bow effect evaluation, are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.
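The Approximate-Riemann-Solver flux idea, illustrated here on scalar Burgers' equation rather than the two-fluid system: a Roe-type flux needs only the states on either side of a cell face and a Roe-averaged wave speed:

```python
import numpy as np

def roe_flux_burgers(uL, uR):
    """Roe-type flux for Burgers' equation f(u) = u^2/2:
    F = (f(uL)+f(uR))/2 - |a|(uR-uL)/2 with Roe-averaged speed a."""
    fL, fR = 0.5 * uL ** 2, 0.5 * uR ** 2
    a = 0.5 * (uL + uR)              # Roe average: (fR - fL) / (uR - uL)
    return 0.5 * (fL + fR) - 0.5 * np.abs(a) * (uR - uL)

# One first-order finite-volume update using the Roe flux at each face:
u = np.where(np.arange(100) < 50, 1.0, 0.0)     # shock-tube-like initial data
dx, dt = 1.0, 0.5
F = roe_flux_burgers(u[:-1], u[1:])
u[1:-1] -= dt / dx * (F[1:] - F[:-1])
```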
A method to calculate synthetic waveforms in stratified VTI media
NASA Astrophysics Data System (ADS)
Wang, W.; Wen, L.
2012-12-01
Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to transversely isotropic (VTI) media. GRTM remains stable in high-frequency calculations because it explicitly excludes the exponential growth terms in the propagation matrix, whereas the Haskell matrix method (Haskell 1964) is limited to low-frequency computation. In the implementation, we also improve GRTM in two aspects. 1) We apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence. This improvement is especially important when the depths of the source and receiver are close. 2) We adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) in the discrete wavenumber integration so that the integration can still be carried out efficiently at large epicentral distances. Because the calculation is independent for each frequency, the program can also be effectively implemented in parallel computing. Our method provides a powerful tool to synthesize broadband seismograms of VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
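For reference, the Shanks transformation used in improvement (1) accelerates a slowly converging sequence of partial sums; a minimal sketch, with a demonstration series that is not from the paper:

```python
import math

def shanks(seq):
    """Shanks transformation: S(A_n) = (A_{n+1} A_{n-1} - A_n^2) /
    (A_{n+1} + A_{n-1} - 2 A_n), applied to a sequence of partial sums."""
    return [
        (seq[n + 1] * seq[n - 1] - seq[n] ** 2)
        / (seq[n + 1] + seq[n - 1] - 2 * seq[n])
        for n in range(1, len(seq) - 1)
    ]

# Partial sums of the slowly converging series ln(2) = 1 - 1/2 + 1/3 - ...
partial = [sum((-1) ** k / (k + 1) for k in range(n + 1)) for n in range(8)]
print(partial[-1], shanks(partial)[-1], math.log(2))  # ~0.63, ~0.693, 0.693...
```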
Ortiz, Marco G.
1993-01-01
A method for modeling a conducting material sample or structure system as an electrical network of resistances, in which each resistance of the network is representative of a specific physical region of the system. The method encompasses measuring a resistance between two external leads and using this measurement in a series of equations describing the network to solve for the network resistances for a specified region and temperature. A calibration system is then developed using the calculated resistances at specified temperatures. This allows for the translation of the calculated resistances to a region temperature. The method can also be used to detect and quantify structural defects in the system.
Development of a neural network technique for KSTAR Thomson scattering diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seung Hun, E-mail: leesh81@nfri.re.kr; Lee, J. H.; Yamada, I.
Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Measuring the plasma parameters in situ is essential for controlling tokamak plasmas in real time. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.
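A rough sketch of a network of the quoted size (one hidden layer with eight nodes), using scikit-learn on synthetic placeholder data rather than real polychromator signals:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: rows are scattered-light channel signals,
# targets are electron temperatures from the chi-squared fits.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))                        # 5 channels (assumed)
Te = 1.0 + 4.0 * X[:, 0] + rng.normal(0, 0.05, 1000)   # keV, illustrative

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X[:800], Te[:800])                       # slow training, fast inference
print(net.score(X[800:], Te[800:]))              # inference replaces the fit loop
```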
Compensation of the sheath effects in cylindrical floating probes
NASA Astrophysics Data System (ADS)
Park, Ji-Hwan; Chung, Chin-Wook
2018-05-01
In cylindrical floating probe measurements, the plasma density and electron temperature are overestimated due to sheath expansion and oscillation. To reduce these sheath effects, a compensation method based on well-developed floating sheath theories is proposed and applied to the floating harmonic method. The iterative calculation of the Allen-Boyd-Reynolds equation can derive the floating sheath thickness, which can be used to calculate the effective ion collection area; in this way, an accurate ion density is obtained. The Child-Langmuir law is used to calculate the ion harmonic currents caused by sheath oscillation of the alternating-voltage-biased probe tip. Accurate plasma parameters can be obtained by subtracting these ion harmonic currents from the total measured harmonic currents. Herein, the measurement principles and compensation method are discussed in detail and an experimental demonstration is presented.
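The paper iterates the Allen-Boyd-Reynolds equation for the sheath thickness; as a simpler closed-form illustration of how a sheath estimate enlarges the effective ion collection area, the high-voltage Child-Langmuir relation (Lieberman & Lichtenberg) can be used, with illustrative plasma parameters below:

```python
import numpy as np

e, eps0 = 1.602e-19, 8.854e-12

def child_law_sheath(n_e, Te_eV, V0):
    """Child-Langmuir sheath thickness:
    s = (sqrt(2)/3) * lambda_De * (2 V0 / Te)^(3/4)."""
    lambda_De = np.sqrt(eps0 * Te_eV / (e * n_e))   # electron Debye length
    return (np.sqrt(2) / 3) * lambda_De * (2 * V0 / Te_eV) ** 0.75

def effective_area(r_probe, L_probe, s):
    """Cylindrical probe ion collection area expanded by the sheath."""
    return 2 * np.pi * (r_probe + s) * L_probe

s = child_law_sheath(n_e=1e16, Te_eV=3.0, V0=15.0)   # illustrative values
print(s, effective_area(2.5e-4, 1e-2, s))
```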
Calculations of unsteady turbulent boundary layers with flow reversal
NASA Technical Reports Server (NTRS)
Nash, J. F.; Patel, V. C.
1975-01-01
The results are presented of a series of computational experiments aimed at studying the characteristics of time-dependent turbulent boundary layers with embedded reversed-flow regions. A calculation method developed earlier was extended to boundary layers with reversed flows for this purpose. The calculations were performed for an idealized family of external velocity distributions, and covered a range of degrees of unsteadiness. The results confirmed those of previous studies in demonstrating that the point of flow reversal is nonsingular in a time-dependent boundary layer. A singularity was observed to develop downstream of reversal, under certain conditions, accompanied by the breakdown of the boundary-layer approximations. A tentative hypothesis was advanced in an attempt to predict the appearance of the singularity, and is shown to be consistent with the calculated results.
NASA Astrophysics Data System (ADS)
Lin, Lin
The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. The PEXSI method has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure for hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptive compressed exchange method for accelerating hybrid functional calculations. DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, Sloan Fellowship.
First principles Peierls-Boltzmann phonon thermal transport: A topical review
Lindsay, Lucas
2016-08-05
The advent of coupled thermal transport calculations with interatomic forces derived from density functional theory has ushered in a new era of fundamental microscopic insight into lattice thermal conductivity. Subsequently, significant new understanding of phonon transport behavior has been developed with these methods, and because they are parameter free and successfully benchmarked against a variety of systems, they also provide reliable predictions of thermal transport in systems for which little is known. This topical review will describe the foundation from which first principles Peierls-Boltzmann transport equation methods have been developed, and briefly describe important necessary ingredients for accurate calculations. Sample highlights of reported work will be presented to illustrate the capabilities and challenges of these techniques, and to demonstrate the suite of tools available, with an emphasis on thermal transport in micro- and nano-scale systems. In conclusion, future challenges and opportunities will be discussed, drawing attention to prospects for methods development and applications.
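In the simplest relaxation-time approximation, the Peierls-Boltzmann conductivity reduces to a mode sum; the full first-principles machinery reviewed here iterates beyond this, but the RTA form shows the required ingredients (mode heat capacities, group velocities, lifetimes):

```python
import numpy as np

def kappa_rta(C, v, tau, volume):
    """Relaxation-time-approximation thermal conductivity tensor:
    kappa_ab = (1/V) * sum over modes of C * v_a * v_b * tau.
    C: (n,) mode heat capacities; v: (n, 3) group velocities; tau: (n,) lifetimes."""
    k = np.zeros((3, 3))
    for Ci, vi, ti in zip(C, v, tau):
        k += Ci * np.outer(vi, vi) * ti
    return k / volume

# Two illustrative modes (all values placeholders):
print(kappa_rta(C=[1.0, 0.8], v=np.array([[5e3, 0, 0], [0, 4e3, 0]]),
                tau=[1e-11, 2e-11], volume=1e-27))
```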
An automatic method to calculate heart rate from zebrafish larval cardiac videos.
Kang, Chia-Pin; Tu, Hung-Chi; Fu, Tzu-Fun; Wu, Jhe-Ming; Chu, Po-Hsun; Chang, Darby Tien-Hao
2018-05-09
Zebrafish is a widely used model organism for studying heart development and cardiac-related pathogenesis. With its ability to survive without a functional circulation at larval stages, strong genetic similarity between zebrafish and mammals, prolific reproduction and optically transparent embryos, zebrafish is powerful in modeling mammalian cardiac physiology and pathology as well as in large-scale high throughput screening. However, an economical and convenient tool for rapid evaluation of fish cardiac function is still needed. There have been several image analysis methods to assess cardiac functions in zebrafish embryos/larvae, but they can still be improved to reduce manual intervention in the entire process. This work developed a fully automatic method to calculate heart rate, an important parameter in analyzing cardiac function, from videos. It contains several filters to identify the heart region, to reduce video noise and to calculate heart rates. The proposed method was evaluated with 32 zebrafish larval cardiac videos recorded at three days post-fertilization. The heart rate measured by the proposed method was comparable to that determined by manual counting. The experimental results show that the proposed method does not lose accuracy while largely reducing the labor cost and uncertainty of manual counting. With the proposed method, researchers do not have to manually select a region of interest before analyzing videos. Moreover, the filters designed to reduce video noise can alleviate background fluctuations during the video recording stage (e.g. shifting), which makes it easier to generate usable videos and therefore reduces manual effort during recording.
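The final rate-extraction step can be as simple as locating the dominant spectral peak of the mean heart-region intensity over time; a minimal sketch of that step only, not the authors' filter pipeline:

```python
import numpy as np

def heart_rate_bpm(frame_means, fps):
    """Estimate heart rate from the mean pixel intensity of the heart
    region over time, via the dominant peak of the signal's spectrum.
    frame_means: 1D array, one mean-intensity value per video frame."""
    x = frame_means - np.mean(frame_means)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs > 0.5) & (freqs < 10.0)      # plausible 30-600 bpm band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Example: a 3 Hz synthetic pulse sampled at 30 fps should give ~180 bpm.
t = np.arange(300) / 30.0
print(heart_rate_bpm(np.sin(2 * np.pi * 3.0 * t), fps=30))
```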
Andrews, D.J.
1985-01-01
A numerical boundary integral method, relating slip and traction on a plane in an elastic medium by convolution with a discretized Green function, can be linked to a slip-dependent friction law on the fault plane. Such a method is developed here in two-dimensional plane-strain geometry. Spontaneous plane-strain shear ruptures can make a transition from sub-Rayleigh to near-P propagation velocity. Results from the boundary integral method agree with earlier results from a finite difference method on the location of this transition in parameter space. The methods differ in their prediction of rupture velocity following the transition. The trailing edge of the cohesive zone propagates at the P-wave velocity after the transition in the boundary integral calculations.
Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle
2015-06-09
Recent advances in improved force fields and sampling methods have made it possible to calculate protein–ligand binding free energies accurately. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
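One standard way to merge the per-pose results described above is a Boltzmann-weighted combination of the individual FEP free energies; the mode populations then identify the dominant binding mode. The sketch below illustrates that bookkeeping step under illustrative kcal/mol inputs, not the JNK1 data.

```python
# Combine per-pose relative binding free energies into one corrected value.
import numpy as np

def combine_binding_modes(dG_modes, temperature=298.15):
    kT = 0.0019872041 * temperature            # gas constant in kcal/mol/K times T
    dG = np.asarray(dG_modes, dtype=float)
    boltzmann = np.exp(-dG / kT)
    combined = -kT * np.log(boltzmann.sum())   # Boltzmann-summed free energy
    populations = boltzmann / boltzmann.sum()  # dominant mode = argmax population
    return combined, populations

dG, pop = combine_binding_modes([-7.2, -6.1, -4.8])   # three enumerated poses
print(round(dG, 2), pop.round(3))  # combined dG sits slightly below the best mode
```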
Computational Nanotechnology Program
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1997-01-01
The objectives are: (1) development of methodological and computational tools for the quantum chemistry study of carbon nanostructures and (2) development of a fundamental understanding of the bonding, reactivity, and electronic structure of carbon nanostructures. Our calculations have continued to play a central role in understanding the outcome of the carbon nanotube macroscopic production experiment. The calculations on buckyonions offer a resolution of a long-standing controversy between experiment and theory. Our new tight binding method offers increased speed for realistic simulations of large carbon nanostructures.
NASA Technical Reports Server (NTRS)
Geissler, W.
1983-01-01
A finite difference method has been developed to calculate the unsteady boundary layer over an oscillating flat plate. Low- and high-frequency approximations were used for comparison with numerical results. Special emphasis was placed on the behavior of the flow and on the numerical calculation procedure once reversed flow occurs over part of the oscillation cycle. The numerical method displayed neither problems nor singular behavior at the beginning of or within the reversed-flow region. Calculations, however, reached a limit when, at high oscillation amplitudes, the back-flow region extended to the plate's leading edge. It is assumed that this limit is caused by the special behavior of the flow at the plate's leading edge, where the boundary layer equations are not valid.
One-loop corrections to light cone wave functions: The dipole picture DIS cross section
NASA Astrophysics Data System (ADS)
Hänninen, H.; Lappi, T.; Paatelainen, R.
2018-06-01
We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.
Simplified method to solve sound transmission through structures lined with elastic porous material.
Lee, J H; Kim, J
2001-11-01
An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both a solid phase and a fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: model the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure is conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extent, which has the same cross-sectional construction as the actual structure, is solved based on the full theory, and the strongest wave component is identified. In the second step, sound transmission through the actual structure is solved by modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double-walled cylindrical shells with a porous core is calculated using the simplified method.
Thermal-hydraulic analysis capabilities and methods development at NYPA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.
1987-01-01
The operation of a nuclear power plant must be regularly supported by various thermal-hydraulic (T/H) analyses that may include final safety analysis report (FSAR) design basis calculations and licensing evaluations and conservative and best-estimate analyses. The development of in-house T/H capabilities provides the following advantages: (a) it leads to a better understanding of the plant design basis and operating characteristics; (b) methods developed can be used to optimize plant operations and enhance plant safety; (c) such a capability can be used for design reviews, checking vendor calculations, and evaluating proposed plant modifications; and (d) in-house capability reduces the cost of analysis. This paper gives an overview of the T/H capabilities and current methods development activity within the engineering department of the New York Power Authority (NYPA) and will focus specifically on reactor coolant system (RCS) transients and plant dynamic response for non-loss-of-coolant accident events. This paper describes NYPA experience in performing T/H analyses in support of pressurized water reactor plant operation.
Centrifugal pump impeller optimization using computational fluid dynamics methods
NASA Astrophysics Data System (ADS)
Grigoriev, S.; Mayorov, S.; Polyakov, R.
2017-08-01
The paper presents the results of fluid flow calculations in the variable-geometry channels of a centrifugal pump for service water in a methanol production chain. Modeling the flow in ANSYS CFX allowed us to develop recommendations for adjusting the impeller profile, significantly decreasing cavitation wear and increasing the service life severalfold.
Computation of the Genetic Code
NASA Astrophysics Data System (ADS)
Kozlov, Nicolay N.; Kozlova, Olga N.
2018-03-01
One of the problems in the development of a mathematical theory of the genetic code (a summary is presented in [1], the details in [2]) is the problem of calculating the genetic code. No similar problem is known worldwide, and it could be posed only in the 21st century. This work is devoted to one approach to solving it. For the first time, a detailed description of the method for calculating the genetic code is provided; the idea was first published earlier [3], and the choice of one of the most important sets for the calculation was based on an article [4]. Such a set of amino acids corresponds to a complete set of representations of the set of overlapping triplet genes belonging to the same DNA strand. A separate issue was the starting point that triggers the iterative search over all codes consistent with the initial data. Mathematical analysis has shown that the said set contains some ambiguities, which were found thanks to our proposed compressed representation of the set. As a result, the developed calculation method was limited to two main stages of research, where at the first stage only part of the set was used in the calculations. The proposed approach significantly reduces the amount of computation at each step in this complex discrete structure.
DOT National Transportation Integrated Search
2012-11-30
This report presents the results of the study to extend the useful attenuation range of the Approximate Method outlined in the American National Standard, Method for Calculation of the Absorption of Sound by the Atmosphere (ANSI S1.26-1995), an...
A Method of Estimating the Knock Rating of Hydrocarbon Fuel Blends
NASA Technical Reports Server (NTRS)
Sanders, Newell D.
1943-01-01
The usefulness of the knock ratings of pure hydrocarbon compounds would be increased if some reliable method of calculating the knock ratings of fuel blends were known. The purpose of this study was to investigate the possibility of developing a method of predicting the knock ratings of fuel blends.
Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.
Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao
2013-09-10
Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In the pKa calculation, the charge on the proton is varied fractionally between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many of the advantages of a large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potentials approach. The theory and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data.
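The conversion from a simulated deprotonation free energy to a pKa is straightforward; a common variant, sketched below, computes it relative to a reference compound with a known pKa so that systematic errors partially cancel. The numbers are illustrative assumptions, not the paper's results.

```python
# Final bookkeeping step: free-energy difference -> pKa, via a reference acid.
import math

def pka_from_free_energy(dG, dG_ref, pka_ref, temperature=298.15):
    """dG, dG_ref: deprotonation free energies in kcal/mol."""
    RT = 0.0019872041 * temperature           # kcal/mol
    return pka_ref + (dG - dG_ref) / (RT * math.log(10.0))

# Hypothetical free energies for a target compound and a reference acid.
print(pka_from_free_energy(dG=270.1, dG_ref=268.0, pka_ref=15.5))
```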
Hu, B.X.; He, C.
2008-01-01
An iterative inverse method, the sequential self-calibration method, is developed for mapping the spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.
HyPEP FY06 Report: Models and Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE report
2006-09-01
The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts and its cost models will enable HyPEP to be well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. FY-06 report includes a description of reference designs, methods used in this study, models and computational strategies developed for the first year effort. Results from computer codes such as HYSYS and GASS/PASS-H used by Idaho National Laboratory and Argonne National Laboratory, respectively will be benchmarked with HyPEP results in the following years.
Simplified procedure for computing the absorption of sound by the atmosphere
DOT National Transportation Integrated Search
2007-10-31
This paper describes a study that resulted in the development of a simplified method for calculating attenuation by atmospheric absorption for wide-band sounds analyzed by one-third octave-band filters. The new method [referred to herein as the...
Head-and-neck IMRT treatments assessed with a Monte Carlo dose calculation engine.
Seco, J; Adams, E; Bidmead, M; Partridge, M; Verhaegen, F
2005-03-07
IMRT is frequently used in the head-and-neck region, which contains materials of widely differing densities (soft tissue, bone, air cavities). Conventional methods of dose computation for these complex, inhomogeneous IMRT cases involve significant approximations. In the present work, a methodology for the development, commissioning and implementation of a Monte Carlo (MC) dose calculation engine for intensity modulated radiotherapy (MC-IMRT) is proposed that can be used by radiotherapy centres interested in developing MC-IMRT capabilities for research or clinical evaluations. The method proposes three levels for developing, commissioning and maintaining a MC-IMRT dose calculation engine: (a) development of a MC model of the linear accelerator, (b) validation of the MC model for IMRT and (c) periodic quality assurance (QA) of the MC-IMRT system. The first step, level (a), in developing an MC-IMRT system is to build a model of the linac that correctly predicts standard open field measurements for percentage depth-dose and off-axis ratios. Validation of MC-IMRT, level (b), can be performed in a Rando phantom and in a homogeneous water-equivalent phantom. Ultimately, periodic quality assurance of the MC-IMRT system is needed to verify the MC-IMRT dose calculation system, level (c). Once the MC-IMRT dose calculation system is commissioned it can be applied to more complex clinical IMRT treatments. The MC-IMRT system implemented at the Royal Marsden Hospital was used for IMRT calculations for a patient undergoing treatment for primary disease with nodal involvement in the head-and-neck region (primary treated to 65 Gy and nodes to 54 Gy), while sparing the spinal cord, brain stem and parotid glands. Preliminary MC results predict a decrease of approximately 1-2 Gy in the median dose of both the primary tumour and nodal volumes (compared with both pencil beam and collapsed cone). This is possibly due to the large air cavity (the larynx of the patient) situated in the centre of the primary PTV and the approximations present in the dose calculation.
Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.
Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H
2015-11-01
Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods focused on developing a reliable estimation of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool to understand these methods and realize the practical issues that may arise in such calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
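As a concrete instance of the fluctuation-based estimators reviewed here, the sketch below computes the Schlitter upper bound on conformational entropy from the mass-weighted covariance matrix of Cartesian atomic fluctuations; the random trajectory merely stands in for real simulation coordinates.

```python
# Schlitter entropy bound from atomic fluctuations (a minimal sketch).
import numpy as np

KB = 1.380649e-23       # J/K
HBAR = 1.054571817e-34  # J*s
AMU = 1.66053907e-27    # kg

def schlitter_entropy(coords, masses_amu, temperature=300.0):
    """coords: (n_frames, n_atoms, 3) in meters; returns an entropy bound in J/K."""
    n_frames, n_atoms, _ = coords.shape
    x = coords.reshape(n_frames, 3 * n_atoms)
    x = x - x.mean(axis=0)                       # fluctuations about the mean
    sigma = x.T @ x / n_frames                   # covariance matrix (3N x 3N)
    m = np.repeat(np.asarray(masses_amu) * AMU, 3)
    arg = np.eye(3 * n_atoms) + (KB * temperature * np.e**2 / HBAR**2) * (m[:, None] * sigma)
    sign, logdet = np.linalg.slogdet(arg)        # det is positive for valid input
    return 0.5 * KB * logdet

rng = np.random.default_rng(0)
traj = rng.normal(scale=5e-11, size=(2000, 5, 3))  # 5 atoms, ~0.5 A fluctuations
print(schlitter_entropy(traj, masses_amu=[12.0] * 5))
```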
Configuration-constrained cranking Hartree-Fock pairing calculations for sidebands of nuclei
NASA Astrophysics Data System (ADS)
Liang, W. Y.; Jiao, C. F.; Wu, Q.; Fu, X. M.; Xu, F. R.
2015-12-01
Background: Nuclear collective rotations have been successfully described by the cranking Hartree-Fock-Bogoliubov (HFB) model. However, for rotational sidebands which are built on intrinsic excited configurations, it may not be easy to find converged cranking HFB solutions. The nonconservation of the particle number in the BCS pairing is another shortcoming. To improve the pairing treatment, a particle-number-conserving (PNC) pairing method was suggested. But the existing PNC calculations were performed within a phenomenological one-body potential (e.g., Nilsson or Woods-Saxon) in which one has to deal with the double-counting problem. Purpose: The present work aims at an improved description of nuclear rotations, particularly for the rotations of excited configurations, i.e., sidebands. Methods: We developed a configuration-constrained cranking Skyrme Hartree-Fock (SHF) calculation with the pairing correlation treated by the PNC method. The PNC pairing adopts the philosophy of the shell model, which diagonalizes the Hamiltonian in a truncated model space. The cranked deformed SHF basis provides a small but efficient model space for the PNC diagonalization. Results: We have applied the present method to the calculations of collective rotations of hafnium isotopes for both ground-state bands and sidebands, reproducing well experimental observations. The first up-bendings observed in the yrast bands of the hafnium isotopes are reproduced, and the second up-bendings are predicted. Calculations for rotational bands built on broken-pair excited configurations agree well with experimental data. The band-mixing between two Kπ=6+ bands observed in 176Hf and the K purity of the 178Hf rotational state built on the famous 31 yr Kπ=16+ isomer are discussed. Conclusions: The developed configuration-constrained cranking calculation has been proved to be a powerful tool to describe both the yrast bands and sidebands of deformed nuclei. The analyses of rotational moments of inertia help to understand the structures of nuclei, including rotational alignments, configurations, and competitions between collective and single-particle excitations.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
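The core of the APS idea is a non-uniform allocation of the Monte Carlo particle budget across pencil-beam spots; a minimal sketch of such an allocation is shown below. The floor fraction that keeps weak spots minimally sampled is an illustrative assumption, not the authors' exact rule.

```python
# Allocate a fixed MC particle budget in proportion to current spot intensities.
import numpy as np

def allocate_particles(spot_intensities, total_particles, floor_fraction=0.05):
    w = np.asarray(spot_intensities, dtype=float)
    w = np.maximum(w, floor_fraction * w.max())      # keep weak spots minimally sampled
    p = w / w.sum()
    counts = np.floor(p * total_particles).astype(int)
    counts[np.argmax(p)] += total_particles - counts.sum()  # hand out the remainder
    return counts

intensities = [0.1, 2.0, 5.0, 0.0, 1.5]              # spot weights from the optimizer
print(allocate_particles(intensities, total_particles=10**6))
```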
Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János
2016-04-01
Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.
Accurate prediction of bond dissociation energies of large n-alkanes using ONIOM-CCSD(T)/CBS methods
NASA Astrophysics Data System (ADS)
Wu, Junjun; Ning, Hongbo; Ma, Liuhao; Ren, Wei
2018-05-01
Accurate determination of the bond dissociation energies (BDEs) of large alkanes is desirable but practically impossible due to the prohibitive cost of high-level ab initio methods. We developed a two-layer ONIOM-CCSD(T)/CBS method that treats the high layer with the CCSD(T) method and the low layer with a DFT method. The accuracy of this method was validated by comparing the calculated BDEs of n-hexane with those obtained at the CCSD(T)-F12b/aug-cc-pVTZ level of theory. On this basis, the C-C BDEs of C6-C20 n-alkanes were calculated systematically using the ONIOM [CCSD(T)/CBS(D-T):M06-2x/6-311++G(d,p)] method, showing good agreement with the data available in the literature.
NASA Astrophysics Data System (ADS)
Alfianto, E.; Rusydi, F.; Aisyah, N. D.; Fadilla, R. N.; Dipojono, H. K.; Martoprawiro, M. A.
2017-05-01
This study implemented the DFT method in the C++ programming language using object-oriented programming rules (expressive software). The use of expressive software results in a simple program structure that closely resembles the mathematical formulation, which will make it easier for the scientific community to develop the software further. We validated our software by calculating the energy band structures of silicon, carbon, and germanium in the FCC structure using the Projector Augmented Wave (PAW) method, and then compared the results to Quantum ESPRESSO calculations. This study shows that the accuracy of the software is 85% compared to Quantum ESPRESSO.
The Hartree-Fock calculation of the magnetic properties of molecular solutes
NASA Astrophysics Data System (ADS)
Cammi, R.
1998-08-01
In this paper we set the formal basis for the calculation of the magnetic susceptibility and of the nuclear magnetic shielding tensors for molecular solutes described within the framework of the polarizable continuum model (PCM). The theory has been developed at the self-consistent field (SCF) level and adapted for use within some of the most widely used computational procedures, i.e., the gauge invariant atomic orbital (GIAO) method and the continuous set of gauge transformations (CSGT) method. Numerical results for the magnetizabilities and chemical shieldings of acetonitrile and nitromethane in various solvents, computed with the PCM-CSGT method, are also presented.
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-05-01
We have developed a simple method for solving the radiation transport equation, permitting us to rapidly calculate (with accuracy acceptable in practice) the diffuse reflection coefficient for a broad class of biological tissues in the spectral region of strong and weak absorption of light, and also the light flux distribution over the depth of the tissue. We show that it is feasible to use the proposed method for quantitative estimates of tissue parameters from its diffuse reflectance spectrum and also for selecting the irradiation dose which is optimal for a specific patient in laser therapy for various diseases.
NASA Astrophysics Data System (ADS)
Ding, Feizhi
Understanding electronic behavior in molecular and nano-scale systems is fundamental to the development and design of novel technologies and materials for application in a variety of scientific contexts, from fundamental research to energy conversion. This dissertation aims to contribute to this goal by developing novel methods and applications of first-principles electronic structure theory. Specifically, we present new methods and applications of excited-state multi-electron dynamics based on the real-time (RT) time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) formalisms, and new developments in multi-configuration self-consistent field (MCSCF) theory for modeling ground-state electronic structure. The RT-TDHF/TDDFT developments and applications fall into three broad and coherently integrated research areas: (1) modeling of the interaction between molecules and external electromagnetic perturbations. In this part we first prove both analytically and numerically the gauge invariance of the TDHF/TDDFT formalisms, then present a novel, efficient method for calculating molecular nonlinear optical properties, and finally study quantum coherent plasmons in metal nanowires using RT-TDDFT; (2) modeling of excited-state charge transfer in molecules. In this part, we investigate the mechanisms of bridge-mediated electron transfer, and then introduce a newly developed non-equilibrium quantum/continuum embedding method for studying charge transfer dynamics in solution; (3) development of first-principles spin-dependent many-electron dynamics. In this part, we present an ab initio non-relativistic spin dynamics method based on the two-component generalized Hartree-Fock approach, then generalize it to the two-component TDDFT framework and combine it with the Ehrenfest molecular dynamics approach for modeling the interaction between electron spins and nuclear motion. All these developments and applications open up new computational and theoretical tools for the study of chemical reactions, nonlinear optics, electromagnetism, and spintronics. Lastly, we present a new algorithm for large-scale MCSCF calculations that can utilize massively parallel machines while still maintaining optimal performance on each processor. This greatly improves the efficiency of MCSCF calculations for studying chemical dissociation and for high-accuracy quantum-mechanical simulations.
NASA Astrophysics Data System (ADS)
Afanasyev, A. P.; Bazhenov, R. I.; Luchaninov, D. V.
2018-05-01
The main purpose of the research is to develop techniques for defining the best technical and economic trajectories of cables in urban power systems. The proposed algorithms for calculating cable-laying routes take into consideration topological, technical and economic features of the cabling. A discrete variant of the fast marching method is applied as the computational tool. It has certain advantages compared to other approaches; in particular, it is computationally cheap because it is non-iterative. The resulting cable-laying trajectories are optimal with respect to technical and economic criteria and comply with present-day rules of urban development.
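A minimal grid-based sketch of the fast marching idea follows, assuming unit grid spacing and a scalar per-cell cost (slowness) standing in for the combined technical and economic criteria; the cost map is an illustrative placeholder.

```python
# First-order fast marching on a regular grid, solving |grad T| = cost.
import heapq
import numpy as np

def fast_marching(cost, source):
    """cost: slowness per cell (time per unit length); returns arrival times T."""
    T = np.full(cost.shape, np.inf)
    T[source] = 0.0
    frozen = np.zeros(cost.shape, dtype=bool)
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < cost.shape[0] and 0 <= nj < cost.shape[1]) or frozen[ni, nj]:
                continue
            tx = min(T[ni - 1, nj] if ni > 0 else np.inf,
                     T[ni + 1, nj] if ni < cost.shape[0] - 1 else np.inf)
            ty = min(T[ni, nj - 1] if nj > 0 else np.inf,
                     T[ni, nj + 1] if nj < cost.shape[1] - 1 else np.inf)
            f = cost[ni, nj]
            if abs(tx - ty) < f:                  # quadratic (two-sided) update
                new = 0.5 * (tx + ty + np.sqrt(2.0 * f * f - (tx - ty) ** 2))
            else:                                  # one-sided update
                new = min(tx, ty) + f
            if new < T[ni, nj]:
                T[ni, nj] = new
                heapq.heappush(heap, (new, (ni, nj)))
    return T

cost = np.ones((50, 50)); cost[20:30, 10:40] = 5.0   # an expensive obstacle band
times = fast_marching(cost, source=(0, 0))
# A route is recovered by gradient descent on `times` from the target cell.
```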
Clothing Protection from Ultraviolet Radiation: A New Method for Assessment.
Gage, Ryan; Leung, William; Stanley, James; Reeder, Anthony; Barr, Michelle; Chambers, Tim; Smith, Moira; Signal, Louise
2017-11-01
Clothing modifies ultraviolet radiation (UVR) exposure from the sun and has an impact on skin cancer risk and the endogenous synthesis of vitamin D. There is no standardized method available for assessing the body surface area (BSA) covered by clothing, which limits generalizability between study findings. We calculated the body coverage provided by 38 clothing items using diagrams of BSA, adjusting the values to account for differences in BSA by age. Diagrams displaying each clothing item were developed and incorporated into a coverage assessment procedure (CAP). Five assessors used the CAP and the Lund & Browder chart, an existing method for estimating BSA, to calculate the clothing coverage of an image sample of 100 schoolchildren. Values of clothing coverage, inter-rater reliability and assessment time were compared between the CAP and Lund & Browder methods. Both methods had excellent inter-rater reliability (>0.90) and returned comparable results, although the CAP method was significantly faster in determining a person's clothing coverage. On balance, the CAP method appears to be a feasible method for calculating clothing coverage. Its use could improve comparability between sun-safety studies and aid in quantifying the health effects of UVR exposure. © 2017 The American Society of Photobiology.
A multispectral imaging approach for diagnostics of skin pathologies
NASA Astrophysics Data System (ADS)
Lihacova, Ilze; Derjabo, Aleksandrs; Spigulis, Janis
2013-06-01
A noninvasive multispectral imaging method was applied to the diagnostics of different skin pathologies such as nevi, basal cell carcinoma, and melanoma. A melanoma diagnostic parameter using three spectral bands (540 nm, 650 nm and 950 nm) was developed and calculated for nevi, melanoma and basal cell carcinoma. A simple multispectral diagnostic device was built and applied to skin assessment. The development and application of the multispectral diagnostic method are described further in this article.
Survey and Experimental Testing of Nongravimetric Mass Measurement Devices
NASA Technical Reports Server (NTRS)
Oakey, W. E.; Lorenz, R.
1977-01-01
The documentation presented describes the design, testing, and evaluation of an accelerated gravimetric balance, a low-mass air-bearing oscillator of the spring-mass type, and a centrifugal device for liquid mass measurement. A direct mass readout method was developed to replace the oscillation-period readout method, which required manual calculations to determine mass. A prototype 25 gram capacity micro mass measurement device was developed and tested.
Ghassemi, Rezwan; Brown, Robert; Narayanan, Sridar; Banwell, Brenda; Nakamura, Kunio; Arnold, Douglas L
2015-01-01
Intensity variation between magnetic resonance images (MRI) hinders comparison of tissue intensity distributions in multicenter MRI studies of brain diseases. The available intensity normalization techniques generally work well in healthy subjects but not in the presence of pathologies that affect tissue intensity. One such disease is multiple sclerosis (MS), which is associated with lesions that prominently affect white matter (WM). Our purpose was to develop a T1-weighted (T1w) image intensity normalization method that is independent of WM intensity, and to quantitatively evaluate its performance. We calculated the median intensity of grey matter and intraconal orbital fat on T1w images. Using these two reference tissue intensities we calculated a linear normalization function and applied it to the T1w images to produce normalized T1w (NT1) images. We assessed the performance of our normalization method with respect to interscanner, interprotocol, and longitudinal variability, and evaluated the utility of the normalization method for lesion analyses in clinical trials. Statistical modeling showed marked decreases in T1w intensity differences after normalization (P < .0001). We developed a WM-independent T1w MRI normalization method and tested its performance. This method is suitable for longitudinal multicenter clinical studies for the assessment of the recovery or progression of disease affecting WM. Copyright © 2014 by the American Society of Neuroimaging.
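The normalization step itself reduces to a linear map through the two reference intensities; the sketch below shows that calculation, with target values and tissue medians as illustrative assumptions rather than the study's calibration.

```python
# Two-point linear intensity normalization of a T1w image.
import numpy as np

def normalize_t1w(image, gm_median, fat_median, gm_target=1000.0, fat_target=2000.0):
    """Map median grey-matter and orbital-fat intensities onto fixed targets."""
    slope = (fat_target - gm_target) / (fat_median - gm_median)
    intercept = gm_target - slope * gm_median
    return slope * image + intercept

img = np.random.default_rng(1).normal(600.0, 50.0, size=(64, 64))  # stand-in image
print(normalize_t1w(img, gm_median=550.0, fat_median=900.0).mean())
```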
Development of congestion performance measures using ITS information.
DOT National Transportation Integrated Search
2003-01-01
The objectives of this study were to define a performance measure(s) that could be used to show congestion levels on critical corridors throughout Virginia and to develop a method to select and calculate performance measures to quantify congestion in...
Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2012-08-01
For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
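A minimal sketch of the space/energy CADIS construction is given below: from a discretized source and adjoint flux it forms the biased source and the consistent weight-window centers. The toy arrays are illustrative; a production implementation operates on the deterministic code's full mesh and group structure.

```python
# Biased source and consistent weight windows from a deterministic adjoint.
import numpy as np

def cadis(q, phi_adj):
    response = np.sum(q * phi_adj)            # estimated detector response R
    q_biased = q * phi_adj / response         # biased source, normalized to 1
    with np.errstate(divide="ignore"):
        ww_centers = response / phi_adj       # birth weight q/q_biased = R/phi_adj
    return q_biased, ww_centers

q = np.array([[0.5, 0.3], [0.2, 0.0]])        # source over 2 cells x 2 groups
phi_adj = np.array([[1e-3, 5e-3], [2e-2, 1e-1]])
qb, ww = cadis(q, phi_adj)
print(qb, ww, sep="\n")                       # particles born in high-importance
                                              # regions start with low weight
```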
Development of Methods for Diagnostics of Discharges in Supersonic Flows
2001-09-01
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
Howard, Brandon A; James, Olga G; Perkins, Jennifer M; Pagnanelli, Robert A; Borges-Neto, Salvador; Reiman, Robert E
2017-01-01
In thyroid cancer patients with renal impairment or other complicating factors, it is important to maximize I-131 therapy efficacy while minimizing bone marrow and lung damage. We developed a web-based calculator based on a modified Benua and Leeper method to calculate the maximum I-131 dose to reduce the risk of these toxicities, based on the effective renal clearance of I-123 as measured from two whole-body I-123 scans, performed at 0 and 24 h post-administration.
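The sketch below illustrates the kind of clearance bookkeeping such a calculator performs: the 0 and 24 h whole-body counts yield an effective half-life, which then scales the administrable activity against a dose limit. The calibration constants are placeholders, not the authors' modified Benua-Leeper coefficients.

```python
# Effective half-life from two whole-body scans, and a dose-limited activity.
import math

def max_activity_mci(counts_0h, counts_24h, dose_limit_factor=1.0):
    retention = counts_24h / counts_0h                         # fraction retained at 24 h
    t_eff = 24.0 * math.log(2.0) / math.log(1.0 / retention)   # effective T1/2, hours
    # Absorbed dose scales roughly with the retention integral (~t_eff), so the
    # tolerable activity scales inversely with it; the reference pair below is
    # a hypothetical calibration, not a clinical constant.
    reference_activity_mci, reference_t_eff = 200.0, 16.0
    return dose_limit_factor * reference_activity_mci * reference_t_eff / t_eff

print(round(max_activity_mci(counts_0h=1.0e6, counts_24h=4.0e5), 1))
```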
Recent developments in multidimensional transport methods for the APOLLO 2 lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zmijarevic, I.; Sanchez, R.
1995-12-31
A usual method of preparation of homogenized cross sections for reactor coarse-mesh calculations is based on two-dimensional multigroup transport treatment of an assembly together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at an MOX-UO2 interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of recently developed multidimensional methods that are currently being tested.
NASA Astrophysics Data System (ADS)
Beecken, B. P.; Fossum, E. R.
1996-07-01
Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine how the accuracy of measured noise depends on the sample size. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
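For context, the sketch below shows the mean-variance (photon transfer) estimate of conversion gain whose sample-size dependence such an analysis addresses; differencing two flat-field frames removes fixed-pattern nonuniformity. The synthetic Poisson frames stand in for measured data, and the estimate's scatter shrinks as the number of sampled pixels grows.

```python
# Mean-variance (photon transfer) conversion-gain estimate in e-/DN.
import numpy as np

def conversion_gain(frame_a, frame_b, dark_level=0.0):
    mean_signal = 0.5 * (frame_a.mean() + frame_b.mean()) - dark_level
    var_signal = np.var(frame_a - frame_b) / 2.0      # FPN cancels in the difference
    return mean_signal / var_signal                    # shot-noise limit: gain = mean/var

rng = np.random.default_rng(2)
electrons = rng.poisson(lam=10000.0, size=(2, 256, 256))   # shot-noise-limited signal
frames = electrons / 4.0                                    # true gain: 4 e-/DN
print(conversion_gain(frames[0], frames[1]))                # ~4.0
```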
Pretest Predictions for Phase II Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiming Sun
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, and concrete pipe walls that will develop during the Phase II ventilation tests involving various test conditions. The results will be used as inputs for validating the numerical approach for modeling continuous ventilation, and to support the repository subsurface design. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the Phase II ventilation tests, and to describe numerical methods that are used to calculate the effects of continuous ventilation. The calculation is limited to thermal effects only. This engineering work activity is conducted in accordance with the ''Technical Work Plan for: Subsurface Performance Testing for License Application (LA) for Fiscal Year 2001'' (CRWMS M&O 2000d). This technical work plan (TWP) includes an AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', activity evaluation (CRWMS M&O 2000d, Addendum A) that has determined this activity is subject to the YMP quality assurance (QA) program. The calculation is developed in accordance with the AP-3.12Q procedure, ''Calculations''. Additional background information regarding this activity is contained in the ''Development Plan for Ventilation Pretest Predictive Calculation'' (DP) (CRWMS M&O 2000a).
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it can be called iteratively by the optimization code provides another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.
NASA Technical Reports Server (NTRS)
Lehoczky, S. L.; Szofran, F. R.; Martin, B. G.
1980-01-01
Mercury cadmium telluride crystals were prepared by the Bridgman method with a wide range of crystal growth rates and temperature gradients adequate to prevent constitutional supercooling under diffusion-limited, steady-state growth conditions. The longitudinal compositional gradients for different growth conditions and alloy compositions were calculated and compared with experimental data to develop a quantitative model of the crystal growth kinetics for the Hg(1-x)CdxTe alloys, and measurements were performed to ascertain the effect of growth conditions on radial compositional gradients. The pseudobinary HgTe-CdTe constitutional phase diagram was determined by precision differential thermal analysis measurements and used to calculate the segregation coefficient of Cd as a function of x and interface temperature. Computer algorithms specific to Hg(1-x)CdxTe were developed for calculations of the charge carrier concentrations, charge carrier mobilities, Hall coefficient, optical absorptance, and Fermi energy as functions of x, temperature, ionized donor and acceptor concentrations, and neutral defect concentrations.
Transient-Free Operations With Physics-Based Real-time Analysis and Control
NASA Astrophysics Data System (ADS)
Kolemen, Egemen; Burrell, Keith; Eggert, William; Eldon, David; Ferron, John; Glasser, Alex; Humphreys, David
2016-10-01
In order to understand and predict disruptions, the two most common methods currently employed in tokamak analysis are the time-consuming ``kinetic EFITs,'' which are done offline with significant human involvement, and the search for correlations with global precursors using various parameterization techniques. We are developing automated ``kinetic EFITs'' at DIII-D to enable calculation of the stability as the plasma evolves close to the disruption. This allows us to quantify the probabilistic nature of the stability calculations and provides a stability metric for all possible linear perturbations to the plasma. This study also provides insight into how the control system can avoid the unstable operating space, which is critical for high-performance operations close to stability thresholds at ITER. A novel, efficient ideal stability calculation method and new real-time CER acquisition system are being developed, and a new 77-core server has been installed on the DIII-D PCS to enable experimental use. Sponsored by US DOE under DE-SC0015878 and DE-FC02-04ER54698.
Development of Water Softening Method of Intake in Magnitogorsk
NASA Astrophysics Data System (ADS)
Meshcherova, E. A.; Novoselova, J. N.; Moreva, J. A.
2017-11-01
This article contains an appraisal of the drinking water quality at the Magnitogorsk intake. A water analysis was made, which led to the conclusion that the standard for total water hardness is exceeded. As a result, it became necessary to develop a number of measures to reduce water hardness. To solve this problem, all the necessary studies of the factors affecting the increased water hardness were carried out, and a water softening method using an ion exchange filter was proposed. The calculation of the cation-exchanger filling volume of the proposed filter is given in the article, and its overall dimensions are selected. The calculations were confirmed by the results of laboratory studies using a test installation. The research and laboratory test results lead the authors to conclude that the proposed method should be used to obtain softened water meeting the requirements of SanPiN.
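A minimal sketch of the sizing calculation described above, under assumed flow, hardness and resin-capacity figures rather than the article's data:

```python
# Cation-exchanger volume from flow, hardness removal and working capacity.
def resin_volume_m3(flow_m3_per_h, hardness_in, hardness_out,
                    hours_between_regenerations, working_capacity=800.0):
    """Hardness in g-eq/m3 (= meq/L); working capacity in g-eq per m3 of resin."""
    removed_per_hour = flow_m3_per_h * (hardness_in - hardness_out)
    return removed_per_hour * hours_between_regenerations / working_capacity

# 50 m3/h softened from 9.5 to 3.5 meq/L, regenerated once a day:
print(resin_volume_m3(50.0, 9.5, 3.5, 24.0))   # => 9.0 m3 of cation exchanger
```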
NASA Astrophysics Data System (ADS)
Zammit, Mark C.; Fursa, Dmitry V.; Savage, Jeremy S.; Bray, Igor
2017-06-01
Starting from first principles, this tutorial describes the development of the adiabatic-nuclei convergent close-coupling (CCC) method and its application to electron and (single-centre) positron scattering from diatomic molecules. We give full details of the single-centre expansion CCC method, namely the formulation of the molecular target structure; solving the momentum-space coupled-channel Lippmann-Schwinger equation; deriving adiabatic-nuclei cross sections and calculating V-matrix elements. Selected results are presented for electron and positron scattering from molecular hydrogen H2 and electron scattering from the vibrationally excited molecular hydrogen ion H2+ and its isotopologues (D2+, T2+, HD+, HT+ and TD+). Convergence in both the close-coupling (target state) and projectile partial-wave expansions of fixed-nuclei electron- and positron-molecule scattering calculations is demonstrated over a broad energy range and discussed in detail. In general, the CCC results are in good agreement with experiments.
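For reference, the momentum-space coupled-channel Lippmann-Schwinger equation has the generic partial-wave structure shown below (schematic notation in atomic units, with channel energies ε_n and angular-momentum indices suppressed; this is the textbook form, not the tutorial's full multichannel notation):

```latex
T_{fi}(k_f,k_i) \;=\; V_{fi}(k_f,k_i)
\;+\; \sum_{n}\int_0^\infty \mathrm{d}k\,k^2\,
\frac{V_{fn}(k_f,k)\,T_{ni}(k,k_i)}{E + i0 - \epsilon_n - k^2/2}
```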
Enhanced calculation of eigen-stress field and elastic energy in atomistic interdiffusion of alloys
NASA Astrophysics Data System (ADS)
Cecilia, José M.; Hernández-Díaz, A. M.; Castrillo, Pedro; Jiménez-Alonso, J. F.
2017-02-01
The structural evolution of alloys is affected by the elastic energy associated with eigen-stress fields. However, efficient calculation of the elastic energy in evolving geometries remains a great challenge for promising atomistic simulation techniques such as Kinetic Monte Carlo (KMC) methods. In this paper, we report two complementary algorithms: one to calculate the eigen-stress field by linear superposition (the Linear Superposition Algorithm, LSA) and one to calculate the elastic energy modification in atomistic interdiffusion of alloys (the Atom Exchange Elastic Energy Evaluation (AE4) Algorithm). LSA is shown to be appropriate for fast incremental stress calculation in highly nanostructured materials, whereas AE4 provides the required input for KMC and, additionally, can be used to evaluate the accuracy of the eigen-stress field calculated by LSA. Consequently, both are suitable for on-the-fly use with KMC. Both algorithms are massively parallel by definition and thus well suited for parallelization on modern Graphics Processing Units (GPUs). Our computational studies confirm significant improvements compared to conventional Finite Element Methods, and the utilization of GPUs opens up new possibilities for the development of these methods in atomistic simulation of materials.
NASA Astrophysics Data System (ADS)
Yulkifli; Afandi, Zurian; Yohandri
2018-04-01
A gravitational acceleration measurement system using the simple-harmonic-motion pendulum method, digital technology and a photogate sensor has been developed. Digital technology is more practical and optimizes the duration of the experiment. The pendulum method calculates the acceleration of gravity using a solid ball connected by a cord to a stative pole. The pendulum is swung at a small angle, producing simple harmonic motion. The measurement system consists of a power supply, photogate sensors, an Arduino Pro Mini and a seven-segment display. The Arduino Pro Mini receives digital data from the photogate sensor and processes it into the timing data of the pendulum oscillation, and the calculated oscillation time is shown on the seven-segment display. Based on the measured data, the accuracy and precision of the experimental system are 98.76% and 99.81%, respectively, so the system can be used in physics experiments, especially for determining the gravitational acceleration.
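The underlying relation is the small-angle pendulum period T = 2π√(L/g), so g = 4π²L/T²; a minimal sketch with hypothetical photogate timings rather than the authors' data:

```python
import math
import statistics

def g_from_pendulum(length_m, period_s):
    """Simple-harmonic pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L/T^2."""
    return 4.0 * math.pi ** 2 * length_m / period_s ** 2

# Hypothetical photogate periods (s) for a 0.50 m pendulum.
periods = [1.419, 1.421, 1.418, 1.422, 1.420]
g_vals = [g_from_pendulum(0.50, T) for T in periods]
g_mean = statistics.mean(g_vals)
accuracy = (1 - abs(g_mean - 9.81) / 9.81) * 100           # vs reference g
precision = (1 - statistics.stdev(g_vals) / g_mean) * 100  # repeatability
print(f"g = {g_mean:.3f} m/s^2, accuracy {accuracy:.2f}%, "
      f"precision {precision:.2f}%")
```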
Heat Exchange in “Human body - Thermal protection - Environment” System
NASA Astrophysics Data System (ADS)
Khromova, I. V.
2017-11-01
This article is devoted to the simulation and calculation of thermal processes in the system “Human body - Thermal protection - Environment” under low-temperature conditions. It considers internal heat sources and convective heat transfer between calculated elements, which is important for heat transfer theory in general. The article introduces a complex heat transfer calculation method and a method for calculating local thermophysical parameters in the “Human body - Thermal protection - Environment” system, considering passive and active thermal protection and the thermophysical and geometric properties of the calculated elements over a wide range of environmental parameters (water, air). It also includes research on how the thermal resistance of modern materials used in the development of special protective clothing influences heat transfer in the system. Analysis of the obtained results allows computational data to supplement experiments and enables optimization of individual elements of life-support systems intended to protect the human body from exposure to external factors.
Cardiovascular risk assessment in rheumatoid arthritis – controversies and the new approach
Głuszko, Piotr
2016-01-01
The current methods of cardiovascular (CV) risk assessment in the course of inflammatory connective tissue diseases are a subject of considerable controversy. Comparing different methods of CV risk assessment in current rheumatoid arthritis (RA) guidelines, only a few recommend the use of formal risk calculators: the EULAR guidelines, which suggest the use of SCORE, and the British Society for Rheumatology guidelines, developed in collaboration with NICE, which prefer QRISK-2. Analyzing the latest American and British reports, two main concepts can be identified. The first is to focus on risk calculators developed for the general population that take RA into account; the calculator that might fulfill this role is the new QRISK-2 presented by NICE in 2014. The second is to create RA-specific risk calculators, such as the Expanded Cardiovascular Risk Prediction Score for RA. In this review we also discuss the efficiency of the new Pooled Cohort Equation and other calculators in the general and RA populations. PMID:27504023
Enzymatic Kinetic Isotope Effects from First-Principles Path Sampling Calculations.
Varga, Matthew J; Schwartz, Steven D
2016-04-12
In this study, we develop and test a method to determine the rate of particle transfer and kinetic isotope effects in enzymatic reactions, specifically yeast alcohol dehydrogenase (YADH), from first principles. Transition path sampling (TPS) and normal mode centroid dynamics (CMD) are used to simulate these enzymatic reactions without knowledge of their reaction coordinates and with the inclusion of quantum effects, such as zero-point energy and tunneling, on the transferring particle. Though previous studies have used TPS to calculate reaction rate constants in various model and real systems, it has not been applied to a system as large as YADH. The calculated primary H/D kinetic isotope effect agrees with previously reported experimental results, within experimental error. The kinetic isotope effects calculated with this method correspond to the kinetic isotope effect of the transfer event itself. The results reported here show that kinetic isotope effects calculated from first principles, purely for barrier passage, can be used to predict experimental kinetic isotope effects in enzymatic systems.
Comparison Of Reaction Barriers In Energy And Free Energy For Enzyme Catalysis
NASA Astrophysics Data System (ADS)
Andrés Cisneros, G.; Yang, Weitao
Reaction paths on potential energy surfaces obtained from QM/MM calculations of enzymatic or solution reactions depend on the starting structure employed for the path calculations. The free energies associated with these paths should be more reliable for studying reaction mechanisms, because statistical averages are used. To investigate this, the role of enzyme environment fluctuations on reaction paths has been studied with an ab initio QM/MM method for the first step of the reaction catalyzed by 4-oxalocrotonate tautomerase (4OT). Four minimum energy paths (MEPs), determined with two different methods, are compared. The first path (path A) was determined with a procedure that combines the nudged elastic band (NEB) method and a second-order parallel path optimizer recently developed in our group. The second path (path B) was determined with the same combined procedure, but with the enzyme environment relaxed by molecular dynamics (MD) simulations. The third path (path C) was determined with the coordinate driving (CD) method, using the enzyme environment from path B. We compare these three paths to a previously published path (path D) obtained with the CD method. The QM/MM-FE method (Y. Zhang et al., JCP, 112, 3483) was employed to obtain the free energy barriers for all four paths. In the combined procedure, the reaction path is approximated by a small number of images which are optimized to the MEP in parallel; this reduces the computational cost but does not allow the FEP calculation on the MEP. In order to perform FEP calculations on these paths, we introduce a modification to the NEB method that enables the addition of as many extra images to the path as needed for the FEP calculations. The calculated potential energy barriers differ between the paths by as much as 5.17 kcal/mol, whereas the largest free energy barrier difference is 1.58 kcal/mol. These results show the importance of including environment fluctuations in the calculation of enzymatic activation barriers.
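For context, the standard NEB force projection referred to above can be sketched as follows (a common textbook variant with a simple central-difference tangent; not the group's second-order parallel optimizer):

```python
import numpy as np

def neb_forces(images, grad, k_spring=1.0):
    """Nudged elastic band: each interior image feels the true force with
    its parallel component removed, plus a spring force directed along the
    local tangent. images: (N, dof) array; grad returns dV/dR for one image."""
    forces = np.zeros_like(images)
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]           # central-difference tangent
        tau /= np.linalg.norm(tau)
        f_true = -grad(images[i])
        f_perp = f_true - np.dot(f_true, tau) * tau   # transverse true force
        f_spring = k_spring * (np.linalg.norm(images[i + 1] - images[i])
                               - np.linalg.norm(images[i] - images[i - 1])) * tau
        forces[i] = f_perp + f_spring
    return forces

# Toy 2D potential V = x^2 + 5*y^2 with a straight initial band.
grad = lambda r: np.array([2.0 * r[0], 10.0 * r[1]])
band = np.linspace([-1.0, 0.5], [1.0, 0.5], 7)
print(neb_forces(band, grad))
```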
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data, and may therefore not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method is developed for the efficiency estimation of tray distillation columns. The method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over the available methods. It can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is distinct from that of a column being designed.
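The tray efficiency being estimated is conventionally expressed as the Murphree vapor-phase efficiency; a minimal sketch of the definition with hypothetical compositions:

```python
def murphree_vapor_efficiency(y_leaving, y_entering, y_equilibrium):
    """E_MV = (y_n - y_{n+1}) / (y_n* - y_{n+1}): the actual vapor enrichment
    achieved on a tray divided by the enrichment that would occur if the
    vapor left in equilibrium with the liquid on that tray."""
    return (y_leaving - y_entering) / (y_equilibrium - y_entering)

# Hypothetical mole fractions of the light component around one tray.
print(f"E_MV = {murphree_vapor_efficiency(0.58, 0.50, 0.62):.2f}")  # -> 0.67
```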
Calculating the Responses of Self-Powered Radiation Detectors.
NASA Astrophysics Data System (ADS)
Thornton, D. A.
Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self-Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement, explanations have been sought and presented. Two major limitations of analytic models have been identified: neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model, based on the Explicit Representation of Radiation Sources and Transport (ERRST), is presented and evaluated for several SPDs in a PWR at the beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field, as well as the internal charge deposition effects of the transport of photons and electrons, has been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.
A vortex wake capturing method for potential flow calculations
NASA Technical Reports Server (NTRS)
Murman, E. M.; Stremel, P. M.
1982-01-01
A method is presented for modifying finite difference solutions of the potential equation to include the calculation of non-planar vortex wake features. The approach is an adaptation of Baker's 'cloud in cell' algorithm developed for the stream function-vorticity equations. The vortex wake is tracked in a Lagrangian frame of reference as a group of discrete vortex filaments. These are distributed to the Eulerian mesh system on which the velocity is calculated by a finite difference solution of the potential equation. An artificial viscosity introduced by the finite difference equations removes the singular nature of the vortex filaments. Computed examples are given for the two-dimensional time dependent roll-up of vortex wakes generated by wings with different spanwise loading distributions.
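The core of the 'cloud in cell' step is bilinear (area-weighted) deposition of each Lagrangian filament's circulation onto the four surrounding Eulerian nodes; a minimal sketch under an assumed uniform mesh spacing:

```python
import numpy as np

def deposit_vortices(xs, ys, gammas, nx, ny, h):
    """Spread each discrete vortex filament's circulation onto the four
    surrounding mesh nodes with bilinear (area) weights, then convert
    circulation to vorticity by dividing by the cell area h^2."""
    omega = np.zeros((nx, ny))
    for x, y, g in zip(xs, ys, gammas):
        i, j = int(x // h), int(y // h)    # lower-left node of containing cell
        fx, fy = x / h - i, y / h - j      # fractional position within cell
        omega[i, j]         += g * (1 - fx) * (1 - fy)
        omega[i + 1, j]     += g * fx * (1 - fy)
        omega[i, j + 1]     += g * (1 - fx) * fy
        omega[i + 1, j + 1] += g * fx * fy
    return omega / h ** 2

# Two hypothetical filaments of opposite circulation on an 8x8 unit mesh.
print(deposit_vortices([2.3, 5.7], [4.1, 4.1], [1.0, -1.0], nx=8, ny=8, h=1.0))
```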
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
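A minimal sketch of the binomial detection-reliability calculation described above, using the exact (Clopper-Pearson) one-sided lower confidence bound; the hit/miss counts are hypothetical:

```python
from scipy.stats import beta

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower confidence bound on the probability of detection
    from binomial hit/miss data (Clopper-Pearson exact bound)."""
    if detections == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

# Hypothetical eddy-current inspection data: 28 of 29 cracks detected.
print(f"POD point estimate {28/29:.3f}, "
      f"95% lower bound {pod_lower_bound(28, 29):.3f}")
```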
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, J.
The report identifies, compares, and evaluates the major methods developed in Europe and North America to predict noise levels resulting from urban-development projects. It is intended to guide countries that have not yet developed their own noise-prediction models in choosing the model most appropriate for their particular situation. It covers prediction methods for road traffic noise and railroad traffic noise in Austria, Czechoslovakia, France, both Germanys, Hungary, the Netherlands, Scandinavia, Switzerland, the U.K. and the USA, as well as the Commission of the European Communities, and a comparison of methods. It also covers prediction methods for industrial noise from Austria, both Germanys, the Netherlands, Scandinavia, and the U.K., and discusses calculation methods for aircraft noise around airports.
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change of the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
Theoretical study of hull-rotor aerodynamic interference on semibuoyant vehicles
NASA Technical Reports Server (NTRS)
Spangler, S. B.; Smith, C. A.
1978-01-01
Analytical methods are developed to predict the pressure distribution and overall loads on the hulls of airships which have close coupled, relatively large and/or high disk loading propulsors for attitude control, station keeping, and partial support of total weight as well as provision of thrust in cruise. The methods comprise a surface-singularity, potential-flow model for the hull and lifting surfaces (such as tails) and a rotor model which calculates the velocity induced by the rotor and its wake at points adjacent to the wake. Use of these two models provides an inviscid pressure distribution on the hull with rotor interference. A boundary layer separation prediction method is used to locate separation on the hull, and a wake pressure is imposed on the separated region for purposes of calculating hull loads. Results of calculations are shown to illustrate various cases of rotor-hull interference and comparisons with small scale data are made to evaluate the method.
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
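As a simple illustration of the kind of geometric routine involved, the volume of a closed polyhedron can be computed by fan-triangulating each face and summing signed tetrahedron volumes via the divergence theorem; this is a generic sketch, not VOFTools' quadrilateral-decomposition formula:

```python
import numpy as np

def polyhedron_volume(vertices, faces):
    """Volume of a closed polyhedron: fan-triangulate each face (vertices
    ordered counter-clockwise seen from outside) and sum the signed
    tetrahedron volumes v0.(v1 x v2)/6 taken against the origin."""
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for face in faces:
        for a, b in zip(face[1:-1], face[2:]):
            vol += np.dot(v[face[0]], np.cross(v[a], v[b]))
    return vol / 6.0

# Unit cube with outward-oriented faces: expect 1.0.
verts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
faces = [[0,3,2,1],[4,5,6,7],[0,1,5,4],[1,2,6,5],[2,3,7,6],[3,0,4,7]]
print(polyhedron_volume(verts, faces))  # -> 1.0
```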
NASA Astrophysics Data System (ADS)
Wolff, Andrzej
2010-01-01
The temperature of the brake friction surface significantly influences braking effectiveness. The paper describes the heat transfer process in car brakes. Using a finite element method program developed by the author, the temperature distributions in the brake rotors (disc and drum brake) of a light truck have been calculated. As a preliminary consistency criterion for the brake thermal state in road and roll-stand braking conditions, a balance of the energy accumulated in the brake rotor has been taken into account. As the most reliable consistency criterion, equality of the average temperatures of the friction surface has been assumed. The presented method makes it possible to reproduce on a roll-stand thermal states of automotive brakes analogous to those observed during braking in road conditions. Based on this method, it is possible to calculate the braking time and force for a high-speed roll-stand. In contrast to the author's previous papers, new calculation results are presented.
Musil, Karel; Florianova, Veronika; Bucek, Pavel; Dohnal, Vlastimil; Kuca, Kamil; Musilek, Kamil
2016-01-05
Acetylcholinesterase reactivators (oximes) are compounds used for antidotal treatment in cases of organophosphorus poisoning. The dissociation constants (pK(a1)) of ten standard or promising acetylcholinesterase reactivators were determined by ultraviolet absorption spectrometry. Two methods of spectra measurement (UV-vis spectrometry, FIA/UV-vis) were applied and compared, and soft and hard models for the calculation of pK(a1) values were employed. The recommended pK(a1) range is 7.00-8.35, where at least 10% of the oximate anion is available for organophosphate reactivation; all tested oximes were found to have pK(a1) in this range. The FIA/UV-vis method provided rapid sample throughput, low sample consumption, and high sensitivity and precision compared to the standard UV-vis method. The hard calculation model was found to be more accurate for the pK(a1) calculation. Copyright © 2015 Elsevier B.V. All rights reserved.
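A minimal sketch of how a pK(a1) value is typically extracted from absorbance-versus-pH data through a Henderson-Hasselbalch sigmoid fit; the titration points below are hypothetical, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def absorbance(pH, A_HA, A_A, pKa):
    """Single-wavelength absorbance as a weighted average of the protonated
    (A_HA) and oximate (A_A) forms via the Henderson-Hasselbalch relation."""
    frac_anion = 1.0 / (1.0 + 10.0 ** (pKa - pH))
    return A_HA + (A_A - A_HA) * frac_anion

# Hypothetical titration data (pH, absorbance) for illustration only.
pH = np.array([5.5, 6.5, 7.0, 7.5, 8.0, 8.5, 9.5])
A  = np.array([0.10, 0.14, 0.22, 0.35, 0.48, 0.56, 0.60])
popt, _ = curve_fit(absorbance, pH, A, p0=[0.1, 0.6, 7.5])
print(f"fitted pK_a1 = {popt[2]:.2f}")
```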
NASA Astrophysics Data System (ADS)
Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Tuyen, Nguyen Viet; Hai, Tran Thi; Hieu, Ho Khac
2018-03-01
The thermodynamic and mechanical properties of III-V zinc-blende AlP and InP semiconductors and their alloys have been studied in detail using the statistical moment method, taking into account the anharmonicity of the lattice vibrations. The nearest-neighbor distance, thermal expansion coefficient, bulk moduli, and specific heats at constant volume and constant pressure of zinc-blende AlP, InP and AlyIn1-yP alloys are calculated as functions of temperature. The statistical moment method calculations are performed using the many-body Stillinger-Weber potential. The concentration dependences of the thermodynamic quantities of zinc-blende AlyIn1-yP crystals have also been discussed and compared with experimental results. Our results are in reasonable agreement with earlier density functional theory calculations and can provide useful qualitative information for future experiments. The moment method can then be extended to study the atomistic structure and thermodynamic properties of nanoscale materials as well.
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm, and OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
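A single-node, dense toy of the underlying idea (not NTPoly's distributed sparse implementation): computing a matrix function through a truncated polynomial expansion, here a Taylor polynomial for exp(A) evaluated with Horner's rule:

```python
import numpy as np

def expm_taylor(A, order=20):
    """exp(A) via a truncated Taylor polynomial evaluated by Horner's rule:
    p(A) = I + A(I + A/2 (I + A/3 (...))). Adequate for modest ||A||; sparse
    linear-scaling codes apply the same polynomial idea with sparse products."""
    n = A.shape[0]
    P = np.eye(n)
    for k in range(order, 0, -1):
        P = np.eye(n) + (A / k) @ P
    return P

A = np.array([[0.0, 0.1],
              [0.1, 0.0]])
print(expm_taylor(A))  # compare against scipy.linalg.expm if available
```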
Numerical noise prediction in fluid machinery
NASA Astrophysics Data System (ADS)
Pantle, Iris; Magagnato, Franco; Gabi, Martin
2005-09-01
Numerical methods have become increasingly important in the design and optimization of fluid machinery. However, where noise emission is concerned, one can hardly find standardized prediction methods combining flow and acoustical optimization. Several numerical field methods for sound calculation have been developed, but due to the complexity of the flows considered, approaches must be chosen that avoid exhaustive computing. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent Large Eddy Simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties which arise when proceeding from open to closed rotors and from gas to liquid are discussed.
NASA Astrophysics Data System (ADS)
Bykov, Andrei M.; Toptygin, Igor' N.
1993-11-01
This review presents methods available for calculating transport coefficients for impurity particles in plasmas with strong long-wave MHD-type velocity and magnetic-field fluctuations, and random ensembles of strong shock fronts. The renormalization of the coefficients of the mean-field equation of turbulent dynamo theory is also considered. Particular attention is devoted to the renormalization method developed by the authors in which the renormalized transport coefficients are calculated from a nonlinear transcendental equation (or a set of such equations) and are expressed in the form of explicit functions of pair correlation tensors describing turbulence. Numerical calculations are reproduced for different turbulence spectra. Spatial transport in a magnetic field and particle acceleration by strong turbulence are investigated. The theory can be used in a wide range of practical problems in plasma physics, atmospheric physics, ocean physics, astrophysics, cosmic-ray physics, and so on.
NASA Astrophysics Data System (ADS)
Yannopapas, Vassilios; Paspalakis, Emmanuel
2018-07-01
We present a new theoretical tool for simulating optical trapping of nanoparticles in the presence of an arbitrary metamaterial design. The method is based on rigorously solving Maxwell's equations for the metamaterial via a hybrid discrete-dipole approximation/multiple-scattering technique and direct calculation of the optical force exerted on the nanoparticle by means of the Maxwell stress tensor. We apply the method to the case of a spherical polystyrene probe trapped within the optical landscape created by illumination of a plasmonic metamaterial consisting of periodically arranged tapered metallic nanopyramids. The developed technique is ideally suited for general optomechanical calculations involving metamaterial designs and can compete with purely numerical methods such as finite-difference or finite-element schemes.
First-Principles Lattice Dynamics Method for Strongly Anharmonic Crystals
NASA Astrophysics Data System (ADS)
Tadano, Terumasa; Tsuneyuki, Shinji
2018-04-01
We review our recent development of a first-principles lattice dynamics method that can treat anharmonic effects nonperturbatively. The method is based on the self-consistent phonon theory, and temperature-dependent phonon frequencies can be calculated efficiently by incorporating recent numerical techniques to estimate anharmonic force constants. The validity of our approach is demonstrated through applications to cubic strontium titanate, where overall good agreement with experimental data is obtained for phonon frequencies and lattice thermal conductivity. We also show the feasibility of highly accurate calculations based on a hybrid exchange-correlation functional within the present framework. Our method provides a new way of studying lattice dynamics in severely anharmonic materials where the standard harmonic approximation and the perturbative approach break down.
Accelerating wavefunction in density-functional-theory embedding by truncating the active basis set
NASA Astrophysics Data System (ADS)
Bennie, Simon J.; Stella, Martina; Miller, Thomas F.; Manby, Frederick R.
2015-07-01
Methods where an accurate wavefunction is embedded in a density-functional description of the surrounding environment have recently been simplified through the use of a projection operator to ensure orthogonality of orbital subspaces. Projector embedding already offers significant performance gains over conventional post-Hartree-Fock methods by reducing the number of correlated occupied orbitals. However, in our first applications of the method, we used the atomic-orbital basis for the full system, even for the correlated wavefunction calculation in a small, active subsystem. Here, we further develop our method for truncating the atomic-orbital basis to include only functions within or close to the active subsystem. The number of atomic orbitals in a calculation on a fixed active subsystem becomes asymptotically independent of the size of the environment, producing the required O(N^0) scaling of cost of the calculation in the active subsystem, and accuracy is controlled by a single parameter. The applicability of this approach is demonstrated for the embedded many-body expansion of binding energies of water hexamers and calculation of reaction barriers of SN2 substitution of fluorine by chlorine in α-fluoroalkanes.
Pretest Predictions for Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Sun; H. Yang; H.N. Kalia
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will develop during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) decisions regarding testing set-up and performance; (2) assessing how best to scale the test phenomena measured; (3) validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and to develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only.
Error rate of automated calculation for wound surface area using a digital photography.
Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H
2018-02-01
Although measuring wound size using digital photography is a quick and simple way to evaluate a skin wound, its reliability has not been fully validated. The objective was to investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 for the DSLR vs 8.1303±4.8236 for the smartphone). However, for wound diameters below 3 cm, the REs of the average values of four photographs were below 5%, and there was no difference between the average wound areas obtained with the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to a set of photographs, and their average value is a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
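The error metric here is a plain relative error against the known defect area of the skin model; a minimal sketch with hypothetical measurements:

```python
import math

def relative_error_pct(calculated_area, true_area):
    """Relative error (RE) of an automatically calculated wound area
    against the known area of the skin-model defect, in percent."""
    return abs(calculated_area - true_area) / true_area * 100.0

# Hypothetical example: four photographs of a 2.0 cm diameter defect
# (true area = pi * 1.0**2 cm^2), averaged before computing the RE.
true_area = math.pi * 1.0 ** 2
measurements = [3.05, 3.22, 3.18, 3.11]
avg = sum(measurements) / len(measurements)
print(f"RE of averaged area: {relative_error_pct(avg, true_area):.2f}%")
```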
ERIC Educational Resources Information Center
O'Rourke, John; Main, Susan; Hill, Susan M.
2017-01-01
In this paper we report on a study of the implementation of handheld game consoles (HGCs) in 10 Year four/five classrooms to develop student automaticity of mathematical calculations. The automaticity of mathematical calculations was compared for those students using the HGC and those being taught using traditional teaching methods. Over a school…
NASA Astrophysics Data System (ADS)
Aigyl Ilshatovna, Sabirova; Svetlana Fanilevna, Khasanova; Vildanovna, Nagumanova Regina
2018-05-01
On the basis of decision-making theory (minimax and maximin approaches), the authors propose a technique and present calculated critical values of effectiveness indicators for agricultural producers in the Republic of Tatarstan for 2013-2015. The necessity of monitoring the effectiveness of state support, and directions for its improvement, are justified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Özdemir, Semra Bayat; Demiralp, Metin
The determination of energy states is a highly studied issue in quantum mechanics. Energy states can be observed through the dynamics of expectation values, but the conditions and calculations vary depending on the system under consideration. In this work, a symmetric exponential anharmonic oscillator is considered and a recursive approximation method is developed to find its ground-state energy. The use of majorant values facilitates the approximate calculation of the expectation values.
Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach
NASA Astrophysics Data System (ADS)
Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan
2017-11-01
The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation onto the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. A scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation is shown to outperform ONIOM at each combination of levels of theory, by 32% to 70%.
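For context, the two-layer ONIOM extrapolation underlying this discussion combines three separate calculations; a minimal sketch with hypothetical energies:

```python
def oniom2_energy(e_high_model, e_low_real, e_low_model):
    """Two-layer ONIOM: E = E_high(model) + E_low(real) - E_low(model),
    approximating a high-level treatment of the full (real) system from
    one high-level model calculation and two low-level calculations."""
    return e_high_model + e_low_real - e_low_model

# Hypothetical energies in hartree, for illustration only.
print(oniom2_energy(-115.472, -230.118, -114.901))
```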
Validation of cardiac accelerometer sensor measurements.
Remme, Espen W; Hoff, Lars; Halvorsen, Per Steinar; Naerum, Edvard; Skulstad, Helge; Fleischer, Lars A; Elle, Ole Jakob; Fosse, Erik
2009-12-01
In this study we have investigated the accuracy of an accelerometer sensor designed for the measurement of cardiac motion and automatic detection of motion abnormalities caused by myocardial ischaemia. The accelerometer, attached to the left ventricular wall, changed its orientation relative to the direction of gravity during the cardiac cycle. This caused a varying gravity component in the measured acceleration signal that introduced an error in the calculation of myocardial motion. Circumferential displacement, velocity and rotation of the left ventricular apical region were calculated from the measured acceleration signal. We developed a mathematical method to separate translational and gravitational acceleration components based on a priori assumptions of myocardial motion. The accuracy of the measured motion was investigated by comparison with known motion of a robot arm programmed to move like the heart wall. The accuracy was also investigated in an animal study. The sensor measurements were compared with simultaneously recorded motion from a robot arm attached next to the sensor on the heart and with measured motion by echocardiography and a video camera. The developed compensation method for the varying gravity component improved the accuracy of the calculated velocity and displacement traces, giving very good agreement with the reference methods.
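A minimal sketch of the gravity-compensation idea, under the simplifying assumption that the sensor's roll and pitch are known at each sample (the study itself infers the varying orientation from a priori assumptions about myocardial motion); accelerometer sign conventions vary between devices:

```python
import numpy as np

def remove_gravity(acc_meas, roll, pitch, g=9.81):
    """Subtract the gravity component sensed by a tilted accelerometer.
    roll/pitch in radians describe the sensor orientation relative to a
    z-up world frame; the result approximates translational acceleration."""
    # Gravity vector expressed in the sensor frame for this convention.
    g_sensor = g * np.array([-np.sin(pitch),
                             np.sin(roll) * np.cos(pitch),
                             np.cos(roll) * np.cos(pitch)])
    return np.asarray(acc_meas) - g_sensor

# A sensor tilted 10 degrees in pitch but otherwise at rest should read
# approximately zero translational acceleration after compensation.
tilt = np.deg2rad(10.0)
at_rest = 9.81 * np.array([-np.sin(tilt), 0.0, np.cos(tilt)])
print(remove_gravity(at_rest, roll=0.0, pitch=tilt))  # -> ~[0, 0, 0]
```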
Stability and ionic mobility in argyrodite-related lithium-ion solid electrolytes.
Chen, Hao Min; Maohua, Chen; Adams, Stefan
2015-07-07
In the search for fast lithium-ion conducting solids for the development of safe rechargeable all-solid-state batteries with high energy density, thiophosphates and related compounds have been demonstrated to be particularly promising, both because of their record ionic conductivities and because of their typically low charge transfer resistances. In this work we explore a wide range of known and predicted thiophosphates, with a particular focus on the cubic argyrodite phase with its robust three-dimensional network of ion migration pathways. Structural and hydrolysis stability are calculated employing a density functional method in combination with a generally applicable method of predicting the relevant critical reaction. The activation energy for ion migration in these argyrodites is then calculated using the empirical bond valence pathway method developed in our group, while bandgaps of selected argyrodites are calculated as a basis for assessing the electrochemical window. Findings for the lithium compounds are also compared to those for previously known copper argyrodites and hypothetical sodium argyrodites. From these results, guidelines for experimental work are derived to yield phases with the optimum balance between chemical stability and ionic conductivity in the search for practical lithium and sodium solid electrolyte materials.
NASA Astrophysics Data System (ADS)
Bai, Jianhui; Wang, Gengchen
2003-09-01
On the basis of observational data on solar radiation, meteorological parameters, and total ozone amount for the period January 1990 to December 1991 in the Beijing area, an empirical method for calculating clear-sky ultraviolet (UV) radiation is obtained. The results show that the calculated values agree well with observations, with a maximum relative bias of 6.2% and a mean relative bias over 24 months of 1.9%. Good results are also obtained when the method is applied to the Guangzhou and Mohe districts. The long-term variation of clear-sky UV radiation over the Beijing area from 1979 to 1998 is calculated, and the UV variation trends and their causes are discussed: direct and indirect UV energy absorption by increasing pollutants in the troposphere may have caused the clear-sky UV decrease over the last 20 years. With the improvement of people's quality of life and awareness of health, it will be valuable and practical to provide UV forecasts for typical cities and rural areas. UV research in China should therefore be strengthened through systematic monitoring, forecasting, and the development of a sound and feasible method for UV radiation reporting, especially for big cities.
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard
2007-08-01
In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. A NPT simulation at P_CJ and T_CJ is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: calorific capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several common energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.
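The Hugoniot constraint enforced by the AE-EOS step is the standard Rankine-Hugoniot energy relation between the initial (subscript 0) and shocked states:

```latex
e - e_0 = \tfrac{1}{2}\,(P + P_0)\,(v_0 - v)
```

where e is the specific internal energy, P the pressure, and v = 1/ρ the specific volume.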
Effect of Reynolds number and turbulence on airfoil aerodynamics at -90-degree incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1994-01-01
A method has been developed for calculating the viscous flow about airfoils, with and without deflected flaps, at -90 deg incidence. It provides for the solution of the unsteady incompressible Navier-Stokes equations by means of an implicit technique. The solution is calculated on a body-fitted computational mesh using a staggered-grid method: the vorticity is defined at the node points, and the velocity components are defined at the mesh-cell sides. This staggered-grid orientation provides for accurate representation of the vorticity at the node points and of the continuity equation at the mesh-cell centers. The method provides for the noniterative solution of the flowfield and satisfies the continuity equation to machine zero at each time step. The method is evaluated in terms of its ability to predict two-dimensional flow about an airfoil at -90 deg incidence for varying Reynolds number and laminar/turbulent models. The variations of the average loading and surface pressure distribution due to flap deflection, Reynolds number, and laminar or turbulent flow are presented and compared with experimental results. The comparisons indicate that the calculated drag, the drag reduction caused by flap deflection, and the calculated average surface pressure are in excellent agreement with the measured results at a similar Reynolds number.
A VaR Algorithm for Warrants Portfolio
NASA Astrophysics Data System (ADS)
Dai, Jun; Ni, Liyun; Wang, Xiangrong; Chen, Weizhong
Based on the Gamma-Vega Cornish-Fisher methodology, this paper proposes an algorithm for calculating VaR by adjusting the quantile at a given confidence level using the four moments (mean, variance, skewness and kurtosis) of the warrants portfolio return, and by estimating the portfolio variance with the EWMA methodology. The proposed algorithm also accounts for the attenuation of the effect of historical returns on the portfolio return of future days. An empirical study shows that, compared with the Gamma Cornish-Fisher method and the standard normal method, the VaR calculated by the Gamma-Vega Cornish-Fisher method improves the effectiveness of portfolio risk forecasting by virtue of considering both the Gamma risk and the Vega risk of the warrants. A significance test is conducted on the calculation results by employing the two-tailed test developed by Kupiec. Test results show that the calculated VaRs of the warrants portfolio all pass the significance test at the 5% significance level.
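A minimal sketch of the two ingredients named above, the Cornish-Fisher quantile adjustment and EWMA variance, applied to hypothetical return data; the Gamma/Vega moment construction from the warrant greeks is omitted:

```python
import numpy as np
from scipy.stats import norm

def ewma_sigma(returns, lam=0.94):
    """RiskMetrics-style EWMA volatility: older returns are weighted down
    geometrically, so their influence on the forecast decays over time."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return np.sqrt(var)

def cornish_fisher_var(returns, alpha=0.05):
    """VaR with the normal quantile adjusted for skewness S and excess
    kurtosis K via the Cornish-Fisher expansion."""
    r = np.asarray(returns)
    mu, sigma = r.mean(), ewma_sigma(r)
    S = ((r - mu) ** 3).mean() / r.std() ** 3
    K = ((r - mu) ** 4).mean() / r.std() ** 4 - 3.0
    z = norm.ppf(alpha)
    z_cf = (z + (z**2 - 1) * S / 6 + (z**3 - 3*z) * K / 24
            - (2*z**3 - 5*z) * S**2 / 36)
    return -(mu + z_cf * sigma)

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 0.02, 250)   # hypothetical daily portfolio returns
print(f"95% one-day VaR: {cornish_fisher_var(sample):.4f}")
```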
Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section
NASA Technical Reports Server (NTRS)
Demuren, A. O.
1989-01-01
Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses; two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in a triangular duct, a trapezoidal duct and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model produced better agreement with the measured data.
Geometrical optics approach in liquid crystal films with three-dimensional director variations.
Panasyuk, G; Kelly, J; Gartland, E C; Allender, D W
2003-04-01
A formal geometrical optics approach (GOA) to the optics of nematic liquid crystals whose optic axis (director) varies in more than one dimension is described. The GOA is applied to the propagation of light through liquid crystal films whose director varies in three spatial dimensions. As an example, the GOA is applied to the calculation of light transmittance for the case of a liquid crystal cell which exhibits the homeotropic to multidomainlike transition (HMD cell). Properties of the GOA solution are explored, and comparison with the Jones calculus solution is also made. For variations on a smaller scale, where the Jones calculus breaks down, the GOA provides a fast, accurate method for calculating light transmittance. The results of light transmittance calculations for the HMD cell based on the director patterns provided by two methods, direct computer calculation and a previously developed simplified model, are in good agreement.
40 CFR 66.21 - How to calculate the penalty.
Code of Federal Regulations, 2014 CFR
2014-07-01
... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...
40 CFR 66.21 - How to calculate the penalty.
Code of Federal Regulations, 2012 CFR
2012-07-01
... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...
40 CFR 66.21 - How to calculate the penalty.
Code of Federal Regulations, 2013 CFR
2013-07-01
... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...
40 CFR 66.21 - How to calculate the penalty.
Code of Federal Regulations, 2011 CFR
2011-07-01
... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...
40 CFR 66.21 - How to calculate the penalty.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...
Bond additivity corrections for quantum chemistry methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. F. Melius; M. D. Allendorf
1999-04-01
In the 1980s, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method depend only on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
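The coupling layer in such systems typically amounts to a Picard (fixed-point) iteration between the transport and thermal-hydraulics solvers; a schematic sketch with toy stand-ins for MCNP and SUBCHANFLOW:

```python
def coupled_steady_state(neutronics, thermal_hydraulics, power0,
                         tol=1e-5, max_iter=50):
    """Picard iteration typical of Monte-Carlo/sub-channel coupling:
    transport yields a power shape, the TH solver returns temperatures and
    coolant densities, and the loop repeats until the power change falls
    below tolerance. The callables stand in for the real codes."""
    power = power0
    for it in range(max_iter):
        temps, densities = thermal_hydraulics(power)
        new_power, keff = neutronics(temps, densities)
        change = max(abs(p - q) for p, q in zip(new_power, power))
        power = new_power
        if change < tol:
            return power, keff, it + 1
    raise RuntimeError("coupling did not converge")

# Toy stand-ins with an artificial negative feedback, for illustration only.
th = lambda p: ([600 + 100 * x for x in p], [0.7 - 0.01 * x for x in p])
neut = lambda t, d: ([1.0 + 0.1 * (di - 0.69) for di in d], 1.002)
print(coupled_steady_state(neut, th, power0=[1.0, 1.0]))
```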
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chechenin, N. G., E-mail: chechenin@sinp.msu.ru; Chuvilskaya, T. V.; Shirokova, A. A.
2015-10-15
As a continuation and development of our group's previous studies of nuclear reactions induced by protons of moderately high energy (between 10 and 400 MeV) in silicon, aluminum, and tungsten atoms, the present article reports results obtained for nuclear reactions on atoms of copper, which is among the most important components in materials for contact pads and pathways in modern and future ultralarge-scale integration circuits, especially in three-dimensional topology. The nuclear reactions in question lead to the formation of mass and charge spectra of recoil nuclei ranging from heavy target nuclei down to helium and hydrogen. The kinetic-energy spectra of the reaction products are calculated. The results of the calculations based on the procedure developed by our group are compared with the results of calculations and experiments performed by other authors.
Use of Displacement Damage Dose in an Engineering Model of GaAs Solar Cell Radiation Damage
NASA Technical Reports Server (NTRS)
Morton, T. L.; Chock, R.; Long, K. J.; Bailey, S.; Messenger, S. R.; Walters, R. J.; Summers, G. P.
2005-01-01
Current methods for calculating damage to solar cells are well documented in the GaAs Solar Cell Radiation Handbook (JPL 96-9). An alternative, the displacement damage dose (D_d) method, has been developed by Summers et al. This method is currently being implemented in the SAVANT computer program.
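In essence, the D_d method collapses an arbitrary particle spectrum onto a single damage metric by weighting fluence with the nonionizing energy loss (NIEL); a minimal sketch with illustrative, not evaluated, numbers:

```python
def displacement_damage_dose(fluences, niel_values):
    """D_d = sum over spectrum bins of fluence (cm^-2) x NIEL (MeV cm^2/g),
    giving a single displacement damage dose in MeV/g that correlates
    degradation across particle types and energies."""
    return sum(phi * niel for phi, niel in zip(fluences, niel_values))

# Hypothetical two-bin proton spectrum with illustrative NIEL values.
print(f"D_d = {displacement_damage_dose([1e11, 5e10], [4e-3, 9e-4]):.3e} MeV/g")
```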
Verification of Internal Dose Calculations.
NASA Astrophysics Data System (ADS)
Aissi, Abdelmadjid
The MIRD internal dose calculations have been in use for more than 15 years, but their accuracy has always been questionable. There have been attempts to verify these calculations; however, these attempts had various shortcomings which kept the question of verification of the MIRD data still unanswered. The purpose of this research was to develop techniques and methods to verify the MIRD calculations in a more systematic and scientific manner. The research consisted of improving a volumetric dosimeter, developing molding techniques, and adapting the Monte Carlo computer code ALGAM to the experimental conditions and vice versa. The organic dosimetric system contained TLD-100 powder and could be shaped to represent human organs. The dosimeter possessed excellent characteristics for the measurement of internal absorbed doses, even in the case of the lungs. The molding techniques are inexpensive and were used in the fabrication of dosimetric and radioactive source organs. The adaptation of the computer program provided useful theoretical data with which the experimental measurements were compared. The experimental data and the theoretical calculations were compared for 6 source organ-7 target organ configurations. The results of the comparison indicated the existence of an agreement between measured and calculated absorbed doses, when taking into consideration the average uncertainty (16%) of the measurements, and the average coefficient of variation (10%) of the Monte Carlo calculations. However, analysis of the data gave also an indication that the Monte Carlo method might overestimate the internal absorbed doses. Even if the overestimate exists, at least it could be said that the use of the MIRD method in internal dosimetry was shown to lead to no unnecessary exposure to radiation that could be caused by underestimating the absorbed dose. The experimental and the theoretical data were also used to test the validity of the Reciprocity Theorem for heterogeneous phantoms, such as the MIRD phantom and its physical representation, Mr. ADAM. The results indicated that the Reciprocity Theorem is valid within an average range of uncertainty of 8%.
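For reference, the MIRD schema being verified computes the target-organ dose as a sum over source organs of cumulated activity times an S value; a minimal sketch with hypothetical inputs:

```python
def mird_dose(cumulated_activities, s_values):
    """MIRD schema: absorbed dose in a target organ is the sum over source
    organs of the cumulated activity A~ (Bq s) times the S value (absorbed
    dose per unit cumulated activity, Gy per Bq s) for that source-target pair."""
    return sum(a * s_values[organ] for organ, a in cumulated_activities.items())

# Hypothetical two-source example for a single target organ.
A_tilde = {"liver": 3.0e9, "spleen": 8.0e8}   # Bq s
S = {"liver": 2.4e-13, "spleen": 6.1e-14}     # Gy per Bq s
print(f"target dose = {mird_dose(A_tilde, S):.3e} Gy")
```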
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Pinella, David; Garrison, Peter
1999-01-01
Collection efficiency and ice accretion calculations were made for a commercial transport using the NASA Lewis LEWICE3D ice accretion code, the ICEGRID3D grid code and the CMARC panel code. All of the calculations were made on a Windows 95 based personal computer. The ice accretion calculations were made for the nose, wing, horizontal tail and vertical tail surfaces. Ice shapes typifying those of a 30 minute hold were generated. Collection efficiencies were also generated for the entire aircraft using the newly developed unstructured collection efficiency method. The calculations highlight the flexibility and cost effectiveness of the LEWICE3D, ICEGRID3D, CMARC combination.
Proposed software system for atomic-structure calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, C.F.
1981-07-01
Atomic structure calculations are understood well enough that, at a routine level, an atomic structure software package can be developed. At the Atomic Physics Conference in Riga in 1978, L.V. Chernysheva and M.Y. Amusia of Leningrad University presented a paper on software for atomic calculations. Their system, called ATOM, is based on the Hartree-Fock approximation, and correlation is included within the framework of RPAE. Energy level calculations, transition probabilities, photo-ionization cross-sections and electron scattering cross-sections are some of the physical properties that can be evaluated by their system. The MCHF method, together with CI techniques and the Breit-Pauli approximation, also provides a sound theoretical basis for atomic structure calculations.
NASA Technical Reports Server (NTRS)
James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.
1988-01-01
Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests was performed to create a database from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.
The development and application of CFD technology in mechanical engineering
NASA Astrophysics Data System (ADS)
Wei, Yufeng
2017-12-01
Computational Fluid Dynamics (CFD) is the analysis of the physical phenomena involved in fluid flow and heat conduction by computer numerical calculation and graphical display. The fidelity with which the numerical method captures the physical problem, and the precision of the numerical solution, are directly related to computer hardware such as processor speed and memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied in water conservancy engineering, environmental engineering and industrial engineering. This paper summarizes the development of CFD, its theoretical basis and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.
Improvement of the 2D/1D Method in MPACT Using the Sub-Plane Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
Oak Ridge National Laboratory and the University of Michigan are jointly developing the MPACT code to be the primary neutron transport code for the Virtual Environment for Reactor Applications (VERA). To solve the transport equation, MPACT uses the 2D/1D method, which decomposes the problem into a stack of 2D planes that are then coupled with a 1D axial calculation. MPACT uses the Method of Characteristics for the 2D transport calculations and P3 for the 1D axial calculations, then accelerates the solution using the 3D Coarse Mesh Finite Difference (CMFD) method. Increasing the number of 2D MOC planes will increase the accuracy of the calculation, but will increase the computational burden of the calculations and can cause slow convergence or instability. To prevent these problems while maintaining accuracy, the sub-plane scheme has been implemented in MPACT. This method subdivides the MOC planes into sub-planes, refining the 1D P3 and 3D CMFD calculations without increasing the number of 2D MOC planes. To test the sub-plane scheme, three of the VERA Progression Problems were selected: Problem 3, a single assembly problem; Problem 4, a 3x3 assembly problem with control rods and Pyrex burnable poisons; and Problem 5, a quarter core problem. These three problems demonstrated that the sub-plane scheme can accurately produce intra-plane axial flux profiles that preserve the accuracy of the fine mesh solution. The eigenvalue differences are negligibly small, and differences in 3D power distributions are less than 0.1% for realistic axial meshes. Furthermore, the convergence behavior with the sub-plane scheme compares favorably with the conventional 2D/1D method, and the computational expense is decreased for all calculations due to the reduction in expensive MOC calculations.
Assessment of sustainable urban transport development based on entropy and unascertained measure.
Li, Yancang; Yang, Jing; Shi, Huawang; Li, Yijie
2017-01-01
To find a more effective method for assessing sustainable urban transport development, a comprehensive assessment model was established based on the unascertained measure. Considering the factors influencing urban transport development, the comprehensive assessment indexes were selected, including urban economic development, transport demand, environmental quality and energy consumption, and an assessment system for sustainable urban transport development was proposed. In view of the different influencing factors of urban transport development, the index weights were calculated using the entropy weight coefficient method. Qualitative and quantitative analyses were conducted according to actual conditions. Then, the grade was obtained by using the credible degree recognition criterion, from which the urban transport development level can be determined. Finally, a comprehensive assessment method for urban transport development was introduced. Application practice showed that the method can be used reasonably and effectively for the comprehensive assessment of urban transport development.
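The entropy weight calculation named above is standard and compact enough to sketch. A minimal Python version, with an illustrative non-negative index matrix (all values hypothetical):

    import numpy as np

    def entropy_weights(X):
        # Entropy weight method: X is an (n alternatives x m indexes) matrix
        # of non-negative index values; returns one weight per index.
        P = X / X.sum(axis=0)                     # column-normalize
        n = X.shape[0]
        logP = np.where(P > 0, np.log(P), 0.0)    # treat 0*log(0) as 0
        e = -(P * logP).sum(axis=0) / np.log(n)   # entropy of each index
        d = 1.0 - e                               # degree of diversification
        return d / d.sum()

    # Example: 4 cities scored on economy, demand, environment, energy
    X = np.array([[0.7, 0.4, 0.8, 0.5],
                  [0.6, 0.9, 0.3, 0.6],
                  [0.8, 0.5, 0.6, 0.7],
                  [0.5, 0.7, 0.7, 0.4]])
    print(entropy_weights(X))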
Year End Progress Report on Rattlesnake Improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yaqi; DeHart, Mark David; Gleicher, Frederick Nathan
Rattlesnake is a MOOSE-based radiation transport application developed at INL to support modern multi-physics simulations. At the beginning of the last year, Rattlesnake was able to perform steady-state, transient and eigenvalue calculations for the multigroup radiation transport equations. Various discretization schemes have been implemented, including the continuous finite element method (FEM) with the discrete ordinates method (SN) and the spherical harmonics expansion method (PN) for the self-adjoint angular flux (SAAF) formulation, continuous FEM (CFEM) with SN for the least square (LS) formulation, and the diffusion approximation with CFEM and discontinuous FEM (DFEM). A separate toolkit, YAKXS, for multigroup cross section management was developed to support Rattlesnake calculations with feedback both from changes in field variables, such as fuel temperature and coolant density, and from changes in isotope inventory. The framework for doing nonlinear diffusion acceleration (NDA) within Rattlesnake has been set up, and NDA calculations with both the SAAF-SN-CFEM scheme and Monte Carlo with OpenMC have been performed. It was also used for coupling BISON and RELAP-7 for full-core multiphysics simulations. Within the last fiscal year, significant improvements have been made in Rattlesnake. Rattlesnake development was migrated into our internal GitLab development environment at the end of 2014; since then, a total of 369 merge requests have been accepted into Rattlesnake. It is noted that the MOOSE framework on which Rattlesnake is based is under continuous development, and improvements made in MOOSE can improve Rattlesnake; the effort MOOSE developers spent patching Rattlesnake for improvements made on the framework side is acknowledged. This report will not cover the code restructuring for better readability and modularity or the documentation improvements, on which we have spent tremendous effort. It only details some of the improvements in the following sections.
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
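The abstract does not give the acceleration scheme or the derivative computation themselves, but the underlying subspace iteration for the generalized symmetric eigenproblem K x = lambda M x is standard. A bare-bones sketch (numpy/scipy assumed; this is the plain iteration, not the accelerated variant the paper develops):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve, eigh

    def subspace_iteration(K, M, p, iters=50):
        # Lowest p modes of K x = lam M x (K, M symmetric, K positive definite)
        n = K.shape[0]
        lu = lu_factor(K)
        X = np.random.default_rng(0).standard_normal((n, p))
        for _ in range(iters):
            X = lu_solve(lu, M @ X)               # block inverse iteration
            Kr, Mr = X.T @ K @ X, X.T @ M @ X     # Rayleigh-Ritz projection
            lam, Q = eigh(Kr, Mr)
            X = X @ Q
            X /= np.sqrt(np.diag(X.T @ M @ X))    # M-normalize the columns
        return lam, X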
ERIC Educational Resources Information Center
Mills, Myron L.
1988-01-01
A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)
Methods to approximate reliabilities in single-step genomic evaluation
USDA-ARS?s Scientific Manuscript database
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
Predicting crystalline lens fall caused by accommodation from changes in wavefront error
He, Lin; Applegate, Raymond A.
2011-01-01
PURPOSE To illustrate and develop a method for estimating crystalline lens decentration as a function of accommodative response using changes in wavefront error, and to show the method and its limitations using previously published data (2004) from 2 iridectomized monkey eyes, so that clinicians understand how spherical aberration can induce coma, in particular in intraocular lens surgery. SETTINGS College of Optometry, University of Houston, Houston, USA. DESIGN Evaluation of diagnostic test or technology. METHODS Lens decentration was estimated by displacing downward the wavefront error of the lens with respect to the limiting aperture (7.0 mm) and the ocular first-surface wavefront error for each accommodative response (0.00 to 11.00 diopters) until calculated values of vertical coma matched previously published experimental data (2007). Lens decentration was also calculated using an approximation formula that included only spherical aberration and vertical coma. RESULTS The change in calculated vertical coma was consistent with downward lens decentration. Calculated downward lens decentration peaked at approximately 0.48 mm of vertical decentration in the right eye and approximately 0.31 mm in the left eye using all Zernike modes through the 7th radial order. Lens decentration calculated using only the coma and spherical aberration formulas peaked at approximately 0.45 mm in the right eye and approximately 0.23 mm in the left eye. CONCLUSIONS Lens fall as a function of accommodation was quantified noninvasively using changes in vertical coma driven principally by the accommodation-induced changes in spherical aberration. The newly developed method was valid for a large pupil only. PMID:21700108
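The matching procedure lends itself to a short numerical sketch: shift the lens wavefront downward over the fixed aperture, re-project onto the vertical-coma Zernike term, and bisect on the displacement until the published coma value is reproduced. The sketch below is illustrative, not the paper's implementation; it assumes the lens wavefront is available as a callable W_lens(x, y) and that coma versus displacement crosses the measured value inside the search bracket:

    import numpy as np

    def vertical_coma(W, R=3.5, N=256):
        # Project W onto orthonormal Zernike Z(3,-1) = sqrt(8)(3r^3 - 2r)sin(t)
        x = np.linspace(-R, R, N)
        X, Y = np.meshgrid(x, x)
        rho, theta = np.hypot(X, Y) / R, np.arctan2(Y, X)
        mask = rho <= 1.0
        Z = np.sqrt(8.0) * (3.0 * rho**3 - 2.0 * rho) * np.sin(theta)
        return (W(X, Y) * Z)[mask].sum() / mask.sum()   # mean over the pupil

    def estimate_decentration(W_lens, coma_measured, d_max=1.0):
        # Bisect on downward displacement d until coma matches the measurement
        f = lambda d: vertical_coma(lambda X, Y: W_lens(X, Y + d)) - coma_measured
        lo, hi = 0.0, d_max
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)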
Front panel engineering with CAD simulation tool
NASA Astrophysics Data System (ADS)
Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe
1999-04-01
The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry and lighting efficiency creates new challenges for designers. Photometric design is limited by the ability to predict correctly the result of a lighting system, so as to save the costs and time of building multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method for lighting systems, developed in the software SPEOS. The main features required of the tool include a CAD interface, to enable fast and efficient transfer between mechanical and light design software, source modeling, a light transfer model and an optimization tool. The CAD interface is mainly a prototype of transfer and is not the subject here. Photometric simulation is achieved efficiently by using measured source encoding and simulation by the Monte Carlo method. Today, the advantages and limitations of the Monte Carlo method are well known: noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, because each optimization pass includes a Monte Carlo simulation and therefore a long calculation time. The problem was initially defined as an engineering method of study. Experience shows that understanding and mastering the phenomenon of light transfer is limited by the complexity of non-sequential propagation, so the engineer must call on the help of a simulation and optimization tool. The key requirement for efficient optimization is a quick method of simulating light transfer. Much work has been done in this area and some interesting results can be observed. The Monte Carlo method wastes time calculating results and information that are not required for the simulation, and low-efficiency transfer systems waste a great deal of it. More generally, light transfer can be simulated efficiently when the integrated result is composed of elementary sub-results that involve quick, analytically calculated intersections. Two axes of research thus appear: fast integration, and fast calculation of geometric intersections. The first brings general solutions that are also valid for multi-reflection systems; the second requires careful treatment of the intersection calculation. An interesting approach is the subdivision of space into voxels, a method of 3D division of space adapted to the objects and their locations. Experimental software has been developed to validate the method. The gain is particularly high in complex systems, and an important reduction in calculation time has been achieved.
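The voxel subdivision mentioned at the end is a standard acceleration structure. A minimal Python sketch of the binning side, leaving out the ray marching across cells; object identifiers, bounds and resolution are illustrative:

    import numpy as np
    from collections import defaultdict

    class VoxelGrid:
        # Uniform spatial subdivision: objects are binned into voxels by their
        # axis-aligned bounding boxes, so a ray need only test objects
        # registered in the voxels it actually visits.
        def __init__(self, lo, hi, res):
            self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
            self.res = np.asarray(res, int)
            self.size = (self.hi - self.lo) / self.res
            self.cells = defaultdict(list)

        def _cell(self, p):
            return tuple(np.clip(((p - self.lo) // self.size).astype(int),
                                 0, self.res - 1))

        def insert(self, obj_id, aabb_lo, aabb_hi):
            i0 = self._cell(np.asarray(aabb_lo, float))
            i1 = self._cell(np.asarray(aabb_hi, float))
            for i in range(i0[0], i1[0] + 1):
                for j in range(i0[1], i1[1] + 1):
                    for k in range(i0[2], i1[2] + 1):
                        self.cells[(i, j, k)].append(obj_id)

        def candidates(self, p):
            # Objects whose boxes overlap the voxel containing point p
            return self.cells.get(self._cell(np.asarray(p, float)), [])

    grid = VoxelGrid(lo=(0, 0, 0), hi=(1, 1, 1), res=(16, 16, 16))
    grid.insert("panel_37", (0.2, 0.2, 0.2), (0.3, 0.25, 0.22))
    print(grid.candidates((0.21, 0.22, 0.21)))   # -> ['panel_37']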
Viscosity Measurement of Highly Viscous Liquids Using Drop Coalescence in Low Gravity
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Ethridge, Edwin; Maxwell, Daniel
1999-01-01
The method of drop coalescence is being investigated for use as a method for determining the viscosity of highly viscous undercooled liquids. A low gravity environment is necessary in this case to minimize the undesirable effects of body forces and liquid motion in levitated drops. Also, the low gravity environment allows for investigating large liquid volumes, which can lead to much higher accuracy for the viscosity calculations than is possible under 1-g conditions. The drop coalescence method is preferred over the drop oscillation technique since the latter can be applied only to liquids with vanishingly small viscosities. The technique developed relies both on highly accurate solution of the Navier-Stokes equations and on data from experiments conducted in a near-zero-gravity environment. In the analytical aspect of the method, two liquid volumes are brought into contact and coalesce under the action of surface tension alone. The free surface geometry and its velocity during coalescence, obtained from numerical computations, are compared with an analogous experimental model. The viscosity in the numerical computations is then adjusted to bring the calculations into agreement with the experimental results; the true liquid viscosity is the one which brings the calculations closest to the experiment. Results are presented for method validation experiments performed recently on board the NASA/KC-135 aircraft. The numerical solution for this validation case was produced using the Boundary Element Method. In these tests the viscosity of a highly viscous liquid, in this case glycerine at room temperature, was determined to a high degree of accuracy using the liquid coalescence method. These experiments gave very encouraging results, which are discussed together with plans for implementing the method in a shuttle flight experiment.
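The adjust-until-it-matches step is an ordinary one-parameter fit, which can be sketched as follows. Here simulate_coalescence is a hypothetical wrapper around the boundary-element solver and t_exp, r_exp stand for a measured neck-radius history; only the fitting loop is real:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fit_viscosity(t_exp, r_exp, simulate_coalescence, mu_lo, mu_hi):
        # Find the viscosity that brings the calculation closest to experiment
        def mismatch(mu):
            return np.sum((simulate_coalescence(mu, t_exp) - r_exp) ** 2)
        res = minimize_scalar(mismatch, bounds=(mu_lo, mu_hi), method="bounded")
        return res.x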
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1972-01-01
A relatively simple method is presented for including the effect of variable entropy at the boundary-layer edge in a heat transfer method developed previously. For each inviscid surface streamline an approximate shock-wave shape is calculated using a modified form of Maslen's method for inviscid axisymmetric flows. The entropy for the streamline at the edge of the boundary layer is determined by equating the mass flux through the shock wave to that inside the boundary layer. Approximations used in this technique allow the heating rates along each inviscid surface streamline to be calculated independently of the other streamlines. The shock standoff distances computed by the present method are found to compare well with those computed by Maslen's asymmetric method. Heating rates are presented for blunted circular and elliptical cones and a typical space shuttle orbiter at angle of attack. Variable entropy effects are found to increase heating rates downstream of the nose significantly above those computed using normal-shock entropy, with turbulent heating rates increasing more than laminar rates. Effects of Reynolds number and angle of attack are also shown.
Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials
Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda
2016-01-01
In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
Phonon Calculations Using the Real-Space Multigrid Method (RMG)
NASA Astrophysics Data System (ADS)
Zhang, Jiayong; Lu, Wenchang; Briggs, Emil; Cheng, Yongqiang; Ramirez-Cuesta, A. J.; Bernholc, Jerry
RMG, a DFT-based open-source package using the real-space multigrid method, has proven to work effectively on large-scale systems with thousands of atoms. Our recent work has shown its practicability for high-accuracy phonon calculations employing the frozen phonon method. In this method, a primary unit cell with a small lattice constant is enlarged to a supercell that is sufficiently large to obtain the force constants matrix by finite displacements of atoms in the supercell. The open-source package Phonopy is used to determine the necessary displacements by taking symmetry into account. A Python script coupling RMG and Phonopy enables us to perform high-throughput calculations of phonon properties. We have applied this method to many systems, such as silicon, silica glass, ZIF-8, etc. Results from RMG are compared to the experimental spectra measured using the VISION inelastic neutron scattering spectrometer at the Spallation Neutron Source at ORNL, as well as to results from other DFT codes. The computing resources were made available through the VirtuES (Virtual Experiments in Spectroscopy) project, funded by the Laboratory Directed Research and Development program (LDRD project No. 7739).
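The frozen-phonon workflow (displace, collect forces, build force constants, diagonalize the dynamical matrix) can be shown end to end on a toy system. Below, a 1D harmonic chain stands in for the DFT supercell, so the result can be checked against the known dispersion:

    import numpy as np

    # Frozen-phonon toy: displace one atom in a periodic chain, record the
    # forces on all atoms, form one row of the force-constant matrix, and
    # diagonalize D(q) to obtain phonon frequencies.
    N, k, m, h = 16, 1.0, 1.0, 1e-4   # atoms, spring const., mass, displacement

    def forces(u):
        # Harmonic nearest-neighbour forces (stand-in for the DFT forces)
        return k * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

    u = np.zeros(N)
    u[0] = h
    phi = -forces(u) / h              # force constants Phi(0, j) = -dF_j/du_0

    qs = 2.0 * np.pi * np.arange(N) / N
    D = np.array([np.sum(phi * np.exp(-1j * q * np.arange(N))).real for q in qs])
    omega = np.sqrt(np.abs(D) / m)
    print(omega)                      # matches 2*sqrt(k/m)*|sin(q/2)|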
Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis
NASA Astrophysics Data System (ADS)
Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro
2017-04-01
The 2011 Great East Japan Earthquake (GEJE) showed that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating standing land structures and damage to them requires highly precise evaluation of three-dimensional fluid motion - an expensive process. Our research goals were thus to develop STOC-CADMAS (Arikawa and Tomita, 2016) coupled with structure analysis (Arikawa et al., 2009), to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of breakwaters at Kamaishi Bay. Fig. 1 shows the whole of this calculation system. The STOC-ML simulator approximates pressure by hydrostatic pressure and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily covers the stretch from the epicenter to the shallow region. STOC-IC solves pressure based on a Poisson equation to account for shallower, more complex topography, while reducing computation cost slightly by setting the water surface from an equation of continuity; it calculates the area near a port. CS3D solves a Navier-Stokes equation and sets the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR solves the structure analysis, including the geotechnical analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay; almost all breakwaters were washed away in the calculation, a situation similar to the actual damage at Kamaishi Bay. REFERENCES T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of J.S.C.E., Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.
The least-squares finite element method for low-mach-number compressible viscous flows
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1994-01-01
The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods rely on staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be solved effectively. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.
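The symmetric positive-definite property can be seen in one dimension. A minimal sketch (not the paper's flow solver): minimize the L2 residual of u' = f over piecewise-linear elements with u(0) = 0, whose normal equations are SPD and need no staggering or preconditioning:

    import numpy as np

    # 1D least-squares FEM demo: minimize ||u' - f||^2, A_ij = int phi_i' phi_j'
    N = 32                          # elements on [0, 1]
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    F = lambda t: t**2              # antiderivative of f(t) = 2t, so u = x^2

    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h
    A[-1, -1] = 1.0 / h             # the last node touches only one element

    # b_i = int f phi_i' dx, with phi_i' = +1/h left of node i, -1/h right of it
    b = np.array([(F(x[i]) - F(x[i - 1])) - (F(x[i + 1]) - F(x[i]))
                  for i in range(1, N)] + [F(x[N]) - F(x[N - 1])]) / h

    u = np.linalg.solve(A, b)       # SPD system; Cholesky would also work
    print(np.max(np.abs(u - x[1:]**2)))   # nodal values exact to round-off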
NASA Astrophysics Data System (ADS)
Rohandi, M.; Tuloli, M. Y.; Jassin, R. T.
2018-02-01
This research aims to determine the development priority of underwater tourism sites in Gorontalo province using the Analytical Hierarchy Process (AHP), one of the DSS methods applying Multi-Attribute Decision Making (MADM). The method used 5 criteria and 28 alternatives to determine the best priority for underwater tourism site development in Gorontalo province. Based on the AHP calculation, the best development priority is Pulau Cinta, whose total AHP score is 0.489, or 48.9%. This DSS produced reliable results, faster solutions, time savings, and low cost for the decision makers in selecting the best underwater tourism site to be developed.
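The core AHP step is extracting a priority vector and consistency ratio from a pairwise comparison matrix. A minimal Python sketch with an illustrative 3x3 matrix (the paper's actual 5-criteria judgments are not reproduced here):

    import numpy as np

    def ahp_priorities(A):
        # Priority vector = normalized principal eigenvector of A;
        # consistency ratio CR = CI / RI, with CR < 0.1 conventionally accepted
        w, V = np.linalg.eig(A)
        kmax = np.argmax(w.real)
        p = np.abs(V[:, kmax].real)
        p /= p.sum()
        n = A.shape[0]
        ci = (w[kmax].real - n) / (n - 1)        # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
        return p, ci / ri

    # Example: 3 criteria compared on Saaty's 1-9 scale
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    p, cr = ahp_priorities(A)
    print(p, cr)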
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1985-01-01
A brief review is presented of various problems which are confronted in the development of an unsteady finite difference potential code. This review is conducted mainly in the context of what is done for typical small-disturbance and full-potential methods. The issues discussed include choice of equation, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including an unsteady three-dimensional rotor calculation, are demonstrated.
Pilot-in-the-Loop CFD Method Development
2016-02-01
Contract # N00014-14-C-0020. Pilot-in-the-Loop CFD Method Development, Progress Report (CDRL A001), for the period: October 21... of the aircraft from the rest of its external environment. For example, ship airwakes are calculated using CFD solutions without the presence of the... approaches with the goal of real-time, fully coupled CFD for virtual dynamic interface modeling & simulation. Penn State is supporting the project.
Pilot-in-the Loop CFD Method Development
2016-04-27
Contract # N00014-14-C-0020. Pilot-in-the-Loop CFD Method Development, Progress Report (CDRL A001), for the period: January 21... aerodynamics of the aircraft from the rest of its external environment. For example, ship airwakes are calculated using CFD solutions without the presence of... hardware approaches with the goal of real-time, fully coupled CFD for virtual dynamic interface modeling & simulation. Penn State is supporting the project.
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This leads to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: less training and fewer calculation errors yield substantial savings in design time and related cost. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners and UAVs to fighters. Data for the various sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise in using the tool to design a new airplane. Using these tools will show an improvement in efficiency over using separate programs, due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML will lead to quicker resolution of problems compared with conventional methods.
Mörschel, Philipp; Schmidt, Martin U
2015-01-01
A crystallographic quantum-mechanical/molecular-mechanical model (c-QM/MM model) with full space-group symmetry has been developed for molecular crystals. The lattice energy was calculated by quantum-mechanical methods for short-range interactions and force-field methods for long-range interactions. The quantum-mechanical calculations covered the interactions within the molecule and the interactions of a reference molecule with each of the surrounding 12-15 molecules. The interactions with all other molecules were treated by force-field methods. In each optimization step the energies in the QM and MM shells were calculated separately as single-point energies; after adding both energy contributions, the crystal structure (including the lattice parameters) was optimized accordingly. The space-group symmetry was maintained throughout. Crystal structures with more than one molecule per asymmetric unit, e.g. structures with Z' = 2, hydrates and solvates, have been optimized as well. Test calculations with different quantum-mechanical methods on nine small organic molecules revealed that the density functional theory methods with dispersion correction using the B97-D functional with 6-31G* basis set in combination with the DREIDING force field reproduced the experimental crystal structures with good accuracy. Subsequently the c-QM/MM method was applied to nine compounds from the CCDC blind tests resulting in good energy rankings and excellent geometric accuracies.
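The additive QM/MM partition described above reduces, per single-point evaluation, to a bookkeeping rule. A schematic Python sketch in which e_qm_intra, e_qm_pair and e_mm_pair are placeholders for calls into actual QM and force-field codes, not the authors' implementation:

    # Lattice energy of the reference molecule: intramolecular QM energy,
    # QM pair interactions with the 12-15 nearest neighbours, and force-field
    # pair interactions with all remaining molecules.
    def lattice_energy(reference, neighbours_near, neighbours_far,
                       e_qm_intra, e_qm_pair, e_mm_pair):
        e = e_qm_intra(reference)                                  # QM, intra
        e += sum(e_qm_pair(reference, m) for m in neighbours_near) # QM shell
        e += sum(e_mm_pair(reference, m) for m in neighbours_far)  # MM shell
        return e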
Sample Size Calculations for Micro-randomized Trials in mHealth
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.
2015-01-01
The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device and are intended to help an individual make healthy decisions "in the moment," and thus have a proximal, near-future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of the associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a "micro-randomized" trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized on the hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831
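A crude simulation can convey the design's logic, though it is only a toy stand-in for the authors' test statistic and calculator: each participant is randomized at every decision point, with a constant proximal effect, i.i.d. noise, and a simple two-sample z-test in place of the paper's method:

    import numpy as np

    def power(n, T, effect, sigma, reps=2000, seed=1):
        # n participants, T decision points each, treatment with p = 0.5
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            a = rng.integers(0, 2, size=(n, T))              # randomizations
            y = effect * a + rng.normal(0.0, sigma, (n, T))  # proximal outcomes
            d = y[a == 1].mean() - y[a == 0].mean()
            se = np.sqrt(y[a == 1].var() / (a == 1).sum()
                         + y[a == 0].var() / (a == 0).sum())
            hits += abs(d / se) > 1.96                       # two-sided 5% test
        return hits / reps

    print(power(n=40, T=210, effect=0.05, sigma=1.0))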
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of the longitudinal dynamics of a transport aircraft is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and the real parameter uncertainties (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.
Pillai, S; Singhvi, I
2008-09-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves the formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The HPLC method is a reversed-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v), pH 7.0, as the mobile phase. All the developed methods obey Beer's law in the concentration ranges employed for the respective methods. Results of analysis were validated statistically and by recovery studies.
Development of the triplet singularity for the analysis of wings and bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Woodward, F. A.
1981-01-01
A supersonic triplet singularity was developed which eliminates internal waves generated by panels having supersonic edges. The triplet is a linear combination of source and vortex distributions which gives directional properties to the perturbation flow field surrounding the panel. The theoretical development of the triplet singularity is described together with its application to the calculation of surface pressures on wings and bodies. Examples are presented comparing the results of the new method with other supersonic methods and with experimental data.
Self-organization of developing embryo using scale-invariant approach
2011-01-01
Background Self-organization is a fundamental feature of living organisms at all hierarchical levels, from molecule to organ. It has also been documented in developing embryos. Methods In this study, a scale-invariant power law (SIPL) method has been used to study self-organization in developing embryos. The SIPL coefficient was calculated using a centro-axial skew symmetrical matrix (CSSM) generated by entering the components of the Cartesian coordinates; for each component, one CSSM was generated. A basic square matrix (BSM) was constructed and its determinant was calculated in order to estimate the SIPL coefficient. This was applied to developing C. elegans during early stages of embryogenesis. The power law property of the method was evaluated using a straight line and the Koch curve, and the results were consistent with fractal dimensions (fd). Diffusion-limited aggregation (DLA) was used to validate the SIPL method. Results and conclusion The fractal dimensions of both the straight line and the Koch curve showed consistency with the SIPL coefficients, which indicated the power law behavior of the SIPL method. The results showed that the ABp sublineage had a higher SIPL coefficient than EMS, indicating that ABp is more organized than EMS. The fd determined using DLA was higher in ABp than in EMS, and its value was consistent with type 1 cluster formation, while that in EMS was consistent with type 2. PMID:21635789
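The fractal-dimension benchmarks used above (a straight line should give fd near 1) are commonly computed by box counting; a minimal sketch of that standard method, not of the authors' CSSM construction:

    import numpy as np

    def box_counting_dimension(points, eps_list):
        # Count occupied boxes N(eps) at each scale and fit
        # log N against log(1/eps); the slope estimates fd
        counts = []
        for eps in eps_list:
            boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)),
                              np.log(counts), 1)
        return slope

    # A straight segment should come out near fd = 1
    line = np.column_stack([np.linspace(0, 1, 4000), np.zeros(4000)])
    print(box_counting_dimension(line, [0.1, 0.05, 0.02, 0.01, 0.005]))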
2011-01-01
Background The reliable and robust estimation of ligand binding affinity continues to be a challenge in drug design. Many current methods rely on molecular mechanics (MM) calculations which do not fully explain complex molecular interactions. Full quantum mechanical (QM) computation of the electronic state of protein-ligand complexes has recently become possible by the latest advances in the development of linear-scaling QM methods such as the ab initio fragment molecular orbital (FMO) method. This approximate molecular orbital method is sufficiently fast that it can be incorporated into the development cycle during structure-based drug design for the reliable estimation of ligand binding affinity. Additionally, the FMO method can be combined with approximations for entropy and solvation to make it applicable to binding affinity prediction for a broad range of targets and chemotypes. Results We applied this method to examine the binding affinity for a series of published cyclin-dependent kinase 2 (CDK2) inhibitors. We calculated the binding affinity for 28 CDK2 inhibitors using the ab initio FMO method based on a number of X-ray crystal structures. The sum of the pair interaction energies (PIE) was calculated and used to explain the gas-phase enthalpic contribution to binding. The correlation of the ligand potencies to the protein-ligand interaction energies obtained from FMO was examined and gave a good correlation, which outperformed three MM force field based scoring functions used to approximate the free energy of binding. Although the FMO calculation allows the enthalpic component of the binding interactions to be understood at the quantum level, as it is an in vacuo single-point calculation, the entropic component and solvation terms are neglected. For this reason a more accurate and predictive estimate of the binding free energy was desired. Therefore, additional terms describing the protein-ligand interactions were calculated to improve the correlation of the FMO-derived values with the experimental free energies of binding. These terms account for the polar and non-polar solvation of the molecule, estimated by the Poisson-Boltzmann equation and the solvent accessible surface area (SASA) respectively, as well as a correction term for ligand entropy. A quantitative structure-activity relationship (QSAR) model obtained by Partial Least Squares projection to latent structures (PLS) analysis of the ligand potencies and the calculated terms showed a strong correlation (r2 = 0.939, q2 = 0.896) for the 14-molecule test set, which had a Pearson rank order correlation of 0.97. A further set of 14 molecules was well predicted (r2 = 0.842) and could be used to obtain meaningful estimates of the binding free energy. Conclusions Our results show that binding energies calculated with the FMO method correlate well with published data. Analysis of the terms used to derive the FMO energies adds greater understanding of the binding interactions than can be gained by MM methods. Combining this information with additional terms and creating a scaled model to describe the data results in more accurate predictions of ligand potencies than the absolute values obtained by FMO alone. PMID:21219630
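The final regression step (PLS over FMO interaction energies, solvation terms and an entropy correction) follows a standard workflow. A minimal sketch with synthetic placeholder arrays in place of the published descriptor values:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # 14 ligands x 4 descriptors (PIE, polar solvation, SASA term, entropy
    # correction); values here are synthetic, only the workflow is shown.
    X = np.random.default_rng(0).normal(size=(14, 4))
    y = X @ np.array([0.6, 0.2, 0.15, 0.05]) + 0.1   # synthetic dG values

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)
    print(pls.score(X, y))            # R^2 of the fitted model
    print(pls.predict(X)[:3].ravel()) # predicted binding free energies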
Instanton rate constant calculations close to and above the crossover temperature.
McConnell, Sean; Kästner, Johannes
2017-11-15
Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that rate constants calculated purely from solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path are not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate the known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H2 + OH → H + H2O. © 2017 Wiley Periodicals, Inc.
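For reference, the crossover temperature in instanton theory is conventionally T_c = hbar * omega_b / (2 pi k_B), where omega_b is the magnitude of the imaginary barrier frequency; a one-liner in Python (the 1500 cm^-1 value is only an illustration):

    from scipy.constants import hbar, k, c, pi

    def crossover_temperature(wavenumber_cm):
        # T_c = hbar * omega_b / (2 pi k_B), barrier frequency given as a
        # wavenumber in cm^-1 (magnitude of the imaginary mode)
        omega_b = 2.0 * pi * c * wavenumber_cm * 100.0   # cm^-1 -> rad/s
        return hbar * omega_b / (2.0 * pi * k)

    print(crossover_temperature(1500.0))   # about 343 K for a 1500i cm^-1 mode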
Environment of Space Interactions with Space Systems
NASA Technical Reports Server (NTRS)
2004-01-01
The primary product of this research project was a computer program named SAVANT. This program uses the Displacement Damage Dose (DDD) method of calculating radiation damage to solar cells. This calculation method was developed at the Naval Research Laboratory, and uses fundamental physical properties of the solar cell materials to predict radiation damage to the solar cells. This means that fewer experimental measurements are required to characterize the radiation damage to the cells, which results in a substantial cost savings to qualify solar cells for orbital missions. In addition, the DDD method makes it easier to characterize cells that are already being used, but have not been fully tested using the older technique of characterizing radiation damage. The computer program combines an orbit generator with NASA's AP-8 and AE-8 models of trapped protons and electrons. This allows the user to specify an orbit, and the program will calculate how the spacecraft moves during the mission, and the radiation environment that it encounters. With the spectrum of the particles, the program calculates how they would slow down while traversing the coverglass, and provides a slowed-down spectrum.
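The bookkeeping at the heart of the DDD method is a fold of the (slowed-down) differential particle spectrum with the NIEL curve for the cell material. A minimal sketch; both arrays below are toy placeholders, not SAVANT's data:

    import numpy as np

    E = np.logspace(-1, 2, 200)            # proton energy grid [MeV]
    phi = 1.0e8 * E**-2.0                  # fluence spectrum [p/cm^2/MeV] (toy)
    niel = 3.0e-3 * E**-0.5                # NIEL for GaAs [MeV cm^2/g] (toy)

    Dd = np.trapz(phi * niel, E)           # displacement damage dose [MeV/g]
    print(f"D_d = {Dd:.3e} MeV/g")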
NASA Astrophysics Data System (ADS)
Cox, Courtney E.; Phifer, Jeremy R.; Ferreira da Silva, Larissa; Gonçalves Nogueira, Gabriel; Ley, Ryan T.; O'Loughlin, Elizabeth J.; Pereira Barbosa, Ana Karolyne; Rygelski, Brett T.; Paluch, Andrew S.
2017-02-01
Solubility parameter based methods have long been a valuable tool for solvent formulation and selection. Of these methods, the MOdified Separation of Cohesive Energy Density (MOSCED) has recently been shown to correlate well the equilibrium solubility of multifunctional non-electrolyte solids. However, before it can be applied to a novel solute, a limited amount of reference solubility data is required to regress the necessary MOSCED parameters. Here we demonstrate for the solutes methylparaben, ethylparaben, propylparaben, butylparaben, lidocaine and ephedrine how conventional molecular simulation free energy calculations or electronic structure calculations in a continuum solvent, here the SMD or SM8 solvation model, can instead be used to generate the necessary reference data, resulting in a predictive flavor of MOSCED. Adopting the melting point temperature and enthalpy of fusion of these compounds from experiment, we are able to predict equilibrium solubilities. We find the method is able to well correlate the (mole fraction) equilibrium solubility in non-aqueous solvents over four orders of magnitude with good quantitative agreement.
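The role of the melting properties is the standard ideal-solubility relation, ln x_id = -(dH_fus/R)(1/T - 1/T_m); the MOSCED activity coefficient (not shown) then corrects for non-ideality. A short Python illustration with methylparaben-like numbers (illustrative, not the paper's fitted values):

    import numpy as np

    R = 8.314  # J/(mol K)

    def ideal_mole_fraction_solubility(dH_fus, T_m, T):
        # ln x_id = -(dH_fus / R)(1/T - 1/T_m); actual solubility follows as
        # x = x_id / gamma with gamma from MOSCED
        return np.exp(-dH_fus / R * (1.0 / T - 1.0 / T_m))

    print(ideal_mole_fraction_solubility(23.0e3, 400.0, 298.15))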
First Principles Approach to the Magnetocaloric Effect: Application to Ni2MnGa
NASA Astrophysics Data System (ADS)
Odbadrakh, Khorgolkhuu; Nicholson, Don; Rusanu, Aurelian; Eisenbach, Markus; Brown, Gregory; Evans, Boyd, III
2011-03-01
The magnetocaloric effect (MCE) has potential application in heating and cooling technologies. In this work, we present the calculated magnetic structure of a candidate MCE material, Ni2MnGa. The magnetic configurations of a 144-atom supercell are first explored using first-principles calculations; the results are then used to fit the exchange parameters of a Heisenberg Hamiltonian. The Wang-Landau method is used to calculate the magnetic density of states of the Heisenberg Hamiltonian. Based on this classical estimate, the magnetic density of states is then calculated using the Wang-Landau method with energies obtained from the first-principles method. The Curie temperature and other thermodynamic properties are calculated from the density of states. The relationships between the density of magnetic states and the field-induced adiabatic temperature change and isothermal entropy change are discussed. This work was sponsored by the Laboratory Directed Research and Development Program (ORNL), by the Mathematical, Information, and Computational Sciences Division, Office of Advanced Scientific Computing Research (US DOE), and by the Materials Sciences and Engineering Division, Office of Basic Energy Sciences (US DOE).
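The Wang-Landau idea (accept flips with probability min(1, g(E)/g(E')) while building ln g(E) on the fly) fits in a short sketch. Here a small 2D Ising lattice stands in for the fitted Heisenberg Hamiltonian, so this illustrates the sampler rather than the Ni2MnGa calculation:

    import numpy as np

    L = 8
    rng = np.random.default_rng(0)
    s = rng.choice([-1, 1], size=(L, L))
    E_levels = np.arange(-2 * L * L, 2 * L * L + 1, 4)  # allowed Ising energies
    idx = {int(E): i for i, E in enumerate(E_levels)}
    lng = np.zeros(E_levels.size)              # running estimate of ln g(E)
    hist = np.zeros(E_levels.size)
    E = int(-(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))).sum())
    f = 1.0                                    # modification factor ln f

    while f > 1e-6:                            # toy convergence schedule
        for _ in range(20000):
            i, j = rng.integers(L, size=2)
            dE = 2 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            if np.log(rng.random()) < lng[idx[E]] - lng[idx[E + int(dE)]]:
                s[i, j] *= -1                  # accept the spin flip
                E += int(dE)
            lng[idx[E]] += f                   # update ln g and the histogram
            hist[idx[E]] += 1
        visited = hist[hist > 0]
        if visited.min() > 0.8 * visited.mean():   # flatness criterion
            hist[:] = 0.0
            f /= 2.0                           # refine the modification factor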