Sample records for method calculations final

  1. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up the SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  2. Propellant Mass Fraction Calculation Methodology for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation that consider only the loaded propellant and the inert mass of the vehicle, more involved methods that consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations that exclude large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies.
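The two families of pmf definitions the abstract contrasts (loaded propellant over gross stage mass, versus usable propellant only) can be sketched as follows; the stage masses are hypothetical illustrative values, not data for any specific vehicle.

```python
# Illustrative sketch of two propellant-mass-fraction (pmf) variants; all
# masses (kg) are made-up example values.

def pmf_fundamental(loaded_propellant, inert_mass):
    """Fundamental definition: loaded propellant / (loaded propellant + inert mass)."""
    return loaded_propellant / (loaded_propellant + inert_mass)

def pmf_usable(loaded_propellant, residuals, inert_mass):
    """Variant excluding residual/unusable propellant from the numerator."""
    usable = loaded_propellant - residuals
    return usable / (loaded_propellant + inert_mass)

stage_propellant = 970_000.0   # hypothetical loaded propellant, kg
stage_inert = 85_000.0         # hypothetical inert (dry) mass, kg
stage_residuals = 4_500.0      # hypothetical trapped residuals, kg

print(f"fundamental pmf: {pmf_fundamental(stage_propellant, stage_inert):.4f}")
print(f"usable-propellant pmf: {pmf_usable(stage_propellant, stage_residuals, stage_inert):.4f}")
```

Even such a small residual term shifts pmf by a few tenths of a percent, which is why the paper stresses that comparisons are only meaningful under a common definition.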

  3. Using "Functional" Target Coordinates of the Subthalamic Nucleus to Assess the Indirect and Direct Methods of the Preoperative Planning: Do the Anatomical and Functional Targets Coincide?

    PubMed

    Rabie, Ahmed; Verhagen Metman, Leo; Slavin, Konstantin V

    2016-12-21

    To answer the question of whether the anatomical center of the subthalamic nucleus (STN), as calculated indirectly from stereotactic atlases or by direct visualization on magnetic resonance imaging (MRI), corresponds to the best functional target. Since the neighboring red nucleus (RN) is well visualized on MRI, we studied the relationships of the final target to its different borders. We analyzed the data of 23 PD patients (46 targets) who underwent a bilateral frame-based STN deep brain stimulation (DBS) procedure with microelectrode recording guidance. We calculated the coordinates of the active contact on the DBS electrode on postoperative MRI, which we referred to as the final "functional/optimal" target. The coordinates calculated by the atlas-based "indirect" and "direct" methods, as well as the coordinates of the different RN borders, were compared to these final coordinates. The mean ± SD of the final target coordinates was 11.7 ± 1.5 mm lateral (X), 2.4 ± 1.5 mm posterior (Y), and 6.1 ± 1.7 mm inferior to the mid-commissural point (Z). No significant differences were found between the "indirect" X, Z coordinates and those of the final targets. The "indirect" Y coordinate was significantly posterior to the Y of the final target, with a mean difference of 0.6 mm (p = 0.014). No significant differences were found between the "direct" X, Y, and Z coordinates and those of the final targets. The functional STN target is located in direct proximity to its anatomical center. During preoperative targeting, we recommend using the "direct" method and taking into consideration the relationships of the final target to the mid-commissural point (MCP) and the different RN borders.

  4. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball formation. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest; in the latter case we used a numerical method to calculate the more complicated final state probabilities. Additionally, we examined the formation of strange and charmed mesons, using existing data to fit the relevant model parameters.

  5. Propellant Mass Fraction Calculation Methodology for Launch Vehicles and Application to Ares Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.

  6. A general formalism for phase space calculations

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Deutchman, Philip A.; Townsend, Lawrence W.; Cucinotta, Francis A.

    1988-01-01

    General formulas for calculating the interactions of galactic cosmic rays with target nuclei are presented, along with methods for calculating the appropriate normalization volume elements and phase space factors. Particular emphasis is placed on obtaining correct phase space factors for 2- and 3-body final states. Calculations for both Lorentz-invariant and noninvariant phase space are presented.
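As a reference point for such phase space factors, the standard Lorentz-invariant n-body phase space and its two-body reduction can be written (in PDG-style normalization, which may differ from the conventions used in the paper) as:

```latex
d\Phi_n(P;\,p_1,\dots,p_n)
  = (2\pi)^4\,\delta^4\!\Big(P-\sum_{i=1}^{n} p_i\Big)\,
    \prod_{i=1}^{n} \frac{d^3 p_i}{(2\pi)^3\,2E_i},
\qquad
d\Phi_2 = \frac{1}{(4\pi)^2}\,\frac{|\mathbf{p}^{*}|}{\sqrt{s}}\,d\Omega^{*},
```

where $|\mathbf{p}^{*}| = \lambda^{1/2}(s, m_1^2, m_2^2)/(2\sqrt{s})$ is the center-of-mass momentum of the two-body final state and $\lambda(a,b,c) = a^2 + b^2 + c^2 - 2ab - 2bc - 2ca$ is the triangle function.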

  7. Fluctuations of thermodynamic quantities calculated from the fundamental equation of thermodynamics

    NASA Astrophysics Data System (ADS)

    Yan, Zijun; Chen, Jincan

    1992-02-01

    On the basis of the probability distribution of the various values of the fluctuation and the fundamental equation of thermodynamics of any given system, a simple and useful method of calculating the fluctuations is presented. Using this method, the fluctuations of thermodynamic quantities can be determined directly from the fundamental equation of thermodynamics. Finally, some examples are given to illustrate the use of the method.
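For context, the kind of results such a fluctuation calculation reproduces are the standard textbook formulas (well-known general results, not expressions taken from this paper):

```latex
\langle (\Delta E)^2 \rangle = k_B T^2 C_V,
\qquad
\langle (\Delta T)^2 \rangle = \frac{k_B T^2}{C_V},
\qquad
\langle (\Delta V)^2 \rangle = k_B T\, V \kappa_T,
```

where $C_V$ is the heat capacity at constant volume and $\kappa_T$ the isothermal compressibility, both of which follow from second derivatives of the fundamental equation.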

  8. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for determining the setting calculation sequence of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), was proposed to solve problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships of relays, possibly leading to larger iterative calculation workloads in setting calculations. A model-driven approach based on behavior trees (BT) was presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT was derived and the dependency relationships in multi-loop networks were modeled. The model was translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for multi-loop networks was finally calculated by tools. A 5-node multi-loop network was used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were then calculated, with results indicating that the method effectively reduces the number of forcibly broken edges for protection setting calculation in multi-loop networks.

  9. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    PubMed

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. After the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we truncate the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of 3D face and reflective surface models. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.

  10. Comparison of the Effects of Cooperative Learning and Traditional Learning Methods on the Improvement of Drug-Dose Calculation Skills of Nursing Students Undergoing Internships

    ERIC Educational Resources Information Center

    Basak, Tulay; Yildiz, Dilek

    2014-01-01

    Objective: The aim of this study was to compare the effectiveness of cooperative learning and traditional learning methods on the development of drug-calculation skills. Design: Final-year nursing students ("n" = 85) undergoing internships during the 2010-2011 academic year at a nursing school constituted the study group of this…

  11. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    NASA Astrophysics Data System (ADS)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods have difficulty calculating the short circuit current of a DFIG in engineering practice because of its complexity. A short circuit calculation method based on a pre-computed surface was therefore proposed, developed from the surface of short circuit current as it varies with the calculating impedance and the open circuit voltage. The short circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure of DFIG short circuit calculation considering its LVRT was designed. The correctness of the proposed method was verified by simulation.
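The core idea of a pre-computed surface, replacing a complex transient calculation with a table lookup over two variables, can be sketched as a bilinear interpolation on a 2-D grid. The grid values below are synthetic placeholders (current roughly proportional to voltage over impedance), not DFIG data from the paper.

```python
import numpy as np

# A "pre-computed surface" as a 2-D lookup table of short-circuit current
# tabulated over calculating impedance Z and open-circuit voltage U.
# Axes and values are illustrative per-unit placeholders.
Z_grid = np.linspace(0.1, 1.0, 10)    # per-unit impedance axis
U_grid = np.linspace(0.2, 1.2, 11)    # per-unit voltage axis
I_surface = U_grid[None, :] / Z_grid[:, None]  # synthetic surface: I ~ U / Z

def lookup_isc(z, u):
    """Bilinear interpolation of short-circuit current on the pre-computed surface."""
    i = np.clip(np.searchsorted(Z_grid, z) - 1, 0, len(Z_grid) - 2)
    j = np.clip(np.searchsorted(U_grid, u) - 1, 0, len(U_grid) - 2)
    tz = (z - Z_grid[i]) / (Z_grid[i + 1] - Z_grid[i])
    tu = (u - U_grid[j]) / (U_grid[j + 1] - U_grid[j])
    lo = (1 - tz) * I_surface[i, j] + tz * I_surface[i + 1, j]
    hi = (1 - tz) * I_surface[i, j + 1] + tz * I_surface[i + 1, j + 1]
    return (1 - tu) * lo + tu * hi

print(f"I_sc at Z=0.5, U=0.7: {lookup_isc(0.5, 0.7):.3f} p.u.")
```

In the paper's scheme one such surface would be pre-computed per time instant (to capture rotor excitation and crowbar activation), with the online calculation reduced to lookups of this kind.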

  12. Calculation of the compounded uncertainty of 14C AMS measurements

    NASA Astrophysics Data System (ADS)

    Nadeau, Marie-Josée; Grootes, Pieter M.

    2013-01-01

    The correct method to calculate conventional 14C ages from carbon isotopic ratios was summarised 35 years ago by Stuiver and Polach (1977) and is now accepted as the only method to calculate 14C ages. There is, however, no consensus regarding the treatment of AMS data, mainly regarding the uncertainty of the final result. The estimation and treatment of machine background, process blank, and/or in situ contamination is not uniform between laboratories, leading to differences in 14C results, mainly for older ages. As Donahue (1987) and Currie (1994), among others, mentioned, some laboratories find it important to use the scatter of several measurements as the uncertainty, while others prefer to use Poisson statistics. The contribution of the scatter of the standards, machine background, process blank, and in situ contamination to the uncertainty of the final 14C result is also treated in different ways. In the early years of AMS, several laboratories found it important to describe their calculation process in detail. In recent years, this practice has declined. We present an overview of the calculation process for 14C AMS measurements, looking at calculation practices published from the beginning of AMS until the present.
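The Stuiver and Polach (1977) convention referred to above defines the conventional age through the Libby mean life of 8033 yr and the fractionation-corrected ratio of sample to standard activity (often written F14C). A minimal sketch, with hypothetical sample values and simple first-order error propagation of the counting uncertainty only (the abstract's point is precisely that laboratories disagree on the fuller budget):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years (Stuiver & Polach convention)

def conventional_age(f14c, f14c_err):
    """Return (conventional age BP, 1-sigma uncertainty) from fraction modern."""
    age = -LIBBY_MEAN_LIFE * math.log(f14c)
    age_err = LIBBY_MEAN_LIFE * f14c_err / f14c  # first-order propagation
    return age, age_err

# hypothetical measurement: fraction modern 0.5000 +/- 0.0025
age, err = conventional_age(0.5000, 0.0025)
print(f"{age:.0f} +/- {err:.0f} BP")
```

A fraction modern of exactly 0.5 recovers the Libby half-life of 5568 yr, a convenient sanity check on the constant.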

  13. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials excited states properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  14. Electron-impact ionization of atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Baertschy, Mark David

    2000-10-01

    Since the invention of quantum mechanics, even the simplest example of collisional breakup in a system of charged particles, e⁻ + H → H⁺ + e⁻ + e⁻, has stood as one of the last unsolved fundamental problems in atomic physics. A complete solution requires calculating the energies and directions for a final state in which three charged particles are moving apart. Advances in the formal description of three-body breakup have yet to lead to a viable computational method. Traditional approaches, based on two-body formalisms, have been unable to produce differential cross sections for the three-body final state. Now, by using a mathematical transformation of the Schrödinger equation that makes the final state tractable, a complete solution has finally been achieved. Under this transformation, the scattering wave function can be calculated without imposing explicit scattering boundary conditions. This approach has produced the first triple differential cross sections that agree on an absolute scale with experiment as well as the first ab initio calculations of the single differential cross section [29].

  15. Electron-impact ionization of atomic hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baertschy, Mark D.

    2000-02-01

    Since the invention of quantum mechanics, even the simplest example of collisional breakup in a system of charged particles, e⁻ + H → H⁺ + e⁻ + e⁻, has stood as one of the last unsolved fundamental problems in atomic physics. A complete solution requires calculating the energies and directions for a final state in which three charged particles are moving apart. Advances in the formal description of three-body breakup have yet to lead to a viable computational method. Traditional approaches, based on two-body formalisms, have been unable to produce differential cross sections for the three-body final state. Now, by using a mathematical transformation of the Schrödinger equation that makes the final state tractable, a complete solution has finally been achieved. Under this transformation, the scattering wave function can be calculated without imposing explicit scattering boundary conditions. This approach has produced the first triple differential cross sections that agree on an absolute scale with experiment as well as the first ab initio calculations of the single differential cross section.

  16. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, and geometry of construction objects, together with unit costs of sports facilities, is shown. Calculations with the Index Cost Estimate Based BIM method, using Case Based Reasoning, are also presented. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result of the calculations.
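The local/global similarity measurement mentioned above is, in typical Case Based Reasoning form, a weighted aggregation of per-attribute similarities. A minimal sketch follows; the attribute names, weights, value ranges, and case values are invented for illustration and are not taken from the paper.

```python
# CBR retrieval sketch: local similarity per attribute, global similarity
# as a weighted average (nearest-neighbour measure).  All data hypothetical.

def local_similarity(a, b, value_range):
    """Numeric local similarity in [0, 1]: 1 minus range-normalized distance."""
    return 1.0 - abs(a - b) / value_range

def global_similarity(query, case, ranges, weights):
    """Weighted average of local similarities over shared attributes."""
    total_w = sum(weights.values())
    return sum(
        weights[k] * local_similarity(query[k], case[k], ranges[k])
        for k in query
    ) / total_w

query = {"area_m2": 1800, "seats": 500}       # hypothetical new sports field
case = {"area_m2": 2000, "seats": 800}        # hypothetical stored case
ranges = {"area_m2": 4000, "seats": 2000}     # assumed attribute value ranges
weights = {"area_m2": 0.7, "seats": 0.3}      # assumed attribute weights

print(f"similarity = {global_similarity(query, case, ranges, weights):.3f}")
```

The most similar stored cases then supply the unit costs from which the index cost estimate is assembled.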

  17. Compensation of Horizontal Gravity Disturbances for High Precision Inertial Navigation

    PubMed Central

    Cao, Juliang; Wu, Meiping; Lian, Junxiang; Cai, Shaokun; Wang, Lin

    2018-01-01

    Horizontal gravity disturbances are an important factor affecting the accuracy of inertial navigation systems in long-duration ship navigation. In this paper, the effects of horizontal gravity disturbances on the initial alignment and the navigation calculation are analyzed together from the perspective of the coordinate system and vector calculation. Horizontal gravity disturbances cause the navigation coordinate frame built in initial alignment to be inconsistent with the navigation coordinate frame in which the navigation calculation is implemented. This mismatch of coordinate frames violates the vector calculation law, which has an adverse effect on the precision of the inertial navigation system. To address this issue, two compensation methods suitable for two different navigation coordinate frames are proposed: one implements the compensation in the velocity calculation, and the other in the attitude calculation. Finally, simulations and ship navigation experiments confirm the effectiveness of the proposed methods. PMID:29562653

  18. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials excited states properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  19. Speckle noise suppression method in holographic display using time multiplexing

    NASA Astrophysics Data System (ADS)

    Liu, Su-Juan; Wang, Di; Li, Song-Jie; Wang, Qiong-Hua

    2017-06-01

    We propose a method to suppress the speckle noise in holographic display using time multiplexing. The diffractive optical elements (DOEs) and the sub-computer-generated holograms (sub-CGHs) are generated, respectively. The final image is reconstructed using time multiplexing of the subimages and the final subimages. Meanwhile, the speckle noise of the final image is suppressed by reducing the coherence of the reconstructed light and separating the adjacent image points in space. Compared with the pixel separation method, the experiments demonstrate that the proposed method suppresses the speckle noise effectively with a smaller calculation burden and a lower demand on the frame rate of the spatial light modulator. In addition, as the number of DOEs and sub-CGHs increases, the speckle noise is further suppressed.

  20. Computer-assisted uncertainty assessment of k0-NAA measurement results

    NASA Astrophysics Data System (ADS)

    Bučar, T.; Smodiš, B.

    2008-10-01

    In quantifying the measurement uncertainty of results obtained by k0-based neutron activation analysis (k0-NAA), a number of parameters should be considered and appropriately combined in deriving the final budget. To facilitate this process, a program ERON (ERror propagatiON) was developed, which computes uncertainty propagation factors from the relevant formulae and calculates the combined uncertainty. The program calculates the uncertainty of the final result, the mass fraction of an element in the measured sample, taking into account the relevant neutron flux parameters such as α and f, including their uncertainties. Nuclear parameters and their uncertainties are taken from the IUPAC database (V.P. Kolotov and F. De Corte, Compilation of k0 and related data for NAA). Furthermore, the program allows for uncertainty calculations of the measured parameters needed in k0-NAA: α (determined with either the Cd-ratio or the Cd-covered multi-monitor method), f (using the Cd-ratio or the bare method), Q0 (using the Cd-ratio or internal comparator method) and k0 (using the Cd-ratio, internal comparator or the Cd subtraction method). The results of calculations can be printed or exported to text or MS Excel format for further analysis. Special care was taken to make the calculation engine portable by allowing its incorporation into other applications (e.g., DLL and WWW server). The theoretical basis and the program are described in detail, and typical results obtained under real measurement conditions are presented.
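The combining step such a program automates is standard first-order (GUM-style) propagation: for independent inputs, the squared combined uncertainty is the sum of squared sensitivity-coefficient times input-uncertainty terms. A minimal sketch; the example budget entries are made up and are not the actual k0-NAA formulae or values.

```python
import math

def combined_uncertainty(terms):
    """Root-sum-square combination of (sensitivity_coefficient, input_uncertainty) pairs,
    assuming independent inputs (first-order GUM propagation)."""
    return math.sqrt(sum((c * u) ** 2 for c, u in terms))

# Hypothetical relative-uncertainty budget (coefficient, uncertainty):
budget = [
    (1.0, 0.015),  # e.g. counting statistics
    (1.0, 0.010),  # e.g. k0 nuclear data
    (0.8, 0.012),  # e.g. flux parameter f, with partial sensitivity
]
print(f"combined relative uncertainty: {combined_uncertainty(budget):.4f}")
```

The laboratory-to-laboratory differences the surrounding records discuss come not from this formula but from which terms (blank, background, standard scatter) are included and how their uncertainties are estimated.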

  1. Innovative methods for calculation of freeway travel time using limited data : final report.

    DOT National Transportation Integrated Search

    2008-01-01

    Description: Travel time estimates created by processing simulated freeway loop detector data with the proposed method were compared with travel times reported by the VISSIM model. An improved methodology was proposed to estimate freeway corrido...

  2. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services

    PubMed Central

    Rajabi, A; Dabiri, A

    2012-01-01

    Background Activity Based Costing (ABC) is one of the newer costing methodologies, which began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used to calculate the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage that cost objects made of the services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. Conclusion: The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method. ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
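The allocation chain described in the Methods (resource costs flow to activity centers via cost drivers, then to cost objects by usage) can be sketched in a few lines. The departments, driver shares, and figures below are hypothetical, not data from Shahid Faghihi Hospital.

```python
# Minimal Activity Based Costing allocation sketch (illustrative values only).

admin_cost = 120_000.0  # hypothetical administrative activity-center cost

# Cost-driver shares: fraction of administrative activity each operational
# center consumes (e.g. share of staff hours).
driver_share = {"diagnostic": 0.4, "hospitalized": 0.6}

direct_cost = {"diagnostic": 300_000.0, "hospitalized": 500_000.0}
services_delivered = {"diagnostic": 10_000, "hospitalized": 4_000}

for center in direct_cost:
    # Step 3 of the abstract: allocate administrative costs by cost driver,
    # then divide total cost by the number of services (cost objects).
    total = direct_cost[center] + admin_cost * driver_share[center]
    unit_cost = total / services_delivered[center]
    print(f"{center}: unit cost price = {unit_cost:.2f}")
```

A fixed tariff, by contrast, would ignore both the driver-based allocation and the actual service volumes, which is the gap the study quantifies.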

  3. Synthesis of calculational methods for design and analysis of radiation shields for nuclear rocket systems

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.; Jordan, T. A.; Soltesz, R. G.; Woodsum, H. C.

    1969-01-01

    Eight computer programs make up a nine volume synthesis containing two design methods for nuclear rocket radiation shields. The first design method is appropriate for parametric and preliminary studies, while the second accomplishes the verification of a final nuclear rocket reactor design.

  4. Computational techniques in tribology and material science at the atomic level

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.

  5. Stability of disclination loop in pure twist nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Kadivar, Erfan

    2018-04-01

    In this work, the annihilation dynamics and stability of a disclination loop in a bulk pure twist nematic liquid crystal are investigated. The work is based on the Frank free energy and the nematodynamics equations. The energy dissipation is calculated using two methods: in the first, it is obtained from the Frank free energy; in the second, it is calculated using the nematodynamics equations. Finally, we derive a critical radius for the disclination loop above which loop creation is energetically forbidden.

  6. Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method

    NASA Astrophysics Data System (ADS)

    Yuanyue, Yang; Huimin, Li

    2018-02-01

    Large investment, long routes, and many change orders are among the main causes of cost overruns in long-distance water diversion projects. Building on existing research, this paper constructs a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of every risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method can evaluate risks accurately and with high reliability. Case calculation and verification show that it can provide valid cost overrun decision-making information to construction companies.

  7. Generalization of the Mulliken-Hush treatment for the calculation of electron transfer matrix elements

    NASA Astrophysics Data System (ADS)

    Cave, Robert J.; Newton, Marshall D.

    1996-01-01

    A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.

  8. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often to within ~1 millihartree or less, while requiring significantly less computational resources.

  9. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few millihartree and often to within ~1 millihartree or less, while requiring significantly less computational resources.

  10. Ensuring the validity of calculated subcritical limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally, subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.

  11. Stability of streamwise vortices

    NASA Technical Reports Server (NTRS)

    Khorrami, M. K.; Grosch, C. E.; Ash, R. L.

    1987-01-01

    A brief overview of some theoretical and computational studies of the stability of streamwise vortices is given. The local induction model and classical hydrodynamic vortex stability theories are discussed in some detail. The importance of the three-dimensionality of the mean velocity profile to the results of stability calculations is discussed briefly. The mean velocity profile is obtained from the similarity solution of Donaldson and Sullivan. The global method of Bridges and Morris was chosen for the spatial stability calculations of the nonlinear eigenvalue problem. To test the numerical method, a second-order accurate central difference scheme was used to obtain the coefficient matrices. It was shown that a second-order finite difference method lacks the required accuracy for global eigenvalue calculations. Finally, the problem was formulated using spectral methods and a truncated Chebyshev series.
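    The abstract's point, that low-order finite differences are too inaccurate for global eigenvalue calculations while a truncated Chebyshev series works well, can be illustrated with a standard Chebyshev differentiation matrix. This generic sketch solves u'' = λu with Dirichlet conditions on (-1, 1), not the vortex stability equations; the grid size n is an illustrative choice.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and grid x on the
    n+1 Chebyshev points (standard spectral collocation)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))       # exact on constants
    return D, x

# global eigenvalue sketch: u'' = lam*u on (-1, 1), u(+-1) = 0;
# the exact eigenvalues are -(k*pi/2)**2
n = 36
D, x = cheb(n)
D2 = (D @ D)[1:-1, 1:-1]              # impose Dirichlet BCs by deletion
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
# lam[0] is close to -(pi/2)**2, i.e. about -2.467
```

With roughly n points, the spectral matrix resolves the low eigenvalues to near machine precision, whereas a second-order difference scheme of the same size carries O(h^2) errors in every eigenvalue.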

  12. Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope

    NASA Astrophysics Data System (ADS)

    Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.

    2016-03-01

    Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for the shaped Cassegrain antenna is presented in this paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted with Non-Uniform Rational B-Splines (NURBS). Finally, the optical path difference calculation is implemented, and the extension of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. This work supports the real-time measurement and adjustment of the TM structure. The approach is general and can serve as a reference for the optical path difference calculation of other radio telescopes with shaped surfaces.

  13. Multilevel fast multipole method based on a potential formulation for 3D electromagnetic scattering problems.

    PubMed

    Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome

    2013-06-01

    A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large-scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a gold-silver nanoparticle heterodimer calculated with MLFMM is compared with unmodified BEM calculations.

  14. Evaluation of Neutron-induced Cross Sections and their Related Covariances with Physical Constraints

    NASA Astrophysics Data System (ADS)

    De Saint Jean, C.; Archier, P.; Privas, E.; Noguère, G.; Habert, B.; Tamagno, P.

    2018-02-01

    Nuclear data, along with numerical methods and the associated calculation schemes, continue to play a key role in reactor design, reactor core operating parameters calculations, fuel cycle management and criticality safety calculations. Due to the intensive use of Monte-Carlo calculations reducing numerical biases, the final accuracy of neutronic calculations increasingly depends on the quality of nuclear data used. This paper gives a broad picture of all ingredients treated by nuclear data evaluators during their analyses. After giving an introduction to nuclear data evaluation, we present implications of using the Bayesian inference to obtain evaluated cross sections and related uncertainties. In particular, a focus is made on systematic uncertainties appearing in the analysis of differential measurements as well as advantages and drawbacks one may encounter by analyzing integral experiments. The evaluation work is in general done independently in the resonance and in the continuum energy ranges giving rise to inconsistencies in evaluated files. For future evaluations on the whole energy range, we call attention to two innovative methods used to analyze several nuclear reaction models and impose constraints. Finally, we discuss suggestions for possible improvements in the evaluation process to master the quantification of uncertainties. These are associated with experiments (microscopic and integral), nuclear reaction theories and the Bayesian inference.

  15. Spinor helicity methods in high-energy factorization: Efficient momentum-space calculations in the Color Glass Condensate formalism

    NASA Astrophysics Data System (ADS)

    Ayala, Alejandro; Hentschinski, Martin; Jalilian-Marian, Jamal; Tejeda-Yeomans, Maria Elena

    2017-07-01

    We use the spinor helicity formalism to calculate the cross section for production of three partons of a given polarization in Deep Inelastic Scattering (DIS) off proton and nucleus targets at small Bjorken x. The target proton or nucleus is treated as a classical color field (shock wave) from which the produced partons scatter multiple times. We reported our result for the final expression for the production cross section and studied the azimuthal angular correlations of the produced partons in [1]. Here we provide the full details of the calculation of the production cross section using the spinor helicity methods.

  16. Calculation of photoionization differential cross sections using complex Gauss-type orbitals.

    PubMed

    Matsuzaki, Rei; Yabushita, Satoshi

    2017-09-05

    Accurate theoretical calculation of photoelectron angular distributions for general molecules is becoming an important tool to image various chemical reactions in real time. We show in this article that not only photoionization total cross sections but also photoelectron angular distributions can be accurately calculated using complex Gauss-type orbital (cGTO) basis functions. Our method can be easily combined with existing quantum chemistry techniques, including electron correlation effects, and applied to various molecules. The so-called two-potential formula is applied to represent the transition dipole moment from an initial bound state to a final continuum state in the molecular coordinate frame. The two required continuum functions, the zeroth-order final continuum state and the first-order wave function induced by the photon field, have been variationally obtained using the complex basis function method with a mixture of appropriate cGTOs and conventional real Gauss-type orbitals (GTOs) to represent the continuum orbitals as well as the remaining bound orbitals. The complex orbital exponents of the cGTOs are optimized by fitting to the outgoing Coulomb functions. The efficiency of the current method is demonstrated through calculations of the asymmetry parameters and molecular-frame photoelectron angular distributions of H2+ and H2. In the calculations of H2, the static exchange and random phase approximations are employed, and the dependence of the results on the basis functions is discussed. © 2017 Wiley Periodicals, Inc.

  17. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  18. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    PubMed Central

    Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh

    2017-01-01

    In wireless sensor networks, sensor nodes collect plenty of data for each time period. If all of data are transmitted to a Fusion Center (FC), the power of sensor node would run out rapidly. On the other hand, the data also needs a filter to remove the noise. Therefore, an efficient fusion estimation model, which can save the energy of the sensor nodes while maintaining higher accuracy, is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is firstly reduced efficiently while keeping the estimation accuracy. Then, the parameters in quantization method are discussed, and we confirm them by an optimization method with some prior knowledge. Besides, some calculation methods of important parameters are researched which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which can improve the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with the other related models, the MHEEFE shows a better performance in accuracy, energy-efficiency and fault tolerance. PMID:29280950
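    As a simple illustration of weighted multi-sensor fusion, the sketch below uses classic inverse-variance weighting, a standard building block rather than the MHEEFE model itself: each sensor's estimate is weighted by the reciprocal of its noise variance, and the fused variance is smaller than any single sensor's.

```python
import numpy as np

def inverse_variance_fusion(estimates, variances):
    """Classic inverse-variance weighted fusion of scalar sensor
    estimates: weight each estimate by 1/variance (normalized)."""
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    w /= w.sum()
    fused = np.dot(w, np.asarray(estimates, dtype=float))
    fused_var = 1.0 / np.sum(1.0 / v)   # always <= min(variances)
    return fused, fused_var

# three sensors reading the same quantity with different noise levels
est, var = inverse_variance_fusion([1.0, 1.2, 0.8], [0.1, 0.2, 0.4])
```

The least noisy sensor dominates the fused estimate, which is the intuition that more elaborate H2/H∞ fusion schemes refine under energy and robustness constraints.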

  19. Journal of Chinese Society of Astronautics (Selected Articles),

    DTIC Science & Technology

    1983-03-10

    This collection of translated articles includes "Calculation of Minimum Entry Heat Transfer Shape of a Space Vehicle" by Zhou Qicheng, which shows that the minimum entry heat transfer shape under specified fineness ratio and total vehicle weight conditions can be obtained using a variational method.

  20. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
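    A minimal Metropolis sampler for a classical one-particle system, in the spirit of the chapter's examples; the harmonic potential, inverse temperature, and step size here are illustrative choices. The chain accepts a trial move with probability min(1, p(x')/p(x)), which is importance sampling of the Boltzmann distribution.

```python
import math
import random

def metropolis_harmonic(beta, n_steps, step=1.0, seed=1):
    """Metropolis sampling of the Boltzmann distribution
    p(x) ~ exp(-beta * x**2 / 2) for a harmonic potential.
    Returns the sampled estimate of <x^2> (exact value: 1/beta)."""
    rng = random.Random(seed)
    x = 0.0
    total = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # accept with probability min(1, p(x_new)/p(x))
        if rng.random() < math.exp(-beta * (x_new**2 - x**2) / 2):
            x = x_new
        total += x * x
    return total / n_steps

avg_x2 = metropolis_harmonic(beta=2.0, n_steps=200_000)  # close to 0.5
```

Because only the ratio p(x')/p(x) is needed, the normalization constant of the distribution never has to be computed, which is exactly why the method scales to many-particle systems.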

  1. Nudged elastic band method and density functional theory calculation for finding a local minimum energy pathway of p-benzoquinone and phenol fragmentation in mass spectrometry.

    PubMed

    Sugimura, Natsuhiko; Igarashi, Yoko; Aoyama, Reiko; Shibue, Toshimichi

    2017-02-01

    Analysis of the fragmentation pathways of molecules in mass spectrometry gives a fundamental insight into gas-phase ion chemistry. However, the conventional intrinsic reaction coordinates method requires knowledge of the transition-state ion structures in the fragmentation pathways. Herein, we use the nudged elastic band method, which requires only the initial and final state ion structures in the fragmentation pathways, and report the advantages and limitations of the method. We found a minimum energy path of p-benzoquinone ion fragmentation with two saddle points and one intermediate structure. The primary energy barrier, which corresponded to the cleavage of the C-C bond adjacent to the CO group, was calculated to be 1.50 eV. An additional energy barrier, which corresponded to the cleavage of the CO group, was calculated to be 0.68 eV. We also found an energy barrier of 3.00 eV, which was the rate determining step of the keto-enol tautomerization in CO elimination from the molecular ion of phenol. The nudged elastic band method allowed the determination of a minimum energy path using only the initial and final state ion structures in the fragmentation pathways, and it proved faster than the conventional intrinsic reaction coordinates method. In addition, this method was found to be effective in the analysis of the charge structures of the molecules during fragmentation in mass spectrometry.
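    The nudged elastic band idea can be sketched on an invented 2-D model potential rather than the paper's DFT setup: a chain of images between the two endpoint minima is relaxed under the perpendicular component of the true force plus a tangential spring force, and the highest image approximates the barrier. The potential, spring constant, and step size below are assumptions for demonstration only.

```python
import numpy as np

def V(p):
    """Invented 2-D double well: minima at (+-1, 0), a curved valley
    floor y = 0.3*(1 - x**2), and a saddle of height 1.0 at (0, 0.3)."""
    x, y = p
    g = y - 0.3 * (1 - x**2)
    return (x**2 - 1) ** 2 + 2 * g**2

def grad(p):
    x, y = p
    g = y - 0.3 * (1 - x**2)
    return np.array([4 * x * (x**2 - 1) + 2.4 * x * g, 4 * g])

def neb(start, end, n_images=11, k=5.0, step=0.01, n_iter=3000):
    """Minimal nudged elastic band: interior images feel only the
    component of the true force perpendicular to the path tangent
    plus a spring force along the tangent; endpoints stay fixed."""
    path = np.linspace(start, end, n_images)
    for _ in range(n_iter):
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)
            f_true = -grad(path[i])
            f_perp = f_true - (f_true @ tau) * tau
            f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
            path[i] = path[i] + step * (f_perp + f_spring)
    return path

path = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
barrier = max(V(p) for p in path)   # highest image sits near the saddle
```

Starting from the straight line y = 0, the band bends into the curved valley and the highest image lands near the saddle, recovering the barrier height of about 1.0 without ever guessing a transition-state structure.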

  2. Identifying strategies to assist final semester nursing students to develop numeracy skills: a mixed methods study.

    PubMed

    Ramjan, Lucie M; Stewart, Lyn; Salamonson, Yenna; Morris, Maureen M; Armstrong, Lyn; Sanchez, Paula; Flannery, Liz

    2014-03-01

    It remains a grave concern that many nursing students within tertiary institutions continue to experience difficulties with achieving medication calculation competency. In addition, universities have a moral responsibility to prepare proficient clinicians for graduate practice. This requires risk management strategies to reduce adverse medication errors post registration. To identify strategies and potential predictors that may assist nurse academics to tailor their drug calculation teaching and assessment methods. This project builds on previous experience and explores students' perceptions of newly implemented interventions designed to increase confidence and competence in medication calculation. This mixed method study surveyed students (n=405) enrolled in their final semester of study at a large, metropolitan university in Sydney, Australia. Tailored, contextualised interventions included online practice quizzes, simulated medication calculation scenarios developed for clinical practice classes, contextualised 'pen and paper' tests, visually enhanced didactic remediation and 'hands-on' contextualised workshops. Surveys were administered to students to determine their perceptions of interventions and to identify whether these interventions assisted with calculation competence. Test scores were analysed using SPSS v. 20 for correlations between students' perceptions and actual performance. Qualitative open-ended survey questions were analysed manually and thematically. The study reinforced that nursing students preferred a 'hands-on,' contextualised approach to learning that was 'authentic' and aligned with clinical practice. Our interventions assisted with supporting students' learning and improvement of calculation confidence. Qualitative data provided further insight into students' awareness of their calculation errors and preferred learning styles. 
Some of the strongest predictors of numeracy skill performance included (1) being an international student, (2) completing an online practice quiz with a score of 59% or above, and (3) students' self-reported confidence. A paradigm shift from traditional testing methods to the implementation of intensive, contextualised numeracy teaching and assessment within tertiary institutions will enhance learning and promote best teaching practices. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Alcohol-related hot-spot analysis and prediction : final report.

    DOT National Transportation Integrated Search

    2017-05-01

    This project developed methods to more accurately identify alcohol-related crash hot spots, ultimately allowing for more effective and efficient enforcement and safety campaigns. Advancements in accuracy came from improving the calculation of spatial...

  4. The calculation of steady non-linear transonic flow over finite wings with linear theory aerodynamics

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The feasibility of calculating steady mean flow solutions for nonlinear transonic flow over finite wings with a linear theory aerodynamic computer program is studied. The methodology is based on independent solutions for upper and lower surface pressures that are coupled through the external flow fields. Two approaches for coupling the solutions are investigated which include the diaphragm and the edge singularity method. The final method is a combination of both where a line source along the wing leading edge is used to account for blunt nose airfoil effects; and the upper and lower surface flow fields are coupled through a diaphragm in the plane of the wing. An iterative solution is used to arrive at the nonuniform flow solution for both nonlifting and lifting cases. Final results for a swept tapered wing in subcritical flow show that the method converges in three iterations and gives excellent agreement with experiment at alpha = 0 deg and 2 deg. Recommendations are made for development of a procedure for routine application.

  5. Calculation of subsonic and supersonic steady and unsteady aerodynamic forces using velocity potential aerodynamic elements

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.; Yoo, Y. S.

    1976-01-01

    Expressions for calculation of subsonic and supersonic, steady and unsteady aerodynamic forces are derived, using the concept of aerodynamic elements applied to the downwash velocity potential method. Aerodynamic elements can be of arbitrary out of plane polygon shape, although numerical calculations are restricted to rectangular elements, and to the steady state case in the supersonic examples. It is suggested that the use of conforming, in place of rectangular elements, would give better results. Agreement with results for subsonic oscillating T tails is fair, but results do not converge as the number of collocation points is increased. This appears to be due to the form of expression used in the calculations. The methods derived are expected to facilitate automated flutter analysis on the computer. In particular, the aerodynamic element concept is consistent with finite element methods already used for structural analysis. The method is universal for the complete Mach number range, and, finally, the calculations can be arranged so that they do not have to be repeated completely for every reduced frequency.

  6. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has a poorly adaptive threshold and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and a Euclidean-distance-based filter are applied to the image; secondly, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to partial gradient amplitudes to obtain threshold values, the average of all calculated thresholds is taken, half of that average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that this new method can effectively suppress noise disturbance, keep the edge information, and also improve the edge detection accuracy.
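    The threshold-selection step described above can be sketched as follows; the block partitioning and histogram bin count are assumptions, since the abstract does not specify them. Otsu's method picks the threshold maximizing between-class variance, and the paper's scheme averages per-block Otsu thresholds, then sets high = average / 2 and low = high / 2.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: the threshold that maximizes between-class
    variance of a 1-D sample (e.g. gradient amplitudes)."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                  # class-0 weight
    w1 = 1.0 - w0                         # class-1 weight
    mu0 = np.cumsum(hist * centers)
    mu_t = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

def canny_thresholds(grad_mag, n_blocks=4):
    """(high, low) hysteresis thresholds per the paper's scheme:
    Otsu on partial blocks of the gradient magnitude, average the
    per-block thresholds, then high = avg / 2 and low = high / 2."""
    blocks = np.array_split(grad_mag.ravel(), n_blocks)
    avg = np.mean([otsu_threshold(b) for b in blocks])
    high = avg / 2
    return high, high / 2

# synthetic bimodal "gradient" image: weak background vs strong edges
rng = np.random.default_rng(0)
vals = rng.permutation(np.concatenate([rng.normal(0.2, 0.05, 500),
                                       rng.normal(0.8, 0.05, 500)]))
grad = vals.reshape(20, 50)
t = otsu_threshold(grad.ravel())          # falls between the two modes
high, low = canny_thresholds(grad)
```

Deriving both hysteresis thresholds from the image itself is what makes the scheme adaptive, in contrast to the fixed thresholds of the classic Canny detector.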

  7. [A computer tomography assisted method for the automatic detection of region of interest in dynamic kidney images].

    PubMed

    Jing, Xueping; Zheng, Xiujuan; Song, Shaoli; Liu, Kai

    2017-12-01

    Glomerular filtration rate (GFR), which can be estimated by the Gates method with dynamic kidney single photon emission computed tomography (SPECT) imaging, is a key indicator of renal function. In this paper, an automatic computed tomography (CT)-assisted detection method for the kidney region of interest (ROI) is proposed to achieve objective and accurate GFR calculation. In this method, the CT coronal projection image and the enhanced SPECT synthetic image are first generated and registered together. Then, the kidney ROIs are delineated using a modified level set algorithm. Meanwhile, the background ROIs are also obtained based on the kidney ROIs. Finally, the value of GFR is calculated via the Gates method. The GFR values estimated by the proposed method were consistent with the clinical reports. This automatic method can improve the accuracy and stability of kidney ROI detection for GFR calculation, especially when kidney function has been severely damaged.

  8. Investigation of thermal protection systems effects on viscid and inviscid flow fields for manned entry systems

    NASA Technical Reports Server (NTRS)

    Bartlett, E. P.; Morse, H. L.; Tong, H.

    1971-01-01

    Procedures and methods for predicting aerothermodynamic heating of delta orbiter shuttle vehicles were reviewed. A number of approximate methods were found to be adequate for large scale parameter studies, but are considered inadequate for final design calculations. It is recommended that final design calculations be based on a computer code which accounts for nonequilibrium chemistry, streamline spreading, entropy swallowing, and turbulence. It is further recommended that this code be developed with the intent that it can be directly coupled with an exact inviscid flow field calculation when the latter becomes available. A nonsimilar, equilibrium chemistry computer code (BLIMP) was used to evaluate the effects of entropy swallowing, turbulence, and various three dimensional approximations. These solutions were compared with available wind tunnel data. It was found that, for wind tunnel conditions, the effects of entropy swallowing and three-dimensionality are small for laminar boundary layers, but entropy swallowing causes a significant increase in turbulent heat transfer. However, it is noted that even small effects (say, 10-20%) may be important for the shuttle reusability concept.

  9. Deconstructing Calculation Methods, Part 4: Division

    ERIC Educational Resources Information Center

    Thompson, Ian

    2008-01-01

    In the final article of a series of four, the author deconstructs the primary national strategy's approach to written division. The approach to division is divided into five stages: (1) mental division using partition; (2) short division of TU / U; (3) "expanded" method for HTU / U; (4) short division of HTU / U; and (5) long division.…

  10. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. Then the scatter is added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory were processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM, and SVM schemes are 2.49%, 5.83%, and 6.49%, respectively.

  11. Electron- and positron-impact atomic scattering calculations using propagating exterior complex scaling

    NASA Astrophysics Data System (ADS)

    Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.

    2007-11-01

    Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = - (8 + (r1 - r2)2)-1/2. This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.

  12. Traffic Data Quality Measurement : Final Report

    DOT National Transportation Integrated Search

    2004-09-15

    One of the foremost recommendations from the FHWA sponsored workshops on Traffic Data Quality (TDQ) in 2003 was a call for "guidelines and standards for calculating data quality measures." These guidelines and standards are expected to contain method...

  13. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. Final-enhanced images generally have the best diagnostic quality and give more detail about the visibility of vessels and structures in capsule endoscopy images.
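    A sketch of the half-unit weighted bilinear averaging for one image channel, under the assumption that a half-unit weight in both directions reduces each overlapping 2x2 group to its plain mean (the paper's exact overlapping scheme may differ):

```python
import numpy as np

def half_unit_enhance(channel):
    """Half-unit weighted bilinear enhancement sketch for one RGB
    channel: with weights of 0.5 in both directions, the bilinear
    interpolant of a 2x2 group is its plain average; averaging
    overlapping groups gives the preliminary image, and the final
    image is the mean of the original and preliminary images."""
    c = channel.astype(float)
    # edge-pad so every pixel belongs to a full 2x2 group
    p = np.pad(c, ((0, 1), (0, 1)), mode="edge")
    # half-unit bilinear weight on a 2x2 group == its mean
    prelim = (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:]) / 4
    return (c + prelim) / 2

flat = np.full((4, 4), 10.0)
out = half_unit_enhance(flat)   # a flat region is left unchanged
```

Averaging the original with the smoothed preliminary image is what tempers the contrast change, so flat regions pass through unchanged while local structure is softened rather than replaced.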

  14. Efficient and accurate modeling of electron photoemission in nanostructures with TDDFT

    NASA Astrophysics Data System (ADS)

    Wopperer, Philipp; De Giovannini, Umberto; Rubio, Angel

    2017-03-01

    We derive and extend the time-dependent surface-flux method introduced in [L. Tao, A. Scrinzi, New J. Phys. 14, 013021 (2012)] within a time-dependent density-functional theory (TDDFT) formalism and use it to calculate photoelectron spectra and angular distributions of atoms and molecules when excited by laser pulses. We present other, existing computational TDDFT methods that are suitable for the calculation of electron emission in compact spatial regions, and compare their results. We illustrate the performance of the new method by simulating strong-field ionization of C60 fullerene and discuss final state effects in the orbital reconstruction of planar organic molecules.

  15. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Fei; Zhen, Zhao; Liu, Chun

    Irradiance received on the earth's surface is the main factor that affects the output power of solar PV plants, and is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. It is the foundation for linear extrapolation-based ultra-short-term solar PV power forecasting approaches to obtain the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated by a Fourier phase correlation theory (FPCT) method through the frequency domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) with common changes in cloud speed. Sometimes, there will be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak value pulse. However, the deformation estimation of clouds between two images and its influence on FPCT-based CMDV calculations are extremely complex and difficult because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute time scale solar power forecasting. First, multiple different CMDVs are calculated from the corresponding consecutive image pairs obtained through different synchronous rotation angles compared to the original images by using the FPCT method.
Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on its density and distance distribution. Third, the influence of different rotation angle resolution on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios including both thick and thin clouds conditions indicated that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, optimal flow (OF) method, and particle image velocimetry (PIV) method.« less

  16. Image phase shift invariance based cloud motion displacement vector calculation method for ultra-short-term solar PV power forecasting

    DOE PAGES

    Wang, Fei; Zhen, Zhao; Liu, Chun; ...

    2017-12-18

    Irradiance received on the earth's surface is the main factor affecting the output power of solar PV plants, and it is chiefly determined by the cloud distribution seen in a ground-based sky image at the corresponding moment in time. Linear extrapolation-based ultra-short-term solar PV power forecasting approaches therefore rest on obtaining the cloud distribution in future sky images from the accurate calculation of cloud motion displacement vectors (CMDVs) using historical sky images. Theoretically, the CMDV can be obtained from the coordinate of the peak pulse calculated by a Fourier phase correlation theory (FPCT) method from the frequency-domain information of sky images. The peak pulse is significant and unique only when the cloud deformation between two consecutive sky images is slight enough, which is likely for a very short time interval (such as 1 min or shorter) with common changes in cloud speed. There can, however, be more than one pulse with similar values when the deformation of the clouds between two consecutive sky images is comparatively obvious under fast-changing cloud speeds. This would probably lead to significant errors if the CMDVs were still obtained only from the single coordinate of the peak-value pulse. However, estimating the deformation of clouds between two images and its influence on FPCT-based CMDV calculations is highly complex and difficult, because the motion of clouds is complicated to describe and model. Therefore, to improve the accuracy and reliability under these circumstances in a simple manner, an image-phase-shift-invariance (IPSI) based CMDV calculation method using FPCT is proposed for minute-timescale solar power forecasting. First, multiple CMDVs are calculated with the FPCT method from consecutive image pairs obtained by rotating the original images through different synchronous rotation angles.
Second, the final CMDV is generated from all of the calculated CMDVs through a centroid iteration strategy based on their density and distance distribution. Third, the influence of different rotation-angle resolutions on the final CMDV is analyzed as a means of parameter estimation. Simulations under various scenarios, including both thick- and thin-cloud conditions, indicated that the proposed IPSI-based CMDV calculation method using FPCT is more accurate and reliable than the original FPCT method, the optical flow (OF) method, and the particle image velocimetry (PIV) method.
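
The core FPCT step described above can be sketched with a generic phase-correlation routine; this is a minimal illustration (not the authors' code), and the IPSI rotation-averaging and centroid-iteration steps are omitted:

```python
import numpy as np

def cmdv_phase_correlation(img1, img2):
    """Estimate the displacement of img2 relative to img1 via Fourier phase
    correlation: the normalized cross-power spectrum of a shifted image pair
    is a pure phase ramp whose inverse FFT is a pulse at the displacement."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = img1.shape                       # map wrapped indices to signed shifts
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)

# synthetic check: a "cloud field" circularly shifted by (dy, dx) = (4, -7)
rng = np.random.default_rng(0)
sky1 = rng.random((64, 64))
sky2 = np.roll(sky1, shift=(4, -7), axis=(0, 1))
print(cmdv_phase_correlation(sky1, sky2))     # -> (4, -7)
```

When the two frames differ only by a near-rigid shift, the correlation surface has a single sharp peak; the IPSI scheme addresses exactly the case where deformation blurs or splits this peak.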

  17. Thermal calculations pertaining to experiments in the Yucca Mountain Exploratory Shaft

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montan, D.N.

    1986-03-01

    A series of thermal calculations have been presented that appear to satisfy the needs for the design of the Yucca Mountain Exploratory Shaft Tests. The accuracy of the modeling and calculational techniques employed probably exceeds the accuracy of the thermal properties used. The rather close agreement between simple analytical methods (the PLUS Family) and much more complex methods (TRUMP) suggests that the PLUS Family might be appropriate during final design to model, in a single calculation, the entire test array and sequence. Before doing further calculations, it is recommended that all available thermal property information be critically evaluated to determine "best" values to be used for conductivity and saturation. Another possibility is to design one or more of the test sequences to approximately duplicate the early phase of Heater Test 1. In that experiment, an unplanned power outage of about two days that occurred a week into the experiment gave extremely useful data from which to determine the conductivity and diffusivity. In any case, we urge that adequate, properly calibrated instrumentation with data output available on a quasi-real-time basis be installed. This would allow us to take advantage of significant power changes (planned or not) and also help "steer" the tests to desired temperatures. Finally, it should be kept in mind that the calculations presented here are strictly thermal. No hydrothermal effects due to liquid and vapor pressures have been considered.

  18. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Walton, E.; Aebker, E.; Mata, F.; Reilly, C.

    1991-01-01

    The final phase of a satellite synthesis project is described. Several methods for generating satellite positionings with improved aggregate carrier-to-interference characteristics were studied. Two general methods for modifying required separation values are presented. Also, two methods for improving the aggregate carrier-to-interference (C/I) performance of given satellite synthesis solutions are presented. A perturbation of the World Administrative Radio Conference (WARC) synthesis is presented.

  19. The orbital evolution of the AMOR asteroidal group during 11,550 years

    NASA Astrophysics Data System (ADS)

    Babadzhanov, P. B.; Zausaev, A. F.; Pushkaryov, A. N.

    The orbital evolution of twenty-seven Amor asteroids was determined by the Everhart method for the time interval from 2250 AD to 9300 BC. The closest encounters with the terrestrial planets were calculated over the course of the evolution. Stable resonances with Venus, Earth and Jupiter over the period from 2250 AD to 9300 BC have been obtained. Theoretical coordinates of radiants at the initial and final moments of integration were calculated.

  20. Control of electromagnetic stirring by power focusing in large induction crucible furnaces

    NASA Astrophysics Data System (ADS)

    Frizen, V. E.; Sarapulov, F. N.

    2011-12-01

    An approach is proposed for calculating the operating conditions of an induction crucible furnace at the final stage of melting, with the power focused in various regions of the melted metal. The calculation is performed using a model based on the method of detailed magnetic equivalent circuits. The combination of the furnace and a thyristor frequency converter is taken into account in the modeling.

  1. Direct Discrete Method for Neutronic Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid

    The objective of this paper is to introduce a new direct method for neutronic calculations. This method, named the Direct Discrete Method, is simpler than solving the neutron transport equation and is more compatible with the physical meaning of problems. It is based on the physics of the problem: after meshing the desired geometry, the balance equation is written for each mesh interval, and, by accounting for the coupling between these mesh intervals, the final series of discrete equations is produced directly, without deriving the neutron transport differential equation and without the mandatory passage over that differential-equation bridge. We have produced neutron discrete equations for a cylindrical shape with two boundary conditions in one energy group. The correctness of the results of this method is tested against MCNP-4B code executions. (authors)
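
The cell-by-cell balance idea can be illustrated on a simpler analogue than the paper's cylindrical one-group problem: a 1-D slab with diffusion, absorption and a uniform source, where each row of the linear system is written directly as a cell balance (all names and values below are invented for illustration):

```python
import numpy as np

# Direct discrete balance for a 1-D, one-group slab: for every mesh cell,
# (net current out through the two faces) + (absorption) = (source).
N, L = 50, 10.0                 # mesh intervals, slab width (cm)
h = L / N
D, sig_a, S = 1.0, 0.1, 1.0     # diffusion coeff., absorption, uniform source

A = np.zeros((N, N))
b = np.full(N, S * h)           # source term of each cell's balance
for i in range(N):
    for j in (i - 1, i + 1):    # net currents through the two cell faces
        if 0 <= j < N:
            A[i, i] += D / h
            A[i, j] -= D / h
        else:                   # zero flux on the outer surfaces (h/2 away)
            A[i, i] += 2 * D / h
    A[i, i] += sig_a * h        # absorption inside the cell
phi = np.linalg.solve(A, b)     # the discrete equations, solved directly

print(phi.max() < S / sig_a)    # flux bounded by infinite-medium value -> True
```

No differential equation is ever written down; the tridiagonal system is assembled purely from the per-cell balances.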

  2. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

    An automated method for tunnel deformation monitoring using high-density point cloud data is presented. Firstly, the 3D point cloud data are projected onto the XOY plane to form a two-dimensional surface. The projection of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm. The projection of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface; Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest-neighbor algorithm, and the initial cross-sectional point set is quickly constructed by projection. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. In order to improve the accuracy of the cross section, a fine-adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections have become flattened circles, rather than regular circles, due to the great pressure at the top of the tunnel.
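
The section-fitting step can be sketched with a simple algebraic circle fit and a per-angle radial-deviation profile (a stand-in for the paper's iterative ellipse fit; the geometry below is synthetic):

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic least-squares circle fit:
    minimises || (x^2 + y^2) - (2ax + 2by + c) || over a, b, c."""
    M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)      # centre (a, b) and radius

# noisy scan of a tunnel cross section of radius 2.75 m centred at (1, -2)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r_true = 2.75 + 0.002 * rng.standard_normal(t.size)
x = 1.0 + r_true * np.cos(t)
y = -2.0 + r_true * np.sin(t)

cx, cy, r = fit_circle(x, y)
# radial deviation per angle, i.e. the 0-360 degree deformation profile
deform = np.hypot(x - cx, y - cy) - r
```

On a deformed (flattened) section, `deform` would show the systematic positive/negative lobes the abstract describes.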

  3. Solving of the coefficient inverse problems for a nonlinear singularly perturbed reaction-diffusion-advection equation with the final time data

    NASA Astrophysics Data System (ADS)

    Lukyanenko, D. V.; Shishlenin, M. A.; Volkov, V. T.

    2018-01-01

    We propose a numerical method for solving the coefficient inverse problem for a nonlinear singularly perturbed reaction-diffusion-advection equation with final-time observation data, based on asymptotic analysis and the gradient method. Asymptotic analysis allows us to extract a priori information about the interior layer (moving front) that appears in the direct problem and the boundary layers that appear in the conjugate problem. We describe and implement a method of constructing a dynamically adapted mesh based on this a priori information. The dynamically adapted mesh significantly reduces the complexity of the numerical calculations and improves the numerical stability in comparison with the usual approaches. A numerical example shows the effectiveness of the proposed method.
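
The mesh-adaptation idea can be conveyed by a toy 1-D refinement loop that concentrates nodes where a profile has a steep interior layer; this is only a schematic sketch of "adapt where the front is", not the authors' scheme:

```python
import numpy as np

def adapt_mesh(u, a, b, n0=17, tol=0.05, passes=8):
    """Refine a uniform mesh by bisecting every interval across which the
    profile u changes by more than tol (crude equidistribution of variation)."""
    x = np.linspace(a, b, n0)
    for _ in range(passes):
        jumps = np.abs(np.diff(u(x)))
        mids = 0.5 * (x[:-1] + x[1:])[jumps > tol]
        if mids.size == 0:
            break
        x = np.sort(np.concatenate([x, mids]))
    return x

front = lambda x: np.tanh(60.0 * (x - 0.3))   # moving-front-like profile
mesh = adapt_mesh(front, 0.0, 1.0)
near = np.sum(np.abs(mesh - 0.3) < 0.05)      # nodes near the interior layer
far = np.sum(np.abs(mesh - 0.8) < 0.05)       # nodes in a flat region
print(near > far)                             # -> True
```

In the paper the front location comes from asymptotic analysis rather than from the solution itself, but the payoff is the same: resolution only where it is needed.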

  4. Quantum chemical calculations of glycine glutaric acid

    NASA Astrophysics Data System (ADS)

    Arioǧlu, ćaǧla; Tamer, Ömer; Avci, Davut; Atalay, Yusuf

    2017-02-01

    Density functional theory (DFT) calculations of glycine glutaric acid were performed at the B3LYP level with the 6-311++G(d,p) basis set. The theoretical structural parameters, such as bond lengths and bond angles, are in good agreement with the experimental values of the title compound. HOMO and LUMO energies were calculated, and the obtained energy gap shows that charge transfer occurs in the title compound. Vibrational frequencies were calculated and compared with experimental ones. 3D molecular surfaces of the title compound were simulated using the same level and basis set. Finally, the 13C and 1H NMR chemical shift values were calculated by the application of the gauge-independent atomic orbital (GIAO) method.

  5. Calculation of rates of exciton dissociation into hot charge-transfer states in model organic photovoltaic interfaces

    NASA Astrophysics Data System (ADS)

    Vázquez, Héctor; Troisi, Alessandro

    2013-11-01

    We investigate the process of exciton dissociation in ordered and disordered model donor/acceptor systems and describe a method to calculate exciton dissociation rates. We consider a one-dimensional system with Frenkel states in the donor material and states where charge transfer has taken place between donor and acceptor. We introduce a Green's function approach to calculate the generation rates of charge-transfer states. For disorder in the Frenkel states we find a clear exponential dependence of charge dissociation rates with exciton-interface distance, with a distance decay constant β that increases linearly with the amount of disorder. Disorder in the parameters that describe (final) charge-transfer states has little effect on the rates. Exciton dissociation invariably leads to partially separated charges. In all cases final states are “hot” charge-transfer states, with electron and hole located far from the interface.

  6. Brief communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Cloutier, Catherine; Crosta, Giovanni B.; Lévy, Sébastien

    2016-04-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a temporal-spatial probability, i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do, however, consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method considering an impact on the front of the vehicle is discussed.
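
The recommended combined-dimensions calculation amounts to a one-line formula; a sketch with made-up traffic numbers:

```python
def expected_vehicles_in_path(aadt, speed_kmh, slide_width_m, vehicle_len_m):
    """Expected number of vehicles struck when the mass falls: the traffic
    flow times the time window during which some part of a vehicle overlaps
    the slide path, counting BOTH the slide width and the vehicle length."""
    flow_per_s = aadt / 86400.0                                  # vehicles/s
    window_s = (slide_width_m + vehicle_len_m) / (speed_kmh / 3.6)
    return flow_per_s * window_s

# e.g. 10 000 vehicles/day at 72 km/h, a 20 m wide slide, 5 m long vehicles
print(expected_vehicles_in_path(10_000, 72.0, 20.0, 5.0))   # ~0.145
```

Dropping either `slide_width_m` or `vehicle_len_m` reproduces the simpler single-dimension estimates the abstract criticises.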

  7. Brief Communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, P.; Jaboyedoff, M.; Cloutier, C.; Crosta, G. B.; Lévy, S.

    2015-12-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a "spatio-temporal probability", i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do, however, consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method that additionally considers an impact on the front of the vehicle is discussed.

  8. Magnetic properties of vanadium doped CdTe: Ab initio calculations

    NASA Astrophysics Data System (ADS)

    Goumrhar, F.; Bahmad, L.; Mounkachi, O.; Benyoussef, A.

    2017-04-01

    In this paper, we apply ab initio calculations to study the magnetic properties of vanadium-doped CdTe. This study is based on the Korringa-Kohn-Rostoker (KKR) method combined with the coherent potential approximation (CPA), within the local density approximation (LDA); this combination is called KKR-CPA-LDA. We have calculated and plotted the density of states (DOS) in the energy diagram for different dopant concentrations. We have also investigated the magnetic and half-metallic properties of this compound and shown the mechanism of the exchange interaction. Moreover, we have estimated the Curie temperature Tc for different concentrations. Finally, we have shown how the crystal field and the exchange splittings vary as a function of the concentrations.

  9. Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-06-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.
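
The flavour of the method can be conveyed by its textbook application to the parallel-plate Casimir energy, where a divergent sum over mode numbers is assigned the analytically continued value of the zeta function (a standard illustration, not specific to this talk):

```latex
% The mode sum is read as a zeta function continued to negative argument:
\zeta(s) = \sum_{n=1}^{\infty} n^{-s}
\quad\Longrightarrow\quad
\sum_{n=1}^{\infty} n^{3} \;\longrightarrow\; \zeta(-3) = \tfrac{1}{120},
% which, with the remaining kinematic factors, yields the finite Casimir
% energy per unit plate area at separation a:
\qquad
\frac{E}{A} = -\frac{\pi^{2}\hbar c}{720\, a^{3}} .
```

The "limitations" mentioned in the abstract concern cases where the continuation is ambiguous or where finite renormalizations must be fixed by other means.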

  10. Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-07-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  11. Scattering matrix of arbitrary tight-binding Hamiltonians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramírez, C., E-mail: carlos@ciencias.unam.mx; Medina-Amayo, L.A.

    2017-03-15

    A novel, efficient method to calculate the scattering matrix (SM) of arbitrary tight-binding Hamiltonians is proposed, including cases with multiterminal structures. In particular, the SM of two kinds of fundamental structures is given, which can be used to obtain the SM of bigger systems iteratively. Also, a procedure to obtain the SM of layer-composed periodic leads is described. This method allows renormalization approaches, which permit computations over macroscopic-length systems without introducing additional approximations. Finally, the transmission coefficient of a ring-shaped multiterminal system and the transmission function of a square-lattice nanoribbon with a reduced-width region are calculated.
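
For flavour, here is the smallest possible tight-binding transmission calculation: one site coupled to two semi-infinite 1-D leads, handled with lead self-energies. This illustrates the machinery such SM methods build on, not the authors' iterative composition scheme:

```python
import numpy as np

def transmission(E, eps0=0.0, t=1.0):
    """Landauer transmission through a single site of on-site energy eps0
    coupled with hopping t to two semi-infinite 1-D leads (band |E| < 2t)."""
    if abs(E) >= 2 * t:
        return 0.0                                    # outside the lead band
    sigma = (E - 1j * np.sqrt(4 * t**2 - E**2)) / 2   # lead surface self-energy
    G = 1.0 / (E - eps0 - 2 * sigma)                  # retarded Green's function
    gamma = -2.0 * sigma.imag                         # level broadening per lead
    return float(gamma * gamma * abs(G) ** 2)

print(transmission(0.7))             # perfect chain: T = 1 inside the band
print(transmission(1.0, eps0=1.0))   # scattering off the site: T = 3/4
```

Closed form for this toy case: T(E) = (4t² − E²) / (eps0² + 4t² − E²), so the site is transparent only when eps0 = 0.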

  12. Current status of computational methods for transonic unsteady aerodynamics and aeroelastic applications

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Malone, John B.

    1992-01-01

    The current status of computational methods for unsteady aerodynamics and aeroelasticity is reviewed. The key features of challenging aeroelastic applications are discussed in terms of the flowfield state: low-angle, high-speed flows and high-angle, vortex-dominated flows. The critical role played by viscous effects in determining aeroelastic stability for conditions of incipient flow separation is stressed. The need for a variety of flow-modeling tools, from linear formulations to implementations of the Navier-Stokes equations, is emphasized. Estimates of computer run times for flutter calculations using several computational methods are given. Applications of these methods to unsteady aerodynamic and transonic flutter calculations for airfoils, wings, and configurations are summarized. Finally, recommendations are made concerning future research directions.

  13. Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector

    NASA Astrophysics Data System (ADS)

    Wang, Hongbin; Feng, Yinhan; Cheng, Liang

    2018-03-01

    Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition and question answering systems. Thai is a resource-scarce language: unlike Chinese, it lacks resources such as HowNet and CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method to compute the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic-structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, the two measures are combined to calculate the overall similarity of the two Thai sentences. The proposed method considers not only semantics but also sentence syntactic structure. The experimental results show that the method is feasible for Thai sentence similarity computation.
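
The combination step can be sketched generically; the POS sequences and word vectors below are toy stand-ins for the paper's Thai resources, and the equal weighting is an assumption:

```python
import numpy as np
from difflib import SequenceMatcher

def syntactic_sim(pos1, pos2):
    """Similarity of two POS-tag sequences (stand-in for POS dependency)."""
    return SequenceMatcher(None, pos1, pos2).ratio()

def semantic_sim(v1, v2):
    """Cosine similarity of (e.g. averaged) sentence word vectors."""
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def sentence_sim(pos1, pos2, v1, v2, alpha=0.5):
    """Weighted combination of syntactic and semantic similarity."""
    return alpha * syntactic_sim(pos1, pos2) + (1 - alpha) * semantic_sim(v1, v2)

pos_a, vec_a = ["N", "V", "N"], np.array([0.9, 0.1, 0.4])
pos_b, vec_b = ["N", "V", "ADJ", "N"], np.array([0.8, 0.2, 0.5])
print(sentence_sim(pos_a, pos_a, vec_a, vec_a))  # identical sentences -> 1.0
print(sentence_sim(pos_a, pos_b, vec_a, vec_b))  # similar but not identical
```

Tuning `alpha` trades off how much the syntactic channel versus the semantic channel drives the final score.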

  14. Measurement of the WW + WZ production cross section using the lepton + jets final state at CDF II.

    PubMed

    Aaltonen, T; Adelman, J; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauer, G; Beauchemin, P-H; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, K; Chung, W H; Chung, Y S; Chwalek, T; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; d'Errico, M; Di Canto, A; di Giovanni, G P; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Dube, S; Ebina, K; Elagin, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerdes, D; Gessler, A; 
Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hartz, M; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Hughes, R E; Hurwitz, M; Husemann, U; Hussein, M; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kulkarni, N P; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Lovas, L; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; 
McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Neubauer, S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramanov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Rutherford, B; Saarikko, H; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Simonenko, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Suh, J S; Sukhanov, A; Suslov, I; Taffard, A; Takashima, R; 
Takeuchi, Y; Tanaka, R; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Tipton, P; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Trovato, M; Tsai, S-Y; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volobouev, I; Volpi, G; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wagner-Kuhr, J; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Weinelt, J; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wolfe, H; Wright, T; Wu, X; Würthwein, F; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhang, X; Zheng, Y; Zucchelli, S

    2010-03-12

    We report two complementary measurements of the WW + WZ cross section in the final state consisting of an electron or muon, missing transverse energy, and jets, performed using pp̄ collision data at √s = 1.96 TeV collected by the CDF II detector. The first method uses the dijet invariant mass distribution, while the second, more sensitive method uses matrix-element calculations. The result from the second method has a signal significance of 5.4σ and is the first observation of WW + WZ production using this signature. Combining the results gives σ(WW + WZ) = 16.0 ± 3.3 pb, in agreement with the standard model prediction.

  15. First principles statistical mechanics of alloys and magnetism

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai

    Modern high-performance computing resources are enabling the exploration of the statistical physics of phase spaces of increasing size and with higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of density-functional-based first-principles calculations with classical Monte Carlo methods for parameter-free, predictive thermodynamics of materials. We combine our locally self-consistent real-space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally, we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
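
The Wang-Landau half of the WL-LSMS pairing can be illustrated on a toy model whose density of states is known exactly: a flat-histogram walk over the energies of a 1-D Ising ring (the first-principles LSMS energy evaluation is replaced here by the trivial Ising sum):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12                                   # spins on a ring, J = 1
levels = np.arange(-N, N + 1, 4)         # allowed energies of the ring
idx = {int(E): i for i, E in enumerate(levels)}

ln_g = np.zeros(levels.size)             # running estimate of ln g(E)
hist = np.zeros(levels.size)
f = 1.0                                  # modification factor (added to ln g)

s = rng.choice([-1, 1], N)
E = int(-np.sum(s * np.roll(s, 1)))      # nearest-neighbour Ising energy
while f > 1e-3:
    for _ in range(20000):
        k = int(rng.integers(N))
        dE = int(2 * s[k] * (s[k - 1] + s[(k + 1) % N]))
        if np.log(rng.random()) < ln_g[idx[E]] - ln_g[idx[E + dE]]:
            s[k] *= -1                   # accept with prob min(1, g(E)/g(E'))
            E += dE
        ln_g[idx[E]] += f
        hist[idx[E]] += 1
    if hist.min() > 0.8 * hist.mean():   # histogram flat enough:
        hist[:] = 0                      # reset it and refine f
        f /= 2

# exact answer: g(-N + 2m) = 2 * C(N, m) for an even number m of broken bonds,
# so ln g(-8) - ln g(-12) = ln C(12, 2) = ln 66
print(ln_g[1] - ln_g[0])                 # ~ 4.19
```

In WL-LSMS the same random walk runs over magnetic or chemical configurations, with each energy supplied by a DFT calculation instead of a spin sum.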

  16. A New Method for Calculating the Fractal Dimension of Surface Topography

    NASA Astrophysics Data System (ADS)

    Zuo, Xue; Zhu, Hua; Zhou, Yuankai; Li, Yan

    2015-06-01

    A new method, termed the three-dimensional root-mean-square (3D-RMS) method, is proposed to calculate the fractal dimension (FD) of machined surfaces. The measure of this method is the root-mean-square value of the surface data, and the scale is the side length of a square in the projection plane. In order to evaluate the calculation accuracy of the proposed method, isotropic surfaces with deterministic FD are generated based on the fractional Brownian function and the Weierstrass-Mandelbrot (WM) fractal function, and two kinds of anisotropic surfaces are generated by stretching or rotating a WM fractal curve. Their FDs are estimated by the proposed method, as well as by the differential box-counting (DBC) method, the triangular prism surface area (TPSA) method and the variation method (VM). The results show that the 3D-RMS method performs better than the other methods, with a lower relative error for both isotropic and anisotropic surfaces, especially for surfaces with dimensions higher than 2.5, since the relative error between the estimated value and its theoretical value decreases with the theoretical FD. Finally, an electrodeposited surface, an end-turning surface and a grinding surface are chosen as examples to illustrate the application of the 3D-RMS method to real machined surfaces. This method gives a new way to accurately calculate the FD from surface topographic data.
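
The measure/scale idea can be sketched end to end: synthesize an approximately fractal surface by spectral synthesis, take the RMS height within square windows of growing side length, and read the FD off the log-log slope. This is a schematic reimplementation of the idea, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

def fbm_surface(n, H):
    """Approximate fractional Brownian surface via spectral synthesis."""
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                  # avoid division by zero
    spec = k ** -(H + 1.0) * np.exp(2j * np.pi * rng.random((n, n)))
    z = np.fft.ifft2(spec).real
    return z / z.std()

def fd_rms(z, scales):
    """3D-RMS idea: RMS roughness in windows of side s scales like s**(3-FD)."""
    rms = [np.mean([z[i:i + s, j:j + s].std()
                    for i in range(0, z.shape[0] - s + 1, s)
                    for j in range(0, z.shape[1] - s + 1, s)])
           for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(rms), 1)
    return 3.0 - slope

z = fbm_surface(256, H=0.5)                        # nominal FD = 3 - H = 2.5
print(fd_rms(z, [4, 8, 16, 32, 64]))
```

The estimate recovers a dimension between 2 and 3, close to the nominal 2.5 for this synthetic surface.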

  17. Target deception jamming method against spaceborne synthetic aperture radar using electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Sun, Qingyang; Shu, Ting; Tang, Bin; Yu, Wenxian

    2018-01-01

    A method is proposed to perform target deception jamming against spaceborne synthetic aperture radar. Compared with traditional jamming methods that use deception templates to cover the target or region of interest, the proposed method aims to generate a verisimilar deceptive target in various attitudes with high fidelity using electromagnetic (EM) scattering. Based on the geometrical model for target deception jamming, the EM scattering data of the deceptive target are first simulated with EM calculation software. Then, the proposed jamming frequency response (JFR) is calculated offline by further processing. Finally, the deception jamming is achieved in real time by multiplying the proposed JFR with the spectrum of the intercepted radar signals. A practical implementation is presented. The simulation results prove the validity of the proposed method.
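
The online stage reduces to a single spectral multiply; a toy sketch where the "precomputed JFR" is just a delay (a real JFR would encode the deceptive target's EM scattering, which is the expensive offline part):

```python
import numpy as np

def apply_jfr(intercepted, jfr):
    """Real-time jamming step: multiply the intercepted pulse spectrum by the
    precomputed jamming frequency response and transform back."""
    return np.fft.ifft(np.fft.fft(intercepted) * jfr)

n = 256
f = np.fft.fftfreq(n)                        # cycles per sample
delay = 10                                   # toy JFR: add 10 samples of range
jfr = np.exp(-2j * np.pi * f * delay)

pulse = np.zeros(n)
pulse[5] = 1.0                               # intercepted radar pulse
fake = apply_jfr(pulse, jfr)
print(int(np.argmax(np.abs(fake))))          # -> 15: echo moved down-range
```

Because the multiply is O(n log n) per pulse, the expensive scattering computation never sits on the real-time path.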

  18. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

    In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; then the Choquet integral is introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
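
    The aggregation step can be sketched with the standard discrete Choquet integral (a generic textbook formulation, not the paper's index system; the capacity values below are made up for illustration):

```python
def choquet(values, capacity):
    """Discrete Choquet integral of criterion scores with respect to a
    capacity (a monotone set function with capacity[frozenset()] == 0)."""
    order = sorted(range(len(values)), key=lambda i: values[i])  # ascending
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])   # criteria scoring at least values[i]
        total += (values[i] - prev) * capacity[coalition]
        prev = values[i]
    return total

# Two-criterion example with an additive capacity (weights 0.3 and 0.7),
# for which the Choquet integral reduces to a plain weighted sum
cap = {frozenset(): 0.0, frozenset({0}): 0.3,
       frozenset({1}): 0.7, frozenset({0, 1}): 1.0}
score = choquet([0.5, 0.2], cap)   # 0.5*0.3 + 0.2*0.7 = 0.29
```

    With a non-additive capacity the integral also rewards or penalizes interactions between criteria, which is the reason for using it over a simple weighted average.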

  19. Structures in solutions from joint experimental-computational analysis: applications to cyclic molecules and studies of noncovalent interactions.

    PubMed

    Aliev, Abil E; Mia, Zakirin A; Khaneja, Harmeet S; King, Frank D

    2012-01-26

    The potential of an approach combining nuclear magnetic resonance (NMR) spectroscopy, molecular dynamics (MD) simulations, and quantum mechanical (QM) calculations for full structural characterizations in solution is assessed using cyclic organic compounds, namely, benzazocinone derivatives 1-3 with fused five- and eight-membered aliphatic rings, camphoric anhydride 4, and bullvalene 5. Various MD simulations were considered, using force field and semiempirical QM treatments, implicit and explicit solvation, and high-temperature MD calculations for selecting plausible molecular geometries for subsequent QM geometry optimizations using mainly B3LYP, M062X, and MP2 methods. The QM-predicted values of NMR parameters were compared to their experimental values for verification of the final structures derived from the MD/QM analysis. From these comparisons, initial estimates of quality thresholds (calculated as rms deviations) were 0.7-0.9 Hz for (3)J(HH) couplings, 0.07-0.11 Å for interproton distances, 0.05-0.08 ppm for (1)H chemical shifts, and 1.0-2.1 ppm for (13)C chemical shifts. The obtained results suggest that the accuracy of the MD analysis in predicting geometries and relative conformational energies is not critical and that the final geometry refinements of the structures selected from the MD simulations using QM methods are sufficient for correcting for the expected inaccuracy of the MD analysis. A unique example of C(sp(3))-H···N(sp(3)) intramolecular noncovalent interaction is also identified using the NMR/MD/QM and the natural bond orbital analyses. As the NMR/MD/QM approach relies on the final QM geometry optimization, comparisons of geometric characteristics predicted by different QM methods and those from X-ray and neutron diffraction measurements were undertaken using rigid and flexible cyclic systems. The joint analysis shows that intermolecular noncovalent interactions present in the solid state alter molecular geometries significantly compared to the geometries of isolated molecules from QM calculations.

  20. The orbital evolution of the Apollo asteroid group over 11,550 years

    NASA Astrophysics Data System (ADS)

    Zausaev, A. F.; Pushkarev, A. N.

    1992-08-01

    The Everhart method was used to monitor the orbital evolution of 20 Apollo asteroids in the time interval from 2250 A.D. to 9300 B.C. The closest encounters with the major planets during the evolution are calculated. Stable resonances with Venus and Earth over the period from 2250 A.D. to 9300 B.C. are obtained. Theoretical coordinates of the radiants at the initial and final moments of integration are calculated.

  1. Threshold of transverse mode coupling instability with arbitrary space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balbekov, V.

    The threshold of the transverse mode coupling instability is calculated in the framework of the square well model at arbitrary values of the space charge tune shift. A new method of calculation is developed beyond the traditional expansion technique. Square, resistive, and exponential wakes are investigated. It is shown that the instability threshold goes up indefinitely as the tune shift increases. Finally, a comparison with the conventional case of the parabolic potential well is performed.

  2. Threshold of transverse mode coupling instability with arbitrary space charge

    DOE PAGES

    Balbekov, V.

    2017-11-30

    The threshold of the transverse mode coupling instability is calculated in the framework of the square well model at arbitrary values of the space charge tune shift. A new method of calculation is developed beyond the traditional expansion technique. Square, resistive, and exponential wakes are investigated. It is shown that the instability threshold goes up indefinitely as the tune shift increases. Finally, a comparison with the conventional case of the parabolic potential well is performed.

  3. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte Carlo method (method of statistical trials) as applied to meteor observation processing was developed in the author's Ph.D. thesis in 2005 and first used in his work in 2008. The idea of the method is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of the meteor - which has the highest influence on the precision of the calculated heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first, its direction is obtained as the vector product of the poles of the great circles of the meteor trajectory, calculated from the two observing stations; then the absolute value of the velocity is calculated independently from each station, and one of the two values is selected, for some reason, as the final parameter. In the proposed method we instead obtain the statistical distribution of the absolute value of the velocity as the intersection of the two distributions corresponding to the velocity values obtained from the different stations. We expect this approach to substantially increase the precision of meteor velocity calculation and to remove subjective inaccuracies.
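
    A minimal numerical sketch of the final step (assuming, purely for illustration, Gaussian per-station velocity estimates): the "intersection" of the two distributions is their normalized pointwise product, whose mean is the inverse-variance-weighted combination of the two station values.

```python
import numpy as np

# Hypothetical per-station velocity estimates (km/s): mean and standard deviation
m1, s1 = 60.0, 2.0   # station 1
m2, s2 = 62.0, 1.0   # station 2

v = np.linspace(50.0, 70.0, 200001)   # velocity grid, km/s
pdf = lambda m, s: np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# "Intersection" of the two distributions: pointwise product, renormalized
p = pdf(m1, s1) * pdf(m2, s2)
p /= p.sum()
v_combined = (v * p).sum()   # equals the inverse-variance weighted mean, 61.6
```

    In the actual method the two densities would come from the Monte Carlo trials themselves rather than an assumed Gaussian form, but the combination step is the same.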

  4. Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints

    NASA Astrophysics Data System (ADS)

    Prince, E. R.; Carr, R. W.; Cobb, R. G.

    This paper details improvements made to the authors' most recent work on finding fuel-optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial-guess methodologies are developed for the low-fidelity-model nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range that satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel-optimal guidance for NMC injections while adhering to multiple lighting constraints.
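
    The CW targeting mentioned above rests on the closed-form Clohessy-Wiltshire solution for relative motion near a circular orbit. A sketch (standard textbook equations, not the authors' targeting code; the 1 km offset and GEO mean motion are illustrative values) propagating a drift-free NMC initial condition over one period:

```python
import numpy as np

def cw_propagate(state, n, t):
    """Closed-form Clohessy-Wiltshire propagation of a relative state
    [x, y, z, vx, vy, vz] (radial, along-track, cross-track) over time t,
    for target mean motion n."""
    x, y, z, vx, vy, vz = state
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        (4 - 3 * c) * x + s / n * vx + 2 / n * (1 - c) * vy,
        6 * (s - n * t) * x + y + 2 / n * (c - 1) * vx + (4 * s - 3 * n * t) / n * vy,
        z * c + vz / n * s,
        3 * n * s * x + c * vx + 2 * s * vy,
        6 * n * (c - 1) * x - 2 * s * vx + (4 * c - 3) * vy,
        -n * z * s + vz * c,
    ])

n_geo = 7.2722e-5                  # GEO mean motion, rad/s (approximate)
x0 = 1000.0                        # 1 km radial offset (illustrative)
nmc0 = np.array([x0, 0.0, 0.0, 0.0, -2 * n_geo * x0, 0.0])  # drift-free NMC
period = 2 * np.pi / n_geo
final = cw_propagate(nmc0, n_geo, period)   # closed orbit: returns to nmc0
```

    The drift-free condition vy0 = -2*n*x0 is what makes the relative ellipse close on itself, which is the defining property of an NMC.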

  5. Final state interactions and the transverse structure of the pion using non-perturbative eikonal methods

    DOE PAGES

    Gamberg, Leonard; Schlegel, Marc

    2010-01-18

    In the factorized picture of semi-inclusive hadronic processes, the naive time-reversal-odd parton distributions exist by virtue of the gauge link which renders them color gauge invariant. The link characterizes the dynamical effect of initial/final-state interactions of the active parton due to soft gluon exchanges with the target remnant. Though these interactions are non-perturbative, studies of final-state interactions have relied on a perturbative one-gluon approximation in Abelian models. We include higher-order contributions by applying non-perturbative eikonal methods incorporating color degrees of freedom in a calculation of the Boer-Mulders function of the pion. Lastly, using this framework we explore under what conditions the Boer-Mulders function can be described in terms of a factorization of final-state interactions and a spatial distribution in impact parameter space.

  6. The Earth: Experiments at School (La Terra: esperimenti a scuola)

    NASA Astrophysics Data System (ADS)

    Roselli, Alessandra; D'Amico, Angelalucia; Pisegna, Daniela; Palma, Francesco; di Nardo, Giustino; Cofini, Marika; Cerasani, Paolo; Cerratti, Valentina

    2006-02-01

    Easy but effective methods used in past centuries allow rediscovery and good knowledge of the planet Earth. The latitude of the station and the planetary radius were measured with Eratosthenes' method. The gravity acceleration obtained from the pendulum period was used to calculate the terrestrial mass and the density of the internal planetary layers. Finally, estimates of the density and geometrical thickness of the atmosphere complete the view of the planet's properties.
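
    The pendulum-to-planet chain of reasoning is short enough to write out (the pendulum length and period below are illustrative classroom values, not the article's measurements):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6      # mean Earth radius, m (obtainable via Eratosthenes' method)

# Gravity from a simple pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2*L/T^2
L, T = 1.0, 2.006          # pendulum length (m) and measured period (s)
g = 4 * math.pi ** 2 * L / T ** 2          # ~9.81 m/s^2

# Terrestrial mass and mean density from g = G*M/R^2
M = g * R ** 2 / G                          # ~6e24 kg
rho = M / (4 / 3 * math.pi * R ** 3)        # ~5500 kg/m^3
```

    Comparing the mean density (~5500 kg/m^3) with typical crustal rock (~2700 kg/m^3) already shows that the interior layers must be much denser, which is the article's point about internal structure.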

  7. An assessment of alternatives to the Eichleay formula for unabsorbed overhead in delay claims : final report.

    DOT National Transportation Integrated Search

    1988-01-01

    This report presents an assessment of the accuracy of alternatives available to calculate unabsorbed overhead in construction delay claims submitted by contractors. It reviews the alternatives available, concludes that the Eichleay method, used by ma...

  8. The Application of Selected Network Methods for Reliable and Safe Transport by Small Commercial Vehicles

    NASA Astrophysics Data System (ADS)

    Matuszak, Zbigniew; Bartosz, Michał; Barta, Dalibor

    2016-09-01

    The article characterizes two network methods: the critical path method (CPM) and the program evaluation and review technique (PERT). Using the example of an international furniture company's product, it presents the application of these methods to the transport of cargo (furniture elements). Moreover, the study provides diagrams for the transportation of cargo from the individual component producers to the final destination - the showroom. The calculations were based on the transportation of furniture elements by small commercial vehicles.
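
    The CPM forward/backward pass can be sketched in a few lines (a generic textbook implementation with a made-up toy transport network, not the article's data):

```python
def cpm(tasks):
    """Critical path method. tasks: {name: (duration, [predecessors])},
    listed so that every predecessor appears before its successors."""
    es, ef = {}, {}
    for name, (dur, preds) in tasks.items():                  # forward pass
        es[name] = max((ef[p] for p in preds), default=0.0)
        ef[name] = es[name] + dur
    finish = max(ef.values())
    lf = {name: finish for name in tasks}
    for name, (dur, preds) in reversed(list(tasks.items())):  # backward pass
        for p in preds:
            lf[p] = min(lf[p], lf[name] - dur)
    critical = [n for n in tasks if abs(lf[n] - ef[n]) < 1e-9]  # zero slack
    return finish, critical

# Toy transport schedule (durations in days); activity names are illustrative
tasks = {
    "load_parts_A":   (3.0, []),
    "load_parts_B":   (2.0, []),
    "drive_leg_1":    (4.0, ["load_parts_A"]),
    "customs":        (1.0, ["load_parts_A", "load_parts_B"]),
    "final_delivery": (2.0, ["drive_leg_1", "customs"]),
}
duration, critical = cpm(tasks)
```

    PERT differs mainly in replacing each fixed duration with a three-point (optimistic/most-likely/pessimistic) estimate before running the same passes.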

  9. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms produced by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.

  10. Milestoning with coarse memory

    NASA Astrophysics Data System (ADS)

    Hawk, Alexander T.

    2013-04-01

    Milestoning is a method used to calculate the kinetics of molecular processes occurring on timescales inaccessible to traditional molecular dynamics (MD) simulations. In the method, the phase space of the system is partitioned by milestones (hypersurfaces), trajectories are initialized on each milestone, and short MD simulations are performed to calculate transitions between neighboring milestones. Long trajectories of the system are then reconstructed with a semi-Markov process from the observed transition statistics. The procedure is typically justified by the assumption that trajectories lose memory between crossing successive milestones. Here we present Milestoning with Coarse Memory (MCM), a generalization of Milestoning that relaxes the memory loss assumption of conventional Milestoning. In the method, milestones are defined and sample transitions are calculated in the standard Milestoning way. Then, after it is clear where trajectories sample milestones, the milestones are broken up into distinct neighborhoods (clusters), and each sample transition is associated with two clusters: the cluster containing the coordinates the trajectory was initialized in, and the cluster (on the terminal milestone) containing the trajectory's final coordinates. Long trajectories of the system are then reconstructed with a semi-Markov process in an extended state space built from milestone and cluster indices. To test the method, we apply it to a process that is particularly ill-suited for Milestoning: the dynamics of a polymer confined to a narrow cylinder. We show that Milestoning calculations of both the mean first passage time and the mean transit time of reversal - which occurs when the end-to-end vector reverses direction - are significantly improved when MCM is applied. Finally, we note that the overhead of performing MCM on top of conventional Milestoning is negligible.
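
    The semi-Markov reconstruction that both Milestoning and MCM rest on can be sketched for the simplest observable, the mean first passage time (MFPT). This is a generic illustration of the kinetics step (without the clustering that MCM adds), with toy transition statistics:

```python
import numpy as np

# Milestone-to-milestone transition probabilities K[i, j] and mean lifetimes
# t[i] (mean time between hitting milestone i and the next milestone), as they
# would be estimated from the short trajectory fragments; values are toy data.
K = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])   # milestone 2 is the absorbing target
t = np.array([1.0, 1.0, 0.0])

# Semi-Markov MFPTs solve  tau = t + K @ tau  with tau[target] = 0:
# restrict to the non-absorbing milestones and solve (I - K) tau = t.
idx = [0, 1]
tau = np.zeros(3)
A = np.eye(len(idx)) - K[np.ix_(idx, idx)]
tau[idx] = np.linalg.solve(A, t[idx])   # tau[0] = 4, tau[1] = 3 (time units)
```

    MCM changes only the state space of this linear system: each state becomes a (milestone, cluster) pair, so the same solve runs on a larger but otherwise identical system.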

  11. Comparison of cross sections from the quasi-classical trajectory method and the j(z)-conserving centrifugal sudden approximation with accurate quantum results for an atom-rigid nonlinear polyatomic collision

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.

    1993-01-01

    We report the results of a series of calculations of state-to-state integral cross sections for collisions between O and nonvibrating H2O in the gas phase on a model nonreactive potential energy surface. The dynamical methods used include converged quantum mechanical scattering calculations, the j(z) conserving centrifugal sudden (j(z)-CCS) approximation, and quasi-classical trajectory (QCT) calculations. We consider three total energies 0.001, 0.002, and 0.005 E(h) and the nine initial states with rotational angular momentum less than or equal to 2 (h/2 pi). The j(z)-CCS approximation gives good results, while the QCT method can be quite unreliable for transitions to specific rotational sublevels. However, the QCT cross sections summed over final sublevels and averaged over initial sublevels are in better agreement with the quantum results.

  12. Incorporating Linear Synchronous Transit Interpolation into the Growing String Method: Algorithm and Applications.

    PubMed

    Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin

    2011-12-13

    The growing string method is a powerful tool for the systematic theoretical study of chemical reactions, allowing rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme used when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).

  13. Analysis of the dynamics of movement of the landing vehicle with an inflatable braking device on the final trajectory under the influence of wind load

    NASA Astrophysics Data System (ADS)

    Koryanov, V.; Kazakovtsev, V.; Harri, A.-M.; Heilimo, J.; Haukka, H.; Aleksashkin, S.

    2015-10-01

    This research work is devoted to the analysis of the angular motion of a landing vehicle (LV) with an inflatable braking device (IBD), taking into account the influence of the wind load at the final stage of the descent. Using methods for calculating the parameters of the angular motion of a landing vehicle with an inflatable braking device in the presence of small asymmetries, which can give rise to complex dynamic phenomena, the motion of the landing vehicle at the final stage of atmospheric flight is analyzed.

  14. Electronic propensity rules in Li-H+ collisions involving initial and/or final oriented states

    NASA Astrophysics Data System (ADS)

    Salas, P. J.

    2000-12-01

    Electronic excitation and capture processes are studied in collisions involving systems with only one active electron, such as the alkali (Li)-proton system, in the medium-energy region (0.1-15 keV). Using the semiclassical impact parameter method, the probabilities and the orientation parameter are calculated for transitions between initial and/or final oriented states. The results show a strong asymmetry in the probabilities depending on the orientation of the initial and/or final states. An intuitive view of the processes, by means of the concepts of propensity and velocity matching rules, is provided.

  15. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of the radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  16. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

    An efficient numerical algorithm is the key to accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method, but it was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to arbitrary Bravais lattices and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.

  17. Prediction of core level binding energies in density functional theory: Rigorous definition of initial and final state contributions and implications on the physical meaning of Kohn-Sham energies.

    PubMed

    Pueyo Bellafont, Noèlia; Bagus, Paul S; Illas, Francesc

    2015-06-07

    A systematic study of the N(1s) core level binding energies (BE's) in a broad series of molecules is presented employing Hartree-Fock (HF) and the B3LYP, PBE0, and LC-BPBE density functional theory (DFT) based methods with a near HF basis set. The results show that all these methods give reasonably accurate BE's with B3LYP being slightly better than HF but with both PBE0 and LCBPBE being poorer than HF. A rigorous and general decomposition of core level binding energy values into initial and final state contributions to the BE's is proposed that can be used within either HF or DFT methods. The results show that Koopmans' theorem does not hold for the Kohn-Sham eigenvalues. Consequently, Kohn-Sham orbital energies of core orbitals do not provide estimates of the initial state contribution to core level BE's; hence, they cannot be used to decompose initial and final state contributions to BE's. However, when the initial state contribution to DFT BE's is properly defined, the decompositions of initial and final state contributions given by DFT, with several different functionals, are very similar to those obtained with HF. Furthermore, it is shown that the differences of Kohn-Sham orbital energies taken with respect to a common reference do follow the trend of the properly calculated initial state contributions. These conclusions are especially important for condensed phase systems where our results validate the use of band structure calculations to determine initial state contributions to BE shifts.

  18. Predicting missing links via correlation between nodes

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhang, Yi-Cheng

    2015-10-01

    As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of the existence of a link between two nodes based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed so far. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high-order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
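
    The core idea can be sketched as follows (an illustrative reading of the abstract, not the paper's exact scoring pipeline): correlate rows of a path-count matrix, so that nodes with similar connectivity patterns score highly even without a direct link.

```python
import numpy as np

def pearson_scores(A, order=2):
    """Score node pairs by the Pearson correlation between rows of a
    path-count matrix A + A^2 + ... (up to `order`), so that similarity
    can reflect high-order paths, not just common neighbors."""
    P = np.zeros_like(A, dtype=float)
    M = np.eye(len(A))
    for _ in range(order):
        M = M @ A
        P += M
    return np.corrcoef(P)

# Square graph: nodes 0 and 1 share both neighbors (2 and 3) but are unlinked
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
S = pearson_scores(A)   # S[0, 1] is the score of the candidate link 0-1
```

    Here the unlinked pair (0, 1) gets a perfect score of 1 because their connectivity profiles are identical, while structurally dissimilar pairs score negatively.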

  19. Approach to optimization of low-power Stirling cryocoolers. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.B.; Radebaugh, R.; Daney, D.E.

    1983-01-01

    The authors describe a method for optimizing the design (shape of the displacer) of low-power Stirling cryocoolers relative to the power required to operate the systems. A variational calculation which includes static conduction, shuttle, and radiation losses, as well as regenerator inefficiency, has been completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped-displacer cryocoolers indicates reasonable agreement.

  20. Stabilization of time domain acoustic boundary element method for the exterior problem avoiding the nonuniqueness.

    PubMed

    Jang, Hae-Won; Ih, Jeong-Guon

    2013-03-01

    The time domain boundary element method (TBEM) to calculate the exterior sound field using the Kirchhoff integral has difficulties in non-uniqueness and exponential divergence. In this work, a method to stabilize TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at the additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers the exponential divergence due to the remaining unstable high order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example for a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.

  1. Collision-induced absorption with exchange effects and anisotropic interactions: theory and application to H2 - H2.

    PubMed

    Karman, Tijs; van der Avoird, Ad; Groenenboom, Gerrit C

    2015-02-28

    We discuss three quantum mechanical formalisms for calculating collision-induced absorption spectra. First, we revisit the established theory of collision-induced absorption, assuming distinguishable molecules which interact isotropically. Then, the theory is rederived incorporating exchange effects between indistinguishable molecules. It is shown that the spectrum can no longer be written as an incoherent sum of the contributions of the different spherical components of the dipole moment. Finally, we derive an efficient method to include the effects of anisotropic interactions in the computation of the absorption spectrum. This method calculates the dipole coupling on-the-fly, which allows for the uncoupled treatment of the initial and final states without the explicit reconstruction of the many-component wave functions. The three formalisms are applied to the collision-induced rotation-translation spectra of hydrogen molecules in the far-infrared. Good agreement with experimental data is obtained. Significant effects of anisotropic interactions are observed in the far wing.

  2. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

    Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, are nonlinear and multistage, so a nonlinear degradation model must be established. This study is based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gaussian filter) was used to improve the estimation of the model parameters. Then, to maximize the likelihood function for parameter estimation, a numerical optimization method, the simplex method, was used in a cyclic calculation. The modeling results show that the degradation mechanism changes during the degradation of high-speed motors. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from the life distributions at different times calculated by the multistage model.
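
    A multistage Wiener degradation path can be simulated with a few lines. This sketch uses a hard drift switch at the change point as a stand-in for the paper's smooth transition function; the drifts, change point, and noise level are made-up values.

```python
import numpy as np

def multistage_wiener(t, drifts, change_points, sigma, rng):
    """Simulate a Wiener degradation path whose drift switches at the given
    change points (hard switch; the paper uses a smooth transition function).
    t: increasing time grid; drifts has one entry per stage."""
    dt = np.diff(t)
    drift = np.full(len(dt), drifts[-1])
    bounds = [t[0]] + list(change_points) + [t[-1]]
    for mu, lo, hi in zip(drifts, bounds[:-1], bounds[1:]):
        drift[(t[:-1] >= lo) & (t[:-1] < hi)] = mu
    steps = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(dt))
    return np.concatenate([[0.0], np.cumsum(steps)])

t = np.linspace(0.0, 10.0, 1001)
path = multistage_wiener(t, drifts=[0.5, 2.0], change_points=[4.0],
                         sigma=0.0, rng=np.random.default_rng(0))
# With sigma = 0 the path is the piecewise-linear mean: 0.5*4 + 2.0*6 = 14 at t = 10
```

    Setting sigma > 0 adds the diffusion term, and fitting (mu_1, mu_2, change point, sigma) to observed paths by maximum likelihood is the estimation problem the abstract solves with the simplex method.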

  3. Total Longitudinal Moment Calculation and Reliability Analysis of Yacht Structures

    NASA Astrophysics Data System (ADS)

    Zhi, Wenzheng; Lin, Shaofen

    In order to check the reliability of a yacht built from FRP (Fiber Reinforced Plastic) materials, this paper analyzes the vertical forces and the calculation method for the overall longitudinal bending moment on the yacht. In particular, it focuses on the impact of speed on the still-water bending moment. Then, considering the mechanical properties of the hat-type stiffeners in composite materials, the ultimate bearing capacity of the yacht is worked out; finally, the reliability of the yacht is calculated using response surface methodology. The results can be used in yacht design and yacht operation.

  4. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. The research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. A sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706

  5. VOFTools - A software package of calculation tools for volume of fluid methods using general convex grids

    NASA Astrophysics Data System (ADS)

    López, J.; Hernández, J.; Gómez, P.; Faura, F.

    2018-02-01

    The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
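
    The kind of polyhedron volume computation the library provides can be illustrated with a generic divergence-theorem formula (note this is a different, simpler approach than VOFTools' own quadrilateral-decomposition/2D-projection formula, shown only to make the geometric task concrete):

```python
import numpy as np

def polyhedron_volume(vertices, faces):
    """Volume of a closed polyhedron via the divergence theorem:
    V = (1/6) * sum over face triangles of det[a, b, c], with each face
    given as a vertex-index loop ordered counter-clockwise seen from outside."""
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for face in faces:
        a = v[face[0]]
        for j in range(1, len(face) - 1):   # fan-triangulate the face
            b, c = v[face[j]], v[face[j + 1]]
            vol += np.dot(a, np.cross(b, c))
    return vol / 6.0

# Unit cube with outward-oriented faces
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
faces = [[0, 3, 2, 1], [4, 5, 6, 7], [0, 1, 5, 4],
         [2, 3, 7, 6], [0, 4, 7, 3], [1, 2, 6, 5]]
vol = polyhedron_volume(verts, faces)   # 1.0
```

    In a VOF context this routine would be applied to the cell polyhedron truncated by the reconstructed PLIC plane, which is exactly the operation the library's truncation routines perform.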

  6. Slope Stability Analysis of Waste Dump in Sandstone Open Pit Osielec

    NASA Astrophysics Data System (ADS)

    Adamczyk, Justyna; Cała, Marek; Flisiak, Jerzy; Kolano, Malwina; Kowalski, Michał

    2013-03-01

    This paper presents the slope stability analysis for the current as well as the projected (final) geometry of the waste dump at the Sandstone Open Pit "Osielec". Six cross-sections were selected for the stability analysis. The final geometry of the waste dump was then designed and its stability analyzed. On the basis of the analysis results, opportunities to improve the stability of the structure were identified. The next issue addressed in the paper is the proportion of the mixture of mining and processing wastes for which the waste dump remains stable. Stability calculations were carried out using the Janbu method, which belongs to the family of limit equilibrium methods.

  7. Genetic Interaction Score (S-Score) Calculation, Clustering, and Visualization of Genetic Interaction Profiles for Yeast.

    PubMed

    Roguev, Assen; Ryan, Colm J; Xu, Jiewei; Colson, Isabelle; Hartsuiker, Edgar; Krogan, Nevan

    2018-02-01

    This protocol describes computational analysis of genetic interaction screens, ranging from data capture (plate imaging) to downstream analyses. Plate imaging approaches using both digital camera and office flatbed scanners are included, along with a protocol for the extraction of colony size measurements from the resulting images. A commonly used genetic interaction scoring method, calculation of the S-score, is discussed. These methods require minimal computer skills, but some familiarity with MATLAB and Linux/Unix is a plus. Finally, an outline for using clustering and visualization software for analysis of resulting data sets is provided. © 2018 Cold Spring Harbor Laboratory Press.
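    As a rough illustration of the kind of score involved: the S-score compares mutant colony sizes against controls with a modified t-statistic. The sketch below is a plain Welch-type t-statistic and omits the variance bounding used in the published S-score, so it should be read as a simplified stand-in.

```python
# Simplified genetic-interaction score: a Welch-type t-statistic
# between mutant (experimental) and control colony sizes. The
# published S-score additionally bounds the sample variances; this
# minimal sketch omits that refinement.
from math import sqrt
from statistics import mean, variance

def s_score(exp_sizes, ctrl_sizes):
    n_e, n_c = len(exp_sizes), len(ctrl_sizes)
    var_e, var_c = variance(exp_sizes), variance(ctrl_sizes)
    return (mean(exp_sizes) - mean(ctrl_sizes)) / sqrt(var_e / n_e + var_c / n_c)

# Hypothetical colony sizes: slightly larger mutant colonies.
score = s_score([10, 12, 11, 13], [10, 11, 12, 11])
```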

  8. Effective emissivities of isothermal blackbody cavities calculated by the Monte Carlo method using the three-component bidirectional reflectance distribution function model.

    PubMed

    Prokhorov, Alexander

    2012-05-01

    This paper proposes a three-component bidirectional reflectance distribution function (3C BRDF) model consisting of diffuse, quasi-specular, and glossy components for calculation of effective emissivities of blackbody cavities and then investigates the properties of the new reflection model. The particle swarm optimization method is applied for fitting a 3C BRDF model to measured BRDFs. The model is incorporated into the Monte Carlo ray-tracing algorithm for isothermal cavities. Finally, the paper compares the results obtained using the 3C model and the conventional specular-diffuse model of reflection.

  9. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction- and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil calculation-to-experiment ratios usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.

  10. Applications of Laplace transform methods to airfoil motion and stability calculations

    NASA Technical Reports Server (NTRS)

    Edwards, J. W.

    1979-01-01

    This paper reviews the development of generalized unsteady aerodynamic theory and presents a derivation of the generalized Possio integral equation. Numerical calculations resolve questions concerning subsonic indicial lift functions and demonstrate the generation of Kutta waves at high values of reduced frequency, subsonic Mach number, or both. The use of rational function approximations of unsteady aerodynamic loads in aeroelastic stability calculations is reviewed, and a reformulation of the matrix Pade approximation technique is given. Numerical examples of flutter boundary calculations for a wing which is to be flight tested are given. Finally, a simplified aerodynamic model of transonic flow is used to study the stability of an airfoil exposed to supersonic and subsonic flow regions.

  11. Taboo Search: An Approach to the Multiple Minima Problem

    NASA Astrophysics Data System (ADS)

    Cvijovic, Djurdje; Klinowski, Jacek

    1995-02-01

    Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
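    The idea can be sketched for a one-dimensional continuous function: recently visited points become taboo, so candidate moves near them are forbidden and the walk is forced out of local minima while the best point seen so far is recorded. The step size, taboo-list length, and radius below are illustrative choices, not the paper's.

```python
# Minimal continuous tabu-search sketch (illustrative; the paper's
# method partitions the search space more systematically).
import random

def tabu_search(f, x0, step=0.3, radius=0.15, iters=200, seed=1):
    random.seed(seed)
    x, best_x = x0, x0
    tabu = []  # recently visited points
    for _ in range(iters):
        # Candidate moves around the current point.
        cands = [x + step * random.uniform(-1, 1) for _ in range(20)]
        # Forbid candidates too close to recently visited points,
        # which pushes the walk out of local minima.
        allowed = [c for c in cands
                   if all(abs(c - t) > radius for t in tabu)] or cands
        x = min(allowed, key=f)  # always accept the best allowed move
        tabu.append(x)
        if len(tabu) > 10:
            tabu.pop(0)
        if f(x) < f(best_x):
            best_x = x
    return best_x

# Two global minima at x = +1 and x = -1; start far away at x = 3.
best = tabu_search(lambda x: (x * x - 1.0) ** 2, x0=3.0)
```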

  12. A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image

    NASA Astrophysics Data System (ADS)

    Su, Junying

    2011-11-01

    A fractal dimension feature analysis method in the spectrum domain is proposed for agricultural crop classification with hyperspectral images. First, a fractal dimension calculation algorithm in the spectrum domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Second, the hyperspectral image classification algorithm and its flowchart are presented, based on fractal dimension feature analysis in the spectrum domain. Finally, experimental results are given for agricultural crop classification of the FCL1 hyperspectral image set using the proposed method and SAM (spectral angle mapper). The results show that the proposed method obtains better classification results than traditional SAM feature analysis, since it makes fuller use of the spectral information in the hyperspectral image for precision agricultural crop classification.
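    The step measurement (divider) idea can be sketched as follows: measure the length of the sampled spectrum curve with rulers of increasing step k; since the measured length scales roughly as k^(1-D), the dimension D follows from the slope of log(length) versus log(step). This is a generic sketch of the divider method, not the paper's exact algorithm.

```python
# Step-measurement ("divider") estimate of the fractal dimension of a
# uniformly sampled curve. Generic sketch, not the paper's algorithm.
from math import hypot, log

def fractal_dimension(y, dx=1.0, steps=(1, 2, 4, 8)):
    pts = []
    for k in steps:
        # Curve length measured with a ruler spanning k samples.
        length = sum(hypot(k * dx, y[i + k] - y[i])
                     for i in range(0, len(y) - k, k))
        pts.append((log(k * dx), log(length)))
    # Least-squares slope of log(length) vs log(step); D = 1 - slope.
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((p[0] - mx) * (p[1] - my) for p in pts)
             / sum((p[0] - mx) ** 2 for p in pts))
    return 1.0 - slope

# Sanity check: a straight line has dimension 1.
d_line = fractal_dimension([2.0 * i for i in range(65)])
```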

  13. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  15. Calculation of the Local Free Energy Landscape in the Restricted Region by the Modified Tomographic Method.

    PubMed

    Chen, Changjun

    2016-03-31

    The free energy landscape is the most important information in the study of reaction mechanisms of molecules. However, it is difficult to calculate: in a large collective variable space, a molecule needs a long simulation time to achieve sufficient sampling. To reduce the computational cost, it is necessary in practice to restrict the sampling region and construct a local free energy landscape. However, the restricted region in the collective variable space may have an irregular shape, and simply restricting one or more collective variables of the molecule cannot satisfy the requirement. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region by hyperplanes and connects the centers of the hyperplanes by a curve. Second, it forces the molecule to sample on the curve and the hyperplanes during the simulation and calculates the free energy data on them. Finally, all the free energy data are combined to form the local free energy landscape. Because the area outside the restricted region is not considered, the free energy calculation is more efficient. By this method, one can quickly optimize a path in the collective variable space.

  16. Faddeev calculations of πD scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, A.W.

    1976-01-01

    The present status of Faddeev calculations of πD scattering is summarized, with emphasis on what has been learned about common approximation methods (for π-nucleus as well as πD scattering). Some space is devoted to a discussion of the theoretical work which remains, including a suggestion of co-operation between theorists on a "homework" problem. Finally, examples are given of the interesting phenomena which one hopes to investigate through good πD experiments. Suggestions are made as to which experiments would be most useful.

  17. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them with perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.

  18. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  19. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has been devoted to prognostics for analog circuits. The few existing methods lack a link to circuit analysis when extracting and calculating features, so the FI (fault indicator) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and builds a complex field model. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From this feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.

  20. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    NASA Astrophysics Data System (ADS)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
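    The two-phase structure can be sketched in a few lines: the pre-earthquake phase reduces loss estimation to a per-cell lookup table keyed by seismic intensity, so the co-earthquake phase is just a lookup and a sum over the intensity field. The cell exposures and damage ratios below are invented for illustration.

```python
# Sketch of the two-phase loss estimation idea. All numbers are
# hypothetical; real grids are 30'' x 30'' cells with population and
# building exposure data.

# Pre-earthquake phase: build a table cell -> {intensity: loss}.
def precompute(cells, damage_ratio):
    return {cid: {I: exposure * damage_ratio[I] for I in damage_ratio}
            for cid, exposure in cells.items()}

# Co-earthquake phase: the intensity field assigns an intensity to
# each cell; total loss is a table lookup plus a sum.
def estimate_loss(loss_table, intensity_field):
    return sum(loss_table[cid][I] for cid, I in intensity_field.items())

cells = {"c1": 1000.0, "c2": 500.0}    # exposure value per grid cell
ratios = {6: 0.01, 7: 0.05, 8: 0.20}   # hypothetical damage ratios
table = precompute(cells, ratios)
total = estimate_loss(table, {"c1": 7, "c2": 8})  # c1 at VII, c2 at VIII
```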

  1. Comparison of quantitatively analyzed dynamic area-detector CT using various mathematic methods with FDG PET/CT in management of solitary pulmonary nodules.

    PubMed

    Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Fujisawa, Yasuko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro

    2013-06-01

    The objective of our study was to prospectively compare the capability of dynamic area-detector CT analyzed with different mathematic methods and PET/CT in the management of pulmonary nodules. Fifty-two consecutive patients with 96 pulmonary nodules underwent dynamic area-detector CT, PET/CT, and microbacterial or pathologic examinations. All nodules were classified into the following groups: malignant nodules (n = 57), benign nodules with low biologic activity (n = 15), and benign nodules with high biologic activity (n = 24). On dynamic area-detector CT, the total, pulmonary arterial, and systemic arterial perfusions were calculated using the dual-input maximum slope method; perfusion was calculated using the single-input maximum slope method; and extraction fraction and blood volume (BV) were calculated using the Patlak plot method. All indexes were statistically compared among the three nodule groups. Then, receiver operating characteristic analyses were used to compare the diagnostic capabilities of the maximum standardized uptake value (SUVmax) and each perfusion parameter having a significant difference between malignant and benign nodules. Finally, the diagnostic performances of the indexes were compared by means of the McNemar test. No adverse effects were observed in this study. All indexes except extraction fraction and BV, both of which were calculated using the Patlak plot method, showed significant differences among the three groups (p < 0.05). Areas under the curve of total perfusion calculated using the dual-input method, pulmonary arterial perfusion calculated using the dual-input method, and perfusion calculated using the single-input method were significantly larger than that of SUVmax (p < 0.05). 
The accuracy of total perfusion (83.3%) was significantly greater than the accuracy of the other indexes: pulmonary arterial perfusion (72.9%, p < 0.05), systemic arterial perfusion calculated using the dual-input method (69.8%, p < 0.05), perfusion (66.7%, p < 0.05), and SUVmax (60.4%, p < 0.05). Dynamic area-detector CT analyzed using the dual-input maximum slope method has better potential for the diagnosis of pulmonary nodules than dynamic area-detector CT analyzed using other methods and than PET/CT.
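    As a rough sketch of the single-input maximum slope method referred to above: tissue perfusion is estimated as the maximum slope of the tissue enhancement curve divided by the peak arterial enhancement. The curves below are synthetic, and the dual-input variant (which splits the inflow between pulmonary and systemic arteries) is omitted.

```python
# Single-input maximum slope perfusion estimate from sampled
# time-enhancement curves. Synthetic data; illustrative only.

def max_slope_perfusion(tissue, artery, dt=1.0):
    # Steepest rise of the tissue curve between consecutive samples.
    max_slope = max((b - a) / dt for a, b in zip(tissue, tissue[1:]))
    # Normalize by the peak arterial enhancement.
    return max_slope / max(artery)

tissue = [0.0, 1.0, 3.0, 6.0, 8.0, 9.0]      # tissue enhancement
artery = [0.0, 10.0, 20.0, 15.0, 10.0, 5.0]  # arterial input curve
perf = max_slope_perfusion(tissue, artery)   # 3.0 / 20.0 = 0.15
```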

  2. 14 CFR 415.204-415.400 - [Reserved

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Subsystem Design Information 10.4Flight Safety System Analyses 10.5Flight Termination System Environmental... Analysis 4.1.1Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2Radionuclide Data (where...

  3. 14 CFR 415.204-415.400 - [Reserved

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Subsystem Design Information 10.4Flight Safety System Analyses 10.5Flight Termination System Environmental... Analysis 4.1.1Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2Radionuclide Data (where...

  4. 14 CFR 415.204-415.400 - [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Subsystem Design Information 10.4Flight Safety System Analyses 10.5Flight Termination System Environmental... Analysis 4.1.1Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2Radionuclide Data (where...

  5. 47 CFR 73.1820 - Station log.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... values): (A) Common point current. (B) When the operating power is determined by the indirect method, the efficiency factor F and either the product of the final amplifier input voltage and current or the calculated antenna input power. See § 73.51(e). (C) Antenna monitor phase or phase deviation indications. (D) Antenna...

  6. Analysis of Costs and Benefits in Rehabilitation. Final Report.

    ERIC Educational Resources Information Center

    Berkowitz, Monroe, Ed.; And Others

    This report suggests feasible alternatives to the present methods of calculating benefits and costs of the joint federal-state vocational rehabilitation program. "Summary and Guide to Reading This Report" (Monroe Berkowitz) appears first. Part I, Background, Theory and Models, includes "The Cost Benefit Tradition in Vocational Rehabilitation"…

  7. Assessment of an Euler-Interacting Boundary Layer Method Using High Reynolds Number Transonic Flight Data

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Maddalon, Dal V.

    1998-01-01

    Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.

  8. Considerations on methodological challenges for water footprint calculations.

    PubMed

    Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R

    2012-01-01

    We have investigated how different approaches for water footprint (WF) calculations lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, results obtained from any WF calculation are reflective of the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed inadequate estimation of irrigation water when a widely used simple approach was used. The method resulted in an overestimation of blue water and an underestimation of green water usage. Dependent on the chosen (available) water quality standard, the final grey WF can differ up to a factor of 10 and more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving water quality standards.

  9. Interactive numerical flow visualization using stream surfaces

    NASA Technical Reports Server (NTRS)

    Hultquist, J. P. M.

    1990-01-01

    Particle traces and ribbons are often used to depict the structure of three-dimensional flowfields, but images produced using these models can be ambiguous. Stream surfaces offer a more visually intuitive method for the depiction of flowfields, but interactive response is needed to allow the user to place surfaces which reveal the essential features of a given flowfield. FLORA, a software package which supports the interactive calculation and display of stream surfaces on Silicon Graphics workstations, is described. Alternative methods for the integration of particle traces are examined, and calculation through computational space is found to provide rapid results with accuracy adequate for most purposes. Rapid calculation of traces is teamed with progressive refinement of approximated surfaces. An initial approximation provides immediate user feedback, and subsequent improvement of the surface ensures that the final image is an accurate representation of the flowfield.
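    Particle-trace integration of the kind described can be sketched with a classical fourth-order Runge-Kutta step through an analytic velocity field. FLORA integrates through computational space; this physical-space version is a simplified analogue, not its implementation.

```python
# RK4 particle trace through a steady velocity field v(p) -> (u, v, w).
# Simplified physical-space sketch of stream-surface trace integration.

def rk4_trace(v, p0, dt, n):
    path = [p0]
    p = p0
    for _ in range(n):
        k1 = v(p)
        k2 = v(tuple(pi + 0.5 * dt * ki for pi, ki in zip(p, k1)))
        k3 = v(tuple(pi + 0.5 * dt * ki for pi, ki in zip(p, k2)))
        k4 = v(tuple(pi + dt * ki for pi, ki in zip(p, k3)))
        # Weighted average of the four slope estimates.
        p = tuple(pi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for pi, a, b, c, d in zip(p, k1, k2, k3, k4))
        path.append(p)
    return path

# Uniform flow in +x: the trace is a straight line.
path = rk4_trace(lambda p: (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.1, 10)
```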

  10. An auxiliary-field quantum Monte Carlo study of the chromium dimer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purwanto, Wirawan, E-mail: wirawan0@gmail.com; Zhang, Shiwei; Krakauer, Henry

    2015-02-14

    The chromium dimer (Cr₂) presents an outstanding challenge for many-body electronic structure methods. Its complicated nature of binding, with a formal sextuple bond and an unusual potential energy curve (PEC), is emblematic of the competing tendencies and delicate balance found in many strongly correlated materials. We present an accurate calculation of the PEC and ground state properties of Cr₂, using the auxiliary-field quantum Monte Carlo (AFQMC) method. Unconstrained, exact AFQMC calculations are first carried out for a medium-sized but realistic basis set. Elimination of the remaining finite-basis errors and extrapolation to the complete basis set limit are then achieved with a combination of phaseless and exact AFQMC calculations. Final results for the PEC and spectroscopic constants are in excellent agreement with experiment.

  11. A novel method for deriving thresholds of toxicological concern for vaccine constituents.

    PubMed

    White, Jennifer; Wrzesinski, Claudia; Green, Martin; Johnson, Giffe T; McCluskey, James D; Abritis, Alison; Harbison, Raymond D

    2016-05-01

    Safety assessment evaluating the presence of impurities, residual materials, and contaminants in vaccines is a focus of current research. Thresholds of toxicological concern (TTCs) are mathematically modeled levels used for assessing the safety of many food and medication constituents. In this study, six algorithms are selected from the open-access ToxTree software program to derive a method for calculating TTCs for vaccine constituents: In Vivo Rodent Micronucleus assay/LD50, Benigni-Bossa/LD50, Cramer Extended/LD50, In Vivo Rodent Micronucleus assay/TDLo, Benigni-Bossa/TDLo, and the Cramer Extended/TDLo. Using an initial dataset (n = 197) taken from INCHEM, RepDose, RTECS, and TOXNET, the chemicals were divided into two families: "positive" - based on the presence of structures associated with adverse outcomes, or "negative" - no such structures or having structures that appear to be protective of health. The final validation indicated that the Benigni-Bossa/LD50 method is the most appropriate for calculating TTCs for vaccine constituents. Final TTCs were designated as 18.06 μg/person and 20.61 μg/person for the Benigni-Bossa/LD50 positive and negative structural families, respectively.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite-moderated gas-cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and the depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach, in which the amount of fissile material in a set configuration is slowly altered until criticality is attained, to estimate the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions and then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that consistency in the prediction of power densities as well as uranium and plutonium isotopics was shared among the methods within the CRPE tool that predicted the critical position consistently well. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.

  13. Double Photoionization of helium atom using Screening Potential Approach

    NASA Astrophysics Data System (ADS)

    Saha, Haripada

    2014-05-01

    The triple differential cross section for double photoionization of the helium atom will be investigated using our recently extended MCHF method. It is well known that electron correlation effects in both the initial and the final states are very important. To incorporate these effects we will use the multi-configuration Hartree-Fock method to account for electron correlation in the initial state. The electron correlation in the final state will be taken into account using the angle-dependent screening potential approximation. The triple differential cross section (TDCS) will be calculated for a photon energy of 20 eV, for which experimental results are available. Our results will be compared with the available experimental and theoretical observations.

  14. The Role of Economic Uncertainty on the Block Economic Value - a New Valuation Approach / Rola Czynnika Niepewności Przy Obliczaniu Wskaźnika Rentowności - Nowe Podejście

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Ataee-Pour, M.

    2012-12-01

    The block economic value (EV) is one of the most important parameters in mine evaluation. It affects significant factors such as the mining sequence, the final pit limit and the net present value. Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximizes the pit value under technical and operational constraints. It is therefore necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimates may lead mining project managers to make wrong decisions and thus impose irreparable losses on the project. Effective parameters such as the metal price, operating cost and grade are always assumed to be certain in the conventional methods of EV calculation, although these parameters are obviously uncertain in nature; consequently, the results of conventional methods are usually far from reality. To solve this problem, a new technique based on a binomial tree developed in this research is used. This method can calculate the EV and project PV under economic uncertainty. In this paper, the EV and project PV were determined first using the Whittle formula based on certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost uncertainty; the results were then compared. It is concluded that incorporating metal price and cost uncertainty makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
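The binomial-tree idea can be illustrated with a minimal sketch: the expected metal price from a recombining tree feeds a simple revenue-minus-cost block EV. All numbers, up/down factors, and the EV formula here are hypothetical stand-ins; the paper's Whittle formula and its multivariate (price and cost) tree are not reproduced.

```python
# Illustrative sketch of valuing a block under metal-price uncertainty
# with a recombining binomial tree. All parameters are hypothetical.
from math import comb

def expected_price(p0, up, down, prob_up, steps):
    """Expected metal price after `steps` periods on a recombining tree."""
    return sum(
        comb(steps, k) * prob_up ** k * (1 - prob_up) ** (steps - k)
        * p0 * up ** k * down ** (steps - k)
        for k in range(steps + 1)
    )

def block_economic_value(tonnage, grade, recovery, price,
                         mining_cost, processing_cost):
    """Deterministic block EV: metal revenue minus mining/processing costs."""
    metal = tonnage * grade * recovery
    return metal * price - tonnage * (mining_cost + processing_cost)

p0 = 7000.0  # $/t of metal today (illustrative)
ev_certain = block_economic_value(10000, 0.01, 0.9, p0, 2.5, 12.0)
ev_uncertain = block_economic_value(
    10000, 0.01, 0.9,
    expected_price(p0, up=1.12, down=0.92, prob_up=0.5, steps=4),
    2.5, 12.0,
)
```

With an upward price drift (mean step factor above 1), the tree-based EV exceeds the spot-price EV; a full treatment would also branch on costs, as the paper's multivariate tree does.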

  15. An inverse method using toroidal mode data

    USGS Publications Warehouse

    Willis, C.

    1986-01-01

    The author presents a numerical method for calculating the density and S-wave velocity in the upper mantle of a spherically symmetric, non-rotating Earth consisting of a perfectly elastic, isotropic material. The data come from the periods of the toroidal oscillations. She tests the method on a smoothed version of model A; the error in the reconstruction is less than 1%. The effects of perturbations in the eigenvalues are studied, and she finds that the final model is sensitive to errors in the data.

  16. Effective visibility analysis method in virtual geographic environment

    NASA Astrophysics Data System (ADS)

    Li, Yi; Zhu, Qing; Gong, Jianhua

    2008-10-01

    Visibility analysis in virtual geographic environments has broad applications in many aspects of social life, but in practical use its efficiency and accuracy need to be improved and human vision restrictions must be considered. This paper first introduces a highly efficient 3D data modeling method, which generates and organizes the 3D data model using R-tree and LOD techniques. It then presents a new visibility algorithm that achieves real-time viewshed calculation while accounting for occlusion by the DEM and 3D building models and for restrictions of the human eye on viewshed generation. Finally, an experiment is conducted to show that the visibility analysis is fast and accurate enough to meet the demands of digital city applications.
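As a toy illustration of the viewshed test being accelerated, a line-of-sight check along a DEM profile can be written as follows. The elevations and observer eye height are illustrative, and the paper's R-tree/LOD organization and 3D building models are not modeled here.

```python
# Minimal line-of-sight check along a 1D DEM profile between an
# observer (first sample) and a target (last sample). Illustrative only.

def visible(profile, eye_height=1.7):
    """True if the last sample of `profile` is visible from the first.

    `profile` lists terrain elevations sampled along the sight line;
    the observer's eye sits `eye_height` above the first sample.
    """
    n = len(profile) - 1
    z0 = profile[0] + eye_height
    zt = profile[n]
    for x in range(1, n):
        # Elevation of the straight sight line above sample x.
        line_z = z0 + (zt - z0) * x / n
        if profile[x] > line_z:  # terrain blocks the ray
            return False
    return True

profile_clear = [10, 9, 8, 9, 10]   # shallow valley: target visible
profile_ridge = [10, 9, 25, 9, 10]  # ridge in between: target blocked
```

A full viewshed repeats this test (or an incremental variant) for every cell around the observer, which is where spatial indexing and LOD pay off.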

  17. A numerical study of mobility in thin films of fullerene derivatives.

    PubMed

    Mackenzie, Roderick C I; Frost, Jarvist M; Nelson, Jenny

    2010-02-14

    The effect of functional group size on the electron mobility in films of fullerene derivatives is investigated numerically. A series of four C60 derivatives is formed by attaching saturated hydrocarbon chains to the C60 cage via a methano bridge. For each of the derivatives investigated, molecular dynamics is used to generate a realistic material morphology. Quantum chemical methods are then used to calculate intermolecular charge transfer rates. Finally, Monte Carlo methods are used to simulate time-of-flight experiments and thus calculate the electron mobility. It is found that as the length of the aliphatic side chain increases, the configurational disorder increases and thus the mobility decreases.

  18. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with {sup 90}Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses to liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the prior simulation of treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare), with attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: The liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated because of its high dependence on contouring: small lesion size, hot spots in healthy tissue and blurred margins can strongly affect the dose distribution in tumors. The extra work includes exporting and importing images and other DICOM files, creating and calculating a dummy external radiotherapy plan, performing the convolution, and evaluating the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The whole process is short enough to be carried out on the same day as the simulation and to inform prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
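The convolution step at the heart of the VSV method can be sketched as a direct 3D convolution of the activity grid with an S-value kernel. The tiny symmetric kernel and activity values below are illustrative only; real kernels (such as the one from www.medphys.it) are isotope-specific and much larger. For a symmetric kernel, the correlation computed here equals the convolution.

```python
# Toy voxel-S-value dose calculation: dose = activity (*) S-value kernel.
# Grids are nested lists [z][y][x]; all numbers are illustrative.

def convolve3d(activity, kernel):
    """Direct 3D convolution of an activity grid with an S-value kernel."""
    az, ay, ax = len(activity), len(activity[0]), len(activity[0][0])
    kz, ky, kx = len(kernel), len(kernel[0]), len(kernel[0][0])
    rz, ry, rx = kz // 2, ky // 2, kx // 2
    dose = [[[0.0] * ax for _ in range(ay)] for _ in range(az)]
    for z in range(az):
        for y in range(ay):
            for x in range(ax):
                s = 0.0
                for dz in range(kz):
                    for dy in range(ky):
                        for dx in range(kx):
                            zz, yy, xx = z + dz - rz, y + dy - ry, x + dx - rx
                            if 0 <= zz < az and 0 <= yy < ay and 0 <= xx < ax:
                                s += activity[zz][yy][xx] * kernel[dz][dy][dx]
                dose[z][y][x] = s
    return dose

activity = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
activity[1][1][1] = 10.0  # single hot voxel (illustrative units)
kernel = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
kernel[1][1][1] = 1.0     # self-dose S value (illustrative)
for dz, dy, dx in [(0, 1, 1), (2, 1, 1), (1, 0, 1),
                   (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
    kernel[dz][dy][dx] = 0.1  # face-neighbor S values (illustrative)
dose = convolve3d(activity, kernel)
```

In practice this direct triple loop is replaced by an FFT-based convolution for clinically sized grids; the logic is unchanged.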

  19. Three-dimensional modeling of tea-shoots using images and models.

    PubMed

    Wang, Jian; Zeng, Xianyin; Liu, Jianbing

    2011-01-01

    In this paper, a method for three-dimensional modeling of tea shoots with images and calculation models is introduced. The process is as follows: the tea shoots are photographed with a camera; color space conversion is conducted; an improved algorithm based on color and regional growth is used to segment the tea shoots in the images, and their edges are extracted with edge detection; then, from the segmented tea-shoot images, the three-dimensional coordinates of the tea shoots are computed and the feature parameters extracted; matching and calculation are conducted against the model database; and finally the three-dimensional model of the tea shoots is completed. According to the experimental results, this method avoids a large amount of calculation, gives better visual effects and performs better in recovering the three-dimensional information of the tea shoots, thereby providing a new method for monitoring the growth of tea shoots and for their non-destructive testing.

  20. Calculation of Water Entry Problem for Free-falling Bodies Using a Developed Cartesian Cut Cell Mesh

    NASA Astrophysics Data System (ADS)

    Wenhua, Wang; Yanying, Wang

    2010-05-01

    This paper describes the extension of a free-surface capturing method on a Cartesian cut cell mesh to the water entry problem for free-falling bodies with body-fluid interaction. The incompressible Euler equations for a variable-density fluid system are used as the governing equations, and the free surface is treated as a contact discontinuity using the free-surface capturing method. To handle moving body boundaries conveniently, the Cartesian cut cell technique is adopted to generate a boundary-fitted mesh around the body edge by cutting solid regions out of a background Cartesian mesh. On this mesh system, the governing equations are discretized by the finite volume method, and at each cell edge the inviscid flux is evaluated by means of Roe's approximate Riemann solver. Furthermore, for unsteady calculation in the time domain, a time-accurate solution is achieved by a dual time-stepping technique with the artificial compressibility method. For the body-fluid interaction, the projection method for the momentum equations and the exact Riemann solution are applied to calculate the fluid pressure on the solid boundary. Finally, the method is validated on test cases of water entry of free-falling bodies.

  1. Application of hyperspherical harmonics expansion method to the low-lying bound S-states of exotic two-muon three-body systems

    NASA Astrophysics Data System (ADS)

    Khan, Md. Abdul

    2014-09-01

    In this paper, energies of the low-lying bound S-states (L = 0) of exotic three-body systems, consisting of a nuclear core of charge +Ze (Z being the atomic number of the core) and two negatively charged valence muons, have been calculated by the hyperspherical harmonics expansion method (HHEM). The three-body Schrödinger equation is solved assuming purely Coulomb interactions among the binary pairs of the three-body systems XZ+μ-μ- for Z = 1 to 54. The convergence pattern of the energies has been checked with respect to the increasing number of partial waves Λmax. With the available computer facilities, calculations are feasible up to Λmax = 28 partial waves; results for still higher partial waves have been obtained through an appropriate extrapolation scheme. The dependence of the bound state energies on increasing nuclear charge Z has been checked, and finally the calculated energies have been compared with those in the literature.
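The paper does not spell out its extrapolation scheme, but a common choice for partial-wave convergence is geometric extrapolation: if successive energy increments shrink by a roughly constant ratio, the remaining tail sums as a geometric series. A sketch on a synthetic sequence (the numbers are not the paper's data):

```python
# Geometric extrapolation of a partial-wave energy sequence E(Lambda).
# Illustrative scheme and synthetic data, not the paper's actual method.

def extrapolate_energy(energies):
    """Extrapolate E(Lambda -> infinity) assuming geometric convergence."""
    d1 = energies[-2] - energies[-3]  # next-to-last increment
    d2 = energies[-1] - energies[-2]  # last increment
    r = d2 / d1                       # estimated common ratio (|r| < 1)
    # Remaining tail: d2*r + d2*r^2 + ... = d2 * r / (1 - r)
    return energies[-1] + d2 * r / (1.0 - r)

# Synthetic sequence converging to -1.0 with ratio 0.5:
E = [-1.0 + 0.5 ** k for k in range(1, 6)]
E_inf = extrapolate_energy(E)
```

For an exactly geometric sequence the formula recovers the limit; for real HHE data it is an estimate whose quality depends on how settled the ratio is at the largest computed Λmax.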

  2. Impact of Two Alternative Methods of Allocating Structural Overhead Costs in Chilean Public Hospitals on the Final Production Cost of Health Care Services.

    PubMed

    Luis Roberto, Reveco Sepúlveda; Carlos Alberto, Vallejos Vallejos; Patricio Reinaldo, Valdes Garcia; Herenia Gutiérrez Ponce

    2012-12-01

    The main goal of this study is to measure the impact of two alternative methods of overhead cost allocation in Chilean public hospitals on the final production cost of 256 health care services which are recurrent in health problems whose burden of disease is high in Chile. A purposive sample of six important hospitals of the metropolitan region of Chile was considered. A survey was applied to them to collect analytic cost data on resource use (labor, medical supplies and use of capital) in the production of health care services. The overhead cost data (electricity, central heating, laundry, administrative support, transport, maintenance, etc.) were obtained from the information system of each hospital. The final cost of each health care service was calculated from the perspective of the public health system in two ways: (1) using a proxy rate of common use, and (2) using overhead cost rates resulting from a step-down methodology. The final costs calculated with each method were compared and analyzed. Taking the step-down methodology as the gold standard for overhead cost allocation, the results show that with the proxy rate 185 services (72.3%) are undercosted and 71 health care services (27.7%) are overcosted. The use of proxy rates to allocate overhead costs into the final cost thus leads to important undercosting and overcosting of health services. This finding matters for at least two reasons: (1) for the management of hospitals, and (2) in economic evaluations, where variations in cost can modify cost-effectiveness, cost-utility or cost-benefit ratios, influencing public health decisions. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
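The step-down methodology used as the gold standard can be sketched as follows: support departments are closed in a fixed order, each spreading its cost (plus any cost already received) over the not-yet-closed departments and the final services, in proportion to an allocation base. Department names, costs, and bases below are hypothetical.

```python
# Illustrative step-down overhead allocation. Departments, costs, and
# allocation bases are hypothetical, not the study's data.

def step_down(support, final, bases):
    """Allocate support-department costs onto final services.

    support: ordered list of (name, cost) for support departments
    final:   dict name -> direct cost of each final service
    bases:   dict (src, dst) -> allocation base weight
    """
    received = {name: 0.0 for name, _ in support}
    final_cost = dict(final)
    support_names = [name for name, _ in support]
    for i, (src, cost) in enumerate(support):
        pool = cost + received[src]  # direct cost plus cost received so far
        targets = support_names[i + 1:] + list(final_cost)
        total_base = sum(bases[(src, t)] for t in targets)
        for t in targets:
            share = pool * bases[(src, t)] / total_base
            if t in final_cost:
                final_cost[t] += share
            else:
                received[t] += share
    return final_cost

support = [("laundry", 1000.0), ("admin", 2000.0)]
final = {"surgery": 5000.0, "imaging": 3000.0}
bases = {
    ("laundry", "admin"): 1, ("laundry", "surgery"): 3, ("laundry", "imaging"): 1,
    ("admin", "surgery"): 2, ("admin", "imaging"): 2,
}
costs = step_down(support, final, bases)
```

Total cost is conserved by construction, so the method only redistributes overhead; a proxy rate, by contrast, can over- or under-charge individual services, which is the distortion the study measures.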

  3. SOLAR OPACITY CALCULATIONS USING THE SUPER-TRANSITION-ARRAY METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krief, M.; Feigel, A.; Gazit, D., E-mail: menahem.krief@mail.huji.ac.il

    A new opacity model has been developed based on the Super-Transition-Array (STA) method for the calculation of monochromatic opacities of plasmas in local thermodynamic equilibrium. The atomic code, named STAR (STA-Revised), is described and used to calculate spectral opacities for a solar model implementing the recent AGSS09 composition. Calculations are carried out throughout the solar radiative zone. The relative contributions of different chemical elements and atomic processes to the total Rosseland mean opacity are analyzed in detail. Monochromatic opacities and charge-state distributions are compared with those of the widely used Opacity Project (OP) code for several elements near the radiation-convection interface. STAR Rosseland opacities for the solar mixture show very good agreement with OP and the OPAL opacity code throughout the radiation zone. Finally, an explicit STA calculation was performed for the full AGSS09 photospheric mixture, including all heavy metals. It was shown that, because of their extremely low abundance, and despite being very good photon absorbers, the heavy elements do not affect the Rosseland opacity.
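The Rosseland mean referred to throughout is a harmonic average of the monochromatic opacity, weighted by the temperature derivative of the Planck function. A discrete-group sketch (group opacities and weights are illustrative, not solar values):

```python
# Discrete-group Rosseland mean: 1/k_R = sum(w_i/k_i) / sum(w_i).
# Group opacities and dB/dT weights below are illustrative only.

def rosseland_mean(kappa, weight):
    """Harmonic, weight-averaged opacity over frequency groups."""
    num = sum(w / k for k, w in zip(kappa, weight))
    den = sum(weight)
    return den / num

kappa = [10.0, 1.0, 100.0]  # cm^2/g per group (illustrative)
weight = [1.0, 2.0, 1.0]    # dB/dT group weights (illustrative)
k_R = rosseland_mean(kappa, weight)
```

Because the average is harmonic, transparent frequency windows dominate k_R; this is one way to see why trace heavy elements, however strongly they absorb, barely move the mean, consistent with the finding above.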

  4. Pattern of mathematic representation ability in magnetic electricity problem

    NASA Astrophysics Data System (ADS)

    Hau, R. R. H.; Marwoto, P.; Putra, N. M. D.

    2018-03-01

    The mathematical representation ability used in solving magnetic electricity problems gives information about the way students understand magnetic electricity. Students show varied patterns of mathematical representation ability in solving such problems. This study aims to determine the patterns of students' mathematical representation ability in solving magnetic electricity problems. The research method used is qualitative. The subjects of this study are fourth-semester students of the UNNES Physics Education Study Program. Data were collected through a written test based on a test of mathematical representation ability, and through interviews, on the topics of field lines and Gauss's law. The students' mathematical representation ability in solving magnetic electricity problems is categorized as high, medium or low. Students in the high category tend to follow the full pattern of writing the known and asked symbols, writing equations, using physical quantities, substituting quantities into equations, performing calculations and stating the final answer. Students in the medium category tend to use several of these patterns: writing the known symbols, writing equations, using physical quantities, substituting quantities into equations, performing calculations and stating the final answer. Students in the low category tend to use only some of the patterns: writing the known symbols, writing equations, substituting quantities into equations, performing calculations and stating the final answer.

  5. Effect of Loop Geometry on TEM Response Over Layered Earth

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Fang, Guangyou; Yu, Gang

    2014-09-01

    A large horizontal loop located on the ground or carried by an aircraft is the most common source for the transient electromagnetic method. Although topographical factors or the airplane outline make the loop arbitrary in shape, magnetic sources are generally represented as a magnetic dipole or a circular loop, which may introduce significant errors into the calculated response. In this paper, we present a method for calculating the response of a loop of arbitrary shape (whose description can be obtained by different means, including GPS localization) in air or on the surface of a stratified earth. The principle of reciprocity is first used to exchange the roles of the transmitting loop and the dipole receiver; the response of a vertical or horizontal magnetic dipole is then calculated beforehand; and finally a line integral of the second kind is employed to obtain the transient response. Analytical comparisons show that our method gives very good results in many situations. Synthetic and field examples are given at the end to show the effect of loop geometry and how our method improves the precision of the EM response.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities of chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. An LU factorization technique for calculating the Lagrange multipliers improves both the convergence behavior and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed with the direct differentiation sensitivity analysis method.

  7. Surface shift of the occupied and unoccupied 4f levels of the rare-earth metals

    NASA Astrophysics Data System (ADS)

    Aldén, M.; Johansson, B.; Skriver, H. L.

    1995-02-01

    The surface energy shifts of the occupied and unoccupied 4f levels for the lanthanide metals have been calculated from first principles by means of a Green's-function technique within the tight-binding linear muffin-tin orbitals method. We use the concept of complete screening to identify the occupied and unoccupied 4f energy level shifts as the surface segregation energy of a 4f^(n-1) and a 4f^(n+1) impurity atom, respectively, in a 4f^n host metal. The calculations include both initial- and final-state effects and give values that are considerably lower than those measured on polycrystalline samples as well as those found in previous initial-state model calculations. The present theory agrees well with very recent high-resolution, single-crystal film measurements for Gd, Tb, Dy, Ho, Er, Tm, and Lu. We furthermore utilize the unique possibility offered by the lanthanide metals to clarify the roles played by the initial and the different final states of the core-excitation process, which is permitted by the fact that the so-called initial-state effect is identical upon 4f removal and 4f addition. Surface energy and work function calculations are also reported.

  8. Age-dependent biochemical quantities: an approach for calculating reference intervals.

    PubMed

    Bjerner, J

    2007-01-01

    A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, by three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, so either a transformation procedure must be used to obtain such a distribution or a more complex distribution has to be adopted. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained; however, common statistical packages that allow adjustment for a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and two-stage transformations (modulus-exponential-normal) are applied to render Gaussian distributions. Fractional polynomials are employed to model functions for the mean and standard deviation dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers is made dependent on the covariates by reiteration. Although a good knowledge of statistical theory is needed to perform the analysis, the current method is rewarding because the results are of practical use in patient care.
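Two of the steps above, Tukey's fence and a parametric 95% reference interval, can be sketched in a few lines. The covariate modeling, modulus-exponential-normal transformation, and centile confidence intervals of the full method are not reproduced; the data are synthetic.

```python
# Tukey's fence for outlier exclusion, then a Gaussian 95% reference
# interval (2.5th-97.5th centiles) on the retained values. Synthetic data.
import statistics

def tukey_fence(values, k=1.5):
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

def reference_interval(values):
    """Parametric 95% reference interval assuming a Gaussian fit."""
    kept = tukey_fence(values)
    mu = statistics.mean(kept)
    sd = statistics.stdev(kept)
    return mu - 1.96 * sd, mu + 1.96 * sd

data = [4.1, 4.5, 5.0, 5.2, 4.8, 5.1, 4.9, 5.3, 4.7, 19.0]  # one outlier
lo, hi = reference_interval(data)
```

Without the fence, the single outlier would inflate both the mean and the standard deviation and stretch the interval far past the bulk of the data, which is exactly the influence the method guards against.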

  9. 78 FR 68161 - Greenhouse Gas Reporting Program: Final Amendments and Confidentiality Determinations for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... measurements corrected for temperature and non-ideal gas behavior). For gases with low volume consumption for... effect of that abatement system when using either the emission factors and calculation methods in 40 CFR...) basis. To develop the preliminary estimate, the reporter must use the gas consumption in the tools...

  10. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotation velocity is constant over one spinning cycle and that the attitude error is in fact the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the magnetic sensor data preprocessed with the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point per rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, a clearly superior calculation approach that can be applied under actual projectile environment disturbances. PMID:27213389
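The noise advantage of an integral (averaging) ratio over a single-extremum reading can be illustrated with a generic sinusoidal signal model: averaging N samples per cycle shrinks the error variance roughly as 1/N, while a single extremum inherits the full sample noise plus an upward bias from taking a maximum. The signal model and noise level below are illustrative, not the paper's projectile model.

```python
# Compare amplitude estimation from one noisy extremum vs an integral
# (mean of |signal|) over a full cycle. Illustrative sinusoid model.
import math
import random

random.seed(0)
N = 360          # samples per rotation cycle
true_amp = 1.0
errs_ext, errs_int = [], []
for _ in range(200):
    noisy = [true_amp * math.sin(2 * math.pi * k / N) + random.gauss(0, 0.05)
             for k in range(N)]
    amp_ext = max(noisy)  # extremum method: a single sample
    # Integral method: for a sine, mean(|signal|) = 2*A/pi, so invert it.
    amp_int = (math.pi / 2) * sum(abs(v) for v in noisy) / N
    errs_ext.append((amp_ext - true_amp) ** 2)
    errs_int.append((amp_int - true_amp) ** 2)
mse_ext = sum(errs_ext) / len(errs_ext)
mse_int = sum(errs_int) / len(errs_int)
```

With these settings the cycle-integrated estimate has a far smaller mean squared error, mirroring (in a toy setting) the variance reduction the paper reports.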

  11. A Method of Time-Intensity Curve Calculation for Vascular Perfusion of Uterine Fibroids Based on Subtraction Imaging with Motion Correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming

    2016-12-01

    The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parametric information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the CEUS imaging process, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS video was decoded into frame images according to the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between frames with a warping technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined, and this was taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results were larger with the proposed method than with the original method. PDOVP extraction results improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and that calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
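The final step, computing the TIC as the mean gray level over the perfusion region in each frame, can be written as a toy sketch; the frames and the PDOVP mask below are illustrative, and the optical-flow correction and subtraction imaging that produce the mask are not reproduced.

```python
# Toy TIC computation: mean gray level over a fixed perfusion mask,
# one value per frame. Frames and mask are illustrative 2x2 grids.

def time_intensity_curve(frames, mask):
    """Mean gray level over masked pixels for each frame."""
    pixels = [(r, c) for r, row in enumerate(mask)
              for c, inside in enumerate(row) if inside]
    return [sum(frame[r][c] for r, c in pixels) / len(pixels)
            for frame in frames]

mask = [[0, 1], [1, 0]]      # perfusion region: two pixels
frames = [
    [[0, 10], [10, 0]],      # wash-in begins
    [[0, 30], [50, 0]],      # peak enhancement
    [[0, 20], [20, 0]],      # wash-out
]
tic = time_intensity_curve(frames, mask)
```

Without motion correction the mask would drift off the perfusion region between frames, mixing in background pixels and adding exactly the TIC fluctuations the paper's variance reduction rates measure.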

  12. How realistic is the pore size distribution calculated from adsorption isotherms if activated carbon is composed of fullerene-like fragments?

    PubMed

    Terzyk, Artur P; Furmaniak, Sylwester; Harris, Peter J F; Gauden, Piotr A; Włoch, Jerzy; Kowalczyk, Piotr; Rychlicki, Gerhard

    2007-11-28

    A plausible model for the structure of non-graphitizing carbon is one which consists of curved, fullerene-like fragments grouped together in a random arrangement. Although this model was proposed several years ago, there have been no attempts to calculate the properties of such a structure. Here, we determine the density, pore size distribution and adsorption properties of a model porous carbon constructed from fullerene-like elements. Using the method proposed recently by Bhattacharya and Gubbins (BG), which was tested in this study for ideal and defective carbon slits, the pore size distributions (PSDs) of the initial model and two related carbon models are calculated. The obtained PSD curves show that two structures are micro-mesoporous (with different ratios of micro- to mesopores) and the third is strictly microporous. Using the grand canonical Monte Carlo (GCMC) method, adsorption isotherms of Ar (87 K) are simulated for all the structures. Finally, PSD curves are calculated using the Horvath-Kawazoe, non-local density functional theory (NLDFT), Nguyen and Do, and Barrett-Joyner-Halenda (BJH) approaches, and compared with those predicted by the BG method. This is the first study in which different methods of calculating PSDs for carbons from adsorption data can be genuinely verified, since absolute (i.e., true) PSDs are obtained using the BG method. It is also the first study reporting the results of computer simulations of adsorption on fullerene-like carbon models.

  13. Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection

    PubMed Central

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad

    2014-01-01

    Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature rankings. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has the dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification relative to the optimal choices. PMID:25177107

  14. Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection.

    PubMed

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad

    2014-11-01

    Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature rankings. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has the dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification relative to the optimal choices.
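SVC ranking can be sketched with a deliberately simple single-feature classifier: score each feature by the best accuracy of a threshold rule on that feature alone, then rank features by score. The tiny dataset and the threshold classifier below are illustrative stand-ins for the classifiers studied in the paper.

```python
# Single Variable Classifier ranking with a toy threshold classifier.
# Dataset and classifier choice are illustrative only.

def svc_score(xs, ys):
    """Best accuracy of a one-feature threshold rule (either polarity)."""
    best = 0.0
    for t in sorted(set(xs)):
        for polarity in (1, -1):
            correct = sum(
                1 for x, y in zip(xs, ys)
                if (polarity * (x - t) >= 0) == (y == 1)
            )
            best = max(best, correct / len(ys))
    return best

def svc_rank(features, ys):
    """Return feature indices sorted by descending SVC score, and scores."""
    scores = [svc_score(col, ys) for col in features]
    order = sorted(range(len(features)), key=lambda i: -scores[i])
    return order, scores

ys = [0, 0, 0, 1, 1, 1]
features = [
    [1, 2, 3, 7, 8, 9],  # cleanly separable by a threshold: high score
    [5, 1, 4, 2, 6, 3],  # shuffled, uninformative: low score
]
ranking, scores = svc_rank(features, ys)
```

Swapping in a different base classifier changes only `svc_score`, which is precisely the degree of freedom whose bias and stability the paper investigates.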

  15. Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels

    PubMed Central

    Jiang, Nan; Ma, Shaochun

    2015-01-01

    In order to address the performance complexity of the various constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty of determining the various elastic constants, this paper presented a detailed structural analysis of the (dense-column) latticed concrete-gypsum composite panel and proposed a feasible technical approach to simplified calculation. In accordance with mechanical rules, a typical panel element was selected and divided into two homogeneous composite sub-elements and a secondary homogeneous element, each solved in turn, thus establishing the equivalence of the composite panel to a simple homogeneous panel and obtaining effective formulas for calculating the various elastic constants. Finally, the calculation results and the experimental results were compared, which revealed that the calculation method is correct and reliable, can meet the calculation needs of practical engineering, and can provide a theoretical basis for simplified calculation in studies of composite panel elements and structures as well as a reference for calculations of other panels. PMID:28793631

  16. Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels.

    PubMed

    Jiang, Nan; Ma, Shaochun

    2015-10-27

    In order to address the performance complexity of the various constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty of determining their various elastic constants, this paper presented a detailed structural analysis of the (dense-column) latticed concrete-gypsum composite panel and proposed a feasible technical route to simplified calculation. In accordance with mechanical principles, a typical panel element was selected and divided into two homogeneous composite sub-elements and a secondary homogeneous element, each solved separately, thereby reducing the composite panel to an equivalent simple homogeneous panel and yielding practical formulas for calculating the various elastic constants. Finally, the calculation results were compared with the experimental results, which showed that the calculation method is correct and reliable: it can meet the calculation needs of practical engineering, provide a theoretical basis for simplified calculation in studies of composite panel elements and structures, and serve as a reference for calculations of other panels.
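The paper's own homogenization formulas are not reproduced in this abstract; as a generic illustration of the equivalence idea, the classical Voigt (parallel, equal-strain) and Reuss (series, equal-stress) rule-of-mixtures estimates for a two-phase panel can be sketched as follows (the volume fractions and moduli in the usage note are made-up example values):

```python
def voigt_modulus(fractions, moduli):
    """Parallel (Voigt) estimate: the phases share strain, so the
    effective modulus is the volume-weighted average of the moduli."""
    return sum(f * e for f, e in zip(fractions, moduli))

def reuss_modulus(fractions, moduli):
    """Series (Reuss) estimate: the phases share stress, so the
    compliances (1/E) average by volume instead."""
    return 1.0 / sum(f / e for f, e in zip(fractions, moduli))
```

For example, 40% concrete at 30 GPa and 60% gypsum at 5 GPa give a Voigt estimate of 15 GPa and a Reuss estimate of 7.5 GPa; any realistic effective modulus lies between these bounds.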

  17. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  18. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1994-01-01

    This report describes the current state of development of methods for calculating optimal orbital transfers with large numbers of burns. The first is the homotopy-motivated, so-called direction correction method; so far this method has been partially tested with one solver, and the final step has yet to be implemented. The second is the patched transfer method, which rests on some simplifying approximations to the original optimal control problem: the transfer is broken up into single-burn segments, each single-burn segment is solved as a predictor step, and the whole problem is then solved with a corrector step.

  19. Standard Reference Line Combined with One-Point Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) to Quantitatively Analyze Stainless and Heat Resistant Steel.

    PubMed

    Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong

    2018-01-01

    Due to the influence of self-absorption of the major elements, the scarcity of observable spectral lines of trace elements, and the relative efficiency correction of the experimental system, accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is in fact not easy. In order to overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless steel and five heat-resistant steel samples. The Stark broadening and the Saha–Boltzmann plot of Fe are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL with OPC method, and the intercept with OPC method. The final calculation results show that the latter two methods can effectively improve the overall accuracy of quantitative analysis and the detection limits of trace elements.
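The Boltzmann-plot step mentioned above can be sketched generically: under the local-thermodynamic-equilibrium assumption, ln(Iλ/(gA)) plotted against the upper-level energy of each line is linear with slope -1/(k_B·T), so a least-squares slope yields the plasma temperature. A hedged illustration (the line data in the test are synthetic, and the function name is this sketch's own):

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(lines):
    """Estimate the plasma temperature (K) from a Boltzmann plot.

    `lines` is a list of (intensity, wavelength, g_upper, A, E_upper_eV)
    tuples; under LTE the least-squares slope of ln(I*lambda/(g*A)) versus
    E_upper equals -1/(k_B * T)."""
    xs = [e for _, _, _, _, e in lines]
    ys = [math.log(i * lam / (g * a)) for i, lam, g, a, _ in lines]
    n = len(lines)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -1.0 / (K_B_EV * slope)
```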

  20. 75 FR 79091 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ...EPA is amending specific provisions in the greenhouse gas reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These final changes include generally providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources, amending data reporting requirements to clarify when different types of greenhouse gas emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and making other technical corrections and amendments.

  1. Identification of Steady and Non-Steady Gait of Human-Exoskeleton Walking System

    NASA Astrophysics Data System (ADS)

    Żur, K. K.

    2013-08-01

    In this paper a method of analysis of exoskeleton multistep locomotion is presented, using a computer with the preinstalled DChC program. The paper also presents a way to analytically calculate the "motion indicator", as well as an algorithm for calculating its two derivatives. The algorithm developed by the author processes the data collected during the investigation, and the program then presents the final results. Research into steady and non-steady multistep locomotion can be used to design two-legged robots of the DAR type and exoskeleton control systems.

  2. Elastic and viscoelastic calculations of stresses in sedimentary basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, in which we assume that the present in situ stress is a result of the entire history of the rock mass, rather than only of present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data at this location. The model serves as a predictor of natural fracture genesis, and the rock fracturing expected from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms, and that a more complete formulation, such as this study's, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.

  3. Wave vector modification of the infinite order sudden approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sachs, J.G.; Bowman, J.M.

    1980-10-15

    A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pni→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=‖nf-ni‖ is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.

  4. Wave vector modification of the infinite order sudden approximation

    NASA Astrophysics Data System (ADS)

    Sachs, Judith Grobe; Bowman, Joel M.

    1980-10-01

    A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pni→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=‖nf-ni‖ is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.

  5. A consensus reaching model for 2-tuple linguistic multiple attribute group decision making with incomplete weight information

    NASA Astrophysics Data System (ADS)

    Zhang, Wancheng; Xu, Yejun; Wang, Huimin

    2016-01-01

    The aim of this paper is to put forward a consensus reaching method for multi-attribute group decision-making (MAGDM) problems with linguistic information, in which the weight information of experts and attributes is unknown. First, some basic concepts and operational laws of the 2-tuple linguistic label are introduced. Then, a grey relational analysis method and a maximising deviation method are proposed to calculate the incomplete weight information of the experts and attributes, respectively. To eliminate conflict in the group, a weight-updating model is employed to derive the weights of experts based on their contribution to the consensus reaching process. After conflict elimination, the final group preference can be obtained, which gives the ranking of the alternatives. The model can effectively avoid the information distortion that occurs regularly in linguistic information processing. Finally, an illustrative example is given to illustrate the application of the proposed method, and a comparative analysis with existing methods is offered to show its advantages.
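As a numeric (non-linguistic) illustration of the maximising deviation idea used for attribute weights - an attribute that separates the alternatives more strongly receives more weight - one might write the following sketch. It is a simplification under stated assumptions, not the paper's 2-tuple formulation:

```python
def maximizing_deviation_weights(scores):
    """scores[i][j] = normalised rating of alternative i on attribute j.
    Each attribute's weight is proportional to the total pairwise
    deviation of the alternatives' ratings on that attribute."""
    m, n = len(scores), len(scores[0])
    dev = [sum(abs(scores[i][j] - scores[k][j])
               for i in range(m) for k in range(m))
           for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]
```

An attribute on which all alternatives score identically contributes nothing to discrimination and so receives zero weight.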

  6. A comparison study of size-specific dose estimate calculation methods.

    PubMed

    Parikh, Roshni A; Wien, Michael A; Novak, Ronald D; Jordan, David W; Klahr, Paul; Soriano, Stephanie; Ciancibello, Leslie; Berlin, Sheila C

    2018-01-01

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. The aim of this study was to compare the accuracy of thickness- vs. weight-based measurement of body size for calculating SSDE in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, method E used body weight to characterize patient size, and we compared this with the other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρc) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDIvol, there was poor correlation, ρc<0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide acceptable dose estimates for pediatric patients <30 cm in body width. Body weight provides a quick and practical method to identify conversion factors that can be used to estimate SSDE with reasonable accuracy in pediatric patients with body width ≥20 cm.
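A minimal sketch of an SSDE calculation from the effective diameter, assuming the exponential conversion-factor fit of AAPM Report 204 for the 32-cm body phantom (the coefficients below are taken from that report and should be verified against it; this is not the comparison code used in the study):

```python
import math

# Exponential fit to the AAPM Report 204 conversion-factor table for the
# 32-cm body phantom (an assumption here; check the report before use).
A_32, B_32 = 3.704369, 0.03671937

def effective_diameter(ap_cm, lat_cm):
    """Geometric mean of the anteroposterior and lateral dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_mgy, ap_cm, lat_cm):
    """Size-specific dose estimate: CTDIvol scaled by a size-dependent
    conversion factor f(d) = a * exp(-b * d)."""
    d = effective_diameter(ap_cm, lat_cm)
    return A_32 * math.exp(-B_32 * d) * ctdi_vol_mgy
```

The factor exceeds 1 for small patients, reflecting that the same CTDIvol deposits a higher dose in a smaller body than in the 32-cm reference phantom.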

  7. Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models

    PubMed Central

    Fodor, Nándor

    2012-01-01

    In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy), which is especially important for the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average relative error of 2.72 ± 1.02% (α = 0.05) in the calculated biomass. Using only readily available site-specific metadata, the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the middle, plain territories of the USA. PMID:22645451

  8. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    PubMed Central

    Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has addressed failure prediction for analog circuits, and the few existing methods do not connect feature extraction and calculation with circuit analysis, so the fault indicator (FI) calculation often lacks a sound rationale, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Because single-component faults are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, through an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From this FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the FI feature set is calculated on a more reasonable basis, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853
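The particle filter update step can be sketched as a generic bootstrap filter tracking an assumed exponential degradation rate (the model, prior and noise levels below are illustrative assumptions, not the paper's):

```python
import math
import random

def particle_filter_degradation(observations, n_particles=2000,
                                noise_sd=0.05, seed=1):
    """Bootstrap particle filter estimating the decay rate k of a fault
    indicator assumed to degrade as FI(t) = exp(-k*t) + noise.
    Returns the posterior-mean estimate of k after the last observation."""
    rng = random.Random(seed)
    # Prior over the unknown decay rate k.
    particles = [rng.uniform(0.0, 0.5) for _ in range(n_particles)]
    for t, y in enumerate(observations, start=1):
        # Weight each particle by the Gaussian likelihood of the observation.
        weights = [math.exp(-0.5 * ((y - math.exp(-k * t)) / noise_sd) ** 2)
                   for k in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample proportionally to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        # Small jitter preserves particle diversity after resampling.
        particles = [abs(k + rng.gauss(0.0, 0.002)) for k in particles]
    return sum(particles) / n_particles
```

With the rate estimate in hand, the remaining life follows by extrapolating FI(t) to a failure threshold.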

  9. The New Performance Calculation Method of Fouled Axial Flow Compressor

    PubMed Central

    Xu, Hong

    2014-01-01

    Fouling is the most important performance degradation factor, so it is necessary to accurately predict its effect on engine performance. Previous research has found it very difficult to accurately model a fouled axial flow compressor. This paper develops a new performance calculation method for fouled multistage axial flow compressors based on experimental results and operating data. The whole compressor is decomposed into two sections: the first section comprises the first 50% of the stages, which reflect the fouling level, and the second section comprises the last 50% of the stages, which are treated as clean because they accumulate fewer deposits. In this model, the performance of the first section is obtained by combining the scaling law method and a linear progression model with the traditional stage stacking method, while ambient conditions and engine configuration are also considered. The performance of the second section is calculated by an averaged infinitesimal stage method based on Reynolds' law of similarity. Finally, the model is successfully applied to predict an 8-stage axial flow compressor and the 16-stage LM2500-30 compressor. The variation of thermodynamic parameters such as pressure ratio and efficiency with operating time and stage number is analyzed in detail. PMID:25197717

  10. Theoretical discrepancy between cage size and efficient tibial tuberosity advancement in dogs treated for cranial cruciate ligament rupture.

    PubMed

    Etchepareborde, S; Mills, J; Busoni, V; Brunel, L; Balligand, M

    2011-01-01

    To calculate the difference between the desired tibial tuberosity advancement (TTA) along the tibial plateau axis and the advancement truly achieved in that direction when cage size has been determined using the method of Montavon and colleagues. To measure the effect of this difference on the final patellar tendon-tibial plateau angle (PTA) in relation to the ideal 90°. Trigonometry was used to calculate the theoretical actual advancement of the tibial tuberosity in a direction parallel to the tibial plateau that would be achieved by the placement of a cage at the level of the tibial tuberosity in the osteotomy plane of the tibial crest. The same principle was used to calculate the size of the cage that would have been required to achieve the desired advancement. The effect of the difference between the desired advancement and the actual advancement achieved on the final PTA was calculated. For a given desired advancement, the greater the tibial plateau angle (TPA), the greater the difference between the desired advancement and the actual advancement achieved. The maximum discrepancy calculated was 5.8 mm for a 12 mm advancement in a case of extreme TPA (59°). When the TPA was less than 31°, the PTA was in the range of 90° to 95°. A discrepancy does exist between the desired tibial tuberosity advancement and the actual advancement in a direction parallel to the TPA, when the tibial tuberosity is not translated proximally. Although this has an influence on the final PTA, further studies are warranted to evaluate whether this is clinically significant.
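The trigonometric relationship implied by the numbers in this abstract (a 12 mm cage at a 59° TPA leaving a 5.8 mm shortfall) is consistent with the achieved advancement parallel to the plateau being the cage width times cos(TPA). A hedged sketch of that geometry, not the paper's derivation:

```python
import math

def actual_advancement(cage_mm, tpa_deg):
    """Advancement achieved parallel to the tibial plateau when a cage of
    the stated width opens the osteotomy: cage width times cos(TPA),
    assuming the tuberosity is not translated proximally."""
    return cage_mm * math.cos(math.radians(tpa_deg))

def required_cage(desired_mm, tpa_deg):
    """Cage width needed so the advancement parallel to the plateau
    equals the desired value, under the same assumption."""
    return desired_mm / math.cos(math.radians(tpa_deg))
```

At a steep TPA the correction grows quickly: a 12 mm cage at 59° advances the tuberosity only about 6.2 mm along the plateau.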

  11. An empirical study using permutation-based resampling in meta-regression

    PubMed Central

    2012-01-01

    Background In meta-regression, as the number of trials in the analyses decreases, the risk of false positives or false negatives increases. This is partly due to the assumption of normality, which may not hold in small samples. Creating a distribution from the observed trials using permutation methods to calculate P values may allow for fewer spurious findings. Permutation has not been empirically tested in meta-regression. The objective of this study was to perform an empirical investigation exploring the differences in results for meta-analyses on a small number of trials using standard large-sample approaches versus permutation-based methods for meta-regression. Methods We isolated a sample of randomized controlled clinical trials (RCTs) for interventions that have a small number of trials (herbal medicine trials). Trials were then grouped by herbal species and condition and assessed for methodological quality using the Jadad scale, and data were extracted for each outcome. Finally, we performed meta-analyses on the primary outcome of each group of trials and meta-regression for methodological quality subgroups within each meta-analysis. We used large-sample methods and permutation methods in our meta-regression modeling. We then compared final models and final P values between methods. Results We collected 110 trials across 5 intervention/outcome pairings and 5 to 10 trials per covariate. When applying large-sample methods and permutation-based methods in our backwards stepwise regression, the covariates in the final models were identical in all cases. The P values for the covariates in the final model were larger in 78% (7/9) of the cases for permutation and identical in 22% (2/9) of the cases. Conclusions We present empirical evidence that permutation-based resampling may not change final models when using backwards stepwise regression, but may increase P values in meta-regression of multiple covariates for a relatively small number of trials. PMID:22587815
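The permutation approach for a single covariate can be sketched as follows: shuffle the outcome, refit, and count how often the permuted slope is at least as extreme as the observed one. This uses an ordinary least-squares slope rather than a weighted meta-regression, so it is a simplified stand-in for the study's method:

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

def permutation_p_value(xs, ys, n_perm=2000, seed=0):
    """Two-sided permutation P value for the regression slope."""
    rng = random.Random(seed)
    observed = abs(slope(xs, ys))
    hits = 0
    ys_perm = list(ys)
    for _ in range(n_perm):
        rng.shuffle(ys_perm)
        if abs(slope(xs, ys_perm)) >= observed:
            hits += 1
    # Add-one correction avoids reporting a P value of exactly zero.
    return (hits + 1) / (n_perm + 1)
```

Because the null distribution is built from the data themselves, no normality assumption is needed, which is the motivation discussed in the abstract.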

  12. A new leakage measurement method for damaged seal material

    NASA Astrophysics Data System (ADS)

    Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng

    2018-07-01

    In this paper, a new leakage measurement method based on the temperature field and temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate from the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which agrees well with the rate given by the heat transfer model. Finally, specimens of a tubular rubber seal with different damage shapes are used to conduct the leakage experiment, validating the correctness of this new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.

  13. The vector radiative transfer numerical model of coupled ocean-atmosphere system using the matrix-operator method

    NASA Astrophysics Data System (ADS)

    Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu

    2005-10-01

    A numerical model of the vector radiative transfer of the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is split into a set of independent equations with zenith angle as the only angular coordinate. Using the Gaussian-quadrature method, the VRTE is then transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of ocean and atmosphere are coupled in PCOART. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical calculation model and that its treatment of multiple scattering and polarization is correct. Validation against standard problems of radiative transfer in water also shows that PCOART can be used to calculate underwater radiative transfer problems. Therefore, PCOART is a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.

  14. Noise Reduction Design of the Volute for a Centrifugal Compressor

    NASA Astrophysics Data System (ADS)

    Song, Zhen; Wen, Huabing; Hong, Liangxing; Jin, Yudong

    2017-08-01

    In order to effectively control the aerodynamic noise of a compressor, this paper takes a marine exhaust turbocharger compressor as its research object. Based on different design concepts for the volute section, tongue and exit cone, six volute models were established. The finite volume method is used to calculate the flow field, while the finite element method is used for the acoustic calculation. The structural designs were compared and analyzed in three respects: noise level, isentropic efficiency and static pressure recovery coefficient. The results showed that model 1 performed best among the volute section designs, model 3 best among the tongue designs, and model 6 best among the exit cone designs.

  15. Development of a model for on-line control of crystal growth by the AHP method

    NASA Astrophysics Data System (ADS)

    Gonik, M. A.; Lomokhova, A. V.; Gonik, M. M.; Kuliev, A. T.; Smirnov, A. D.

    2007-05-01

    The possibility of applying a simplified 2D model for heat transfer calculations in crystal growth by the axial heat close to phase interface (AHP) method is discussed in this paper. A comparison with global heat transfer calculations in the CGSim software was performed to confirm the accuracy of this model. The simplified model was shown to provide adequate results for the shape of the melt-crystal interface and the temperature field in an opaque crystal (Ge) and a transparent crystal (CsI:Tl). The proposed model is used for identification of the growth setup as a control object, for synthesis of a digital controller (a PID controller at the present stage) and, finally, in on-line simulations of crystal growth control.
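The digital controller stage can be illustrated with a textbook discrete PID loop (the gains, time step and first-order plant in the test are invented for the illustration; the paper's actual controller synthesis is not described in this abstract):

```python
class DiscretePID:
    """Textbook discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*(e - e_prev)/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        """One control step: return the actuator command for this sample."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a growth-control setting the measurement would be a furnace temperature and the command a heater power; the integral term removes the steady-state offset that a proportional-only controller would leave.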

  16. Development of a design basis tornado and structural design criteria for the Nevada Test Site, Nevada. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, J.R.; Minor, J.E.; Mehta, K.C.

    1975-06-01

    In order to evaluate the ability of critical facilities at the Nevada Test Site to withstand the possible damaging effects of extreme winds and tornadoes, parameters for the effects of tornadoes and extreme winds and structural design criteria for the design and evaluation of structures were developed. The meteorological investigations conducted are summarized, and techniques used for developing the combined tornado and extreme wind risk model are discussed. The guidelines for structural design include methods for calculating pressure distributions on walls and roofs of structures and methods for accommodating impact loads from wind-driven missiles. Calculations for determining the design loads for an example structure are included. (LCL)

  17. Analysis of stationary availability factor of two-level backbone computer networks with arbitrary topology

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This paper deals with two-level backbone computer networks with arbitrary topology. A specialized method, offered by the author for calculating the stationary availability factor of two-level backbone computer networks, is discussed; it is based on Markov reliability models for a set of independent repairable elements with given failure and repair rates, together with methods of discrete mathematics. A specialized algorithm, offered by the author for analyzing network connectivity while taking into account different kinds of network equipment failures, is also presented. Finally, the paper presents an example of calculating the stationary availability factor for a backbone computer network with a given topology.
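The per-element building block of such a calculation is the stationary availability of a two-state Markov repairable element, A = μ/(λ+μ), where λ is the failure rate and μ the repair rate; path availabilities then combine under the independence assumption. A generic sketch, not the author's full arbitrary-topology algorithm:

```python
def stationary_availability(failure_rate, repair_rate):
    """Two-state Markov repairable element: A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def series_availability(elements):
    """All elements needed (a single backbone path): under independence,
    the availabilities multiply."""
    a = 1.0
    for lam, mu in elements:
        a *= stationary_availability(lam, mu)
    return a

def parallel_availability(elements):
    """Redundant links: the connection is down only if every element is
    down, so the unavailabilities multiply."""
    u = 1.0
    for lam, mu in elements:
        u *= 1.0 - stationary_availability(lam, mu)
    return 1.0 - u
```

For arbitrary topologies these series/parallel reductions are not sufficient, which is why the paper resorts to a connectivity analysis over equipment-failure combinations.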

  18. Adjacent bin stability evaluating for feature description

    NASA Astrophysics Data System (ADS)

    Nie, Dongdong; Ma, Qinyong

    2018-04-01

    A recent study improves descriptor performance by accumulating stability votes over all scale pairs to compose the local descriptor. We argue that the stability of a bin depends on the differences across adjacent scale pairs more than on the differences across all scale pairs, and a new local descriptor is composed based on this hypothesis. First, a series of SIFT descriptors is extracted at multiple scales. Then the difference of each bin across adjacent scales is calculated, and the stability value of the bin is derived from it and accumulated to compose the final descriptor. The performance of the proposed method is evaluated on two popular matching datasets and compared with other state-of-the-art works. Experimental results show that the proposed method performs satisfactorily.

  19. Maintaining a Critical Spectra within Monteburns for a Gas-Cooled Reactor Array by Way of Control Rod Manipulation

    DOE PAGES

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.; ...

    2016-10-01

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite-moderated gas-cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and the depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach - where the amount of fissile material in a set configuration is slowly altered until criticality is attained - in estimating the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that consistency in prediction of power densities as well as uranium and plutonium isotopics was mutual among the methods within the CRPE tool that predicted the critical position consistently well. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.
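The linear-fit search can be sketched as fitting keff as a function of rod position from a few criticality calculations and solving the fitted line for keff = 1 (a generic sketch; the CRPE tool's internals are not given in this abstract, and the sample points in the test are invented):

```python
def critical_rod_position(positions, keffs):
    """Least-squares line through (position, keff) points, solved for the
    position at which keff = 1.0."""
    n = len(positions)
    xbar = sum(positions) / n
    ybar = sum(keffs) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(positions, keffs))
             / sum((x - xbar) ** 2 for x in positions))
    intercept = ybar - slope * xbar
    return (1.0 - intercept) / slope
```

The second-order variant mentioned in the abstract would fit a quadratic instead and pick the physically admissible root of keff(x) = 1.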

  20. Evaluation of various thrust calculation techniques on an F404 engine

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1990-01-01

    In support of performance testing of the X-29A aircraft at NASA Ames, various thrust calculation techniques were developed and evaluated for use on the F404-GE-400 engine. The engine was thrust calibrated at NASA Lewis. Results from these tests were used to correct the manufacturer's in-flight thrust program to more accurately calculate thrust for the specific test engine. Data from these tests were also used to develop an independent, simplified thrust calculation technique for real-time thrust calculation. Comparisons were also made to thrust values predicted by the engine specification model. Results indicate uninstalled gross thrust accuracies on the order of 1 to 4 percent for the various in-flight thrust methods. The various thrust calculations are described and their usage, uncertainty, and measured accuracies are explained. In addition, the advantages of a real-time thrust algorithm for flight test use and the importance of an accurate thrust calculation to the aircraft performance analysis are described. Finally, actual data obtained from flight test are presented.

  1. 10 CFR 431.264 - Uniform test method for the measurement of flow rate for commercial prerinse spray valves.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., the water consumption flow rate of commercial prerinse spray valves. (b) Testing and Calculations. The test procedure to determine the water consumption flow rate for prerinse spray valves, expressed in... the previous step. Round the final water consumption value to one decimal place as follows: (1) A...

  2. 14 CFR Appendix B of Part 415 - Safety Review Document Outline

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Performed by Certified Personnel 4.0Flight Safety (§ 415.115) 4.1Initial Flight Safety Analysis 4.1.1Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2Radionuclide Data (where applicable) 4.3Flight Safety...

  3. 14 CFR Appendix B of Part 415 - Safety Review Document Outline

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Performed by Certified Personnel 4.0Flight Safety (§ 415.115) 4.1Initial Flight Safety Analysis 4.1.1Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2Radionuclide Data (where applicable) 4.3Flight Safety...

  4. 14 CFR Appendix B of Part 415 - Safety Review Document Outline

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Performed by Certified Personnel 4.0Flight Safety (§ 415.115) 4.1Initial Flight Safety Analysis 4.1.1Flight Safety Sub-Analyses, Methods, and Assumptions 4.1.2Sample Calculation and Products 4.1.3 Launch Specific Updates and Final Flight Safety Analysis Data 4.2Radionuclide Data (where applicable) 4.3Flight Safety...

  5. Correlation Energies from the Two-Component Random Phase Approximation.

    PubMed

    Kühn, Michael

    2014-02-11

    The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems, and is then represented as an integral over imaginary frequency using the resolution-of-the-identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.
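    For orientation, the scalar (one-component) closed-shell RPA correlation energy that the two-component derivation generalizes can be written as an integral over imaginary frequency; this is the standard textbook form, not the paper's two-component expression:

```latex
E_{\mathrm{c}}^{\mathrm{RPA}}
  = \frac{1}{2\pi}\int_{0}^{\infty}\mathrm{d}\omega\;
    \operatorname{Tr}\!\left[\ln\!\bigl(1-\chi_{0}(\mathrm{i}\omega)\,v\bigr)
    + \chi_{0}(\mathrm{i}\omega)\,v\right]
```

    where $\chi_{0}(\mathrm{i}\omega)$ is the noninteracting response function evaluated at imaginary frequency and $v$ is the Coulomb kernel; the resolution of the identity makes the trace computationally tractable.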

  6. An improved computer program for calculating the theoretical performance parameters of a propeller type wind turbine. An appendix to the final report on feasibility of using wind power to pump irrigation water (Texas). [PROP Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barieau, R.E.

    1977-03-01

    The PROP Program of Wilson and Lissaman has been modified by adding the Newton-Raphson Method and a Step Wise Search Method, as options for the method of solution. In addition, an optimization method is included. Twist angles, tip speed ratio and the pitch angle may be varied to produce maximum power coefficient. The computer program listing is presented along with sample input and output data. Further improvements to the program are discussed.
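    A minimal generic Newton-Raphson iteration of the kind added to PROP as a solution option might look like the following; this is an illustrative sketch, not the PROP code itself:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson root finder: iterate x <- x - f(x)/f'(x)
    until the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Example: solve x**3 - 2 = 0 (the cube root of 2) from a starting guess of 1.
root = newton_raphson(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
```

    Newton-Raphson converges quadratically near a simple root, which is why it is attractive as an alternative to a stepwise search when a good starting guess is available.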

  7. Study on a Multi-Frequency Homotopy Analysis Method for Period-Doubling Solutions of Nonlinear Systems

    NASA Astrophysics Data System (ADS)

    Fu, H. X.; Qian, Y. H.

    In this paper, a modification of the homotopy analysis method (HAM) is applied to study a two-degree-of-freedom coupled Duffing system. First, the procedure for calculating the two-degree-of-freedom coupled Duffing system is presented. Second, the single-periodic and double-periodic solutions are obtained by solving the constructed nonlinear algebraic equations. Finally, comparing the periodic solutions obtained by the multi-frequency homotopy analysis method (MFHAM) and the fourth-order Runge-Kutta method shows that the approximate solution agrees well with the numerical solution.
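    The fourth-order Runge-Kutta baseline such comparisons rely on can be sketched as below; the coupling constant, coefficients, and initial conditions of this coupled Duffing system are hypothetical, not those of the paper:

```python
import numpy as np

def duffing_rhs(t, y, k=0.5):
    """Hypothetical two-degree-of-freedom coupled Duffing system:
    each oscillator has a cubic restoring force plus a linear coupling."""
    x1, v1, x2, v2 = y
    a1 = -x1 - x1**3 - k * (x1 - x2)
    a2 = -x2 - x2**3 - k * (x2 - x1)
    return np.array([v1, a1, v2, a2])

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integration with n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def energy(y, k=0.5):
    """Conserved total energy of the undamped, unforced system above."""
    x1, v1, x2, v2 = y
    return ((v1**2 + v2**2) / 2 + (x1**2 + x2**2) / 2
            + (x1**4 + x2**4) / 4 + k * (x1 - x2)**2 / 2)

y_end = rk4(duffing_rhs, [1.0, 0.0, -1.0, 0.0], 0.0, 10.0, 2000)
```

    For this conservative system, near-constant total energy over the integration is a quick sanity check on the RK4 step size.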

  8. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories.
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.

  9. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories.
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
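    The momentum-based update on highly stochastic gradients can be illustrated with a toy problem; the linear dose model, the few-"histories" sampling, and the rescaling step below are hypothetical simplifications for illustration, not the authors' platform:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dose model: dose = D @ w for beamlet weights w (a stand-in for
# MC-scored dose; 50 voxels, 8 beamlets).
D = rng.random((50, 8))
target = D @ np.full(8, 0.5)           # a known achievable target dose

w = np.full(8, 1.0)                    # initial fluence weights
velocity = np.zeros_like(w)
momentum, lr = 0.9, 0.01

for _ in range(2000):
    # Highly stochastic gradient: score only a few random voxels per
    # step, mimicking estimates from very few transported histories.
    idx = rng.integers(0, 50, size=5)
    residual = D[idx] @ w - target[idx]
    grad = D[idx].T @ residual / len(idx)
    # Renormalize so noisy gradient magnitudes cannot destabilize the
    # update (a simplified analogue of the gradient rescaling step).
    grad /= np.linalg.norm(grad) + 1e-12
    # Momentum smooths the noise over many tiny, cheap updates.
    velocity = momentum * velocity - lr * grad
    w = np.maximum(w + velocity, 0.0)  # fluence must stay non-negative

final_error = np.linalg.norm(D @ w - target) / np.linalg.norm(target)
```

    The point of the sketch is that each update is cheap and noisy, yet momentum plus renormalization still drives the weights toward the target dose without ever forming the full beamlet-dose matrix solve.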

  10. Calculating the refractive index for pediatric parenteral nutrient solutions.

    PubMed

    Nelson, Scott; Barrows, Jason; Haftmann, Richard; Helm, Michael; MacKay, Mark

    2013-02-15

    The utility of refractometric analysis for calculating the refractive index (RI) of compounded parenteral nutrient solutions for pediatric patients was examined. An equation for calculating the RI of parenteral nutrient solutions was developed by chemical and linear regression analysis of 154 pediatric parenteral nutrient solutions. This equation was then validated by analyzing 1057 pediatric parenteral nutrition samples. The RI for the parenteral nutrient solutions could be calculated by summing the RI contribution for each ingredient and then adding the RI of water. The RI contribution for each ingredient was determined by multiplying the RI of the manufacturer's concentrate by the volume of the manufacturer's concentrate mixed into the parenteral nutrient solution divided by the total volume of the parenteral nutrient solution. The calculated RI was highly correlated with the measured RI (R² = 0.94, p < 0.0001). Using a range of two standard deviations (±0.0045), 99.8% of the samples fell into the comparative range. RIs of electrolytes, vitamins, and trace elements in the concentrations used did not affect the RI, similar to the findings of other studies. There was no statistical difference between the calculated RI and the measured RI in the final product of a pediatric parenteral nutrient solution. This method of quality control can be used by personnel compounding parenteral nutrient solutions to confirm the compounding accuracy of dextrose and amino acid concentrations in the final product, and a sample can be sent to the hospital laboratory for electrolyte verification.
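    The summation rule described above is simple arithmetic and can be sketched directly; the ingredient values below are hypothetical, and the per-concentrate RI values are assumed here to be increments above water (the paper's tabulated values are not reproduced):

```python
def calculated_ri(concentrates, total_volume_ml, ri_water=1.3330):
    """RI of a compounded solution per the rule described above: sum each
    concentrate's RI contribution, weighted by its volume fraction of the
    final bag, then add the RI of water."""
    contribution = sum(ri * vol / total_volume_ml for ri, vol in concentrates)
    return ri_water + contribution

# Hypothetical pediatric bag: (RI value, volume in mL) per concentrate.
mix = [(0.0550, 100.0),   # dextrose stock (assumed RI increment)
       (0.0180, 50.0)]    # amino acid stock (assumed RI increment)
ri = calculated_ri(mix, total_volume_ml=250.0)   # 1.3330 + 0.022 + 0.0036
```

    A compounding pharmacy could compare such a calculated value against the refractometer reading and flag any bag falling outside the ±0.0045 window reported in the study.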

  11. Disconnected Diagrams in Lattice QCD

    NASA Astrophysics Data System (ADS)

    Gambhir, Arjun Singh

    In this work, we present state-of-the-art numerical methods and their applications for computing a particular class of observables using lattice quantum chromodynamics (Lattice QCD), a discretized version of the fundamental theory of quarks and gluons. These observables require calculating so called "disconnected diagrams" and are important for understanding many aspects of hadron structure, such as the strange content of the proton. We begin by introducing the reader to the key concepts of Lattice QCD and rigorously define the meaning of disconnected diagrams through an example of the Wick contractions of the nucleon. Subsequently, the calculation of observables requiring disconnected diagrams is posed as the computationally challenging problem of finding the trace of the inverse of an incredibly large, sparse matrix. This is followed by a brief primer of numerical sparse matrix techniques that overviews broadly used methods in Lattice QCD and builds the background for the novel algorithm presented in this work. We then introduce singular value deflation as a method to improve convergence of trace estimation and analyze its effects on matrices from a variety of fields, including chemical transport modeling, magnetohydrodynamics, and QCD. Finally, we apply this method to compute observables such as the strange axial charge of the proton and strange sigma terms in light nuclei. The work in this thesis is innovative for four reasons. First, we analyze the effects of deflation with a model that makes qualitative predictions about its effectiveness, taking only the singular value spectrum as input, and compare deflated variance with different types of trace estimator noise. Second, the synergy between probing methods and deflation is investigated both experimentally and theoretically. 
Third, we use the synergistic combination of deflation and a graph coloring algorithm known as hierarchical probing to conduct a lattice calculation of light disconnected matrix elements of the nucleon at two different values of the lattice spacing. Finally, we employ these algorithms to do a high-precision study of strange sigma terms in light nuclei; to our knowledge this is the first calculation of its kind from Lattice QCD.
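    The core numerical task — stochastically estimating the trace of a matrix inverse, with deflation of the dominant subspace — can be sketched on a small dense stand-in; the matrix here is hypothetical, and a real lattice calculation would apply the inverse with an iterative solver rather than forming it:

```python
import numpy as np

rng = np.random.default_rng(7)

# Small symmetric positive-definite stand-in for the huge sparse
# lattice operator; here we form the inverse explicitly for clarity.
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
Ainv = np.linalg.inv(A)

# Deflation: handle the k dominant eigendirections of A^{-1} exactly,
# leaving only the low-variance remainder to the stochastic estimator.
k = 10
w, V = np.linalg.eigh(Ainv)     # ascending eigenvalues
Vk = V[:, -k:]                  # dominant eigenvectors of A^{-1}
exact_part = w[-k:].sum()       # their trace contribution, done exactly

def hutchinson(M, n_samples, deflate=None):
    """Stochastic (Hutchinson) trace estimate with optional deflation."""
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe vector
        if deflate is not None:
            z = z - deflate @ (deflate.T @ z)     # project out deflated space
        est += z @ (M @ z)
    return est / n_samples

trace_est = exact_part + hutchinson(Ainv, 200, deflate=Vk)
true_trace = np.trace(Ainv)
```

    Deflation reduces the estimator variance by removing the directions that dominate the Frobenius norm of the inverse, which is exactly the effect the thesis models from the singular value spectrum.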

  12. Disconnected Diagrams in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun

    In this work, we present state-of-the-art numerical methods and their applications for computing a particular class of observables using lattice quantum chromodynamics (Lattice QCD), a discretized version of the fundamental theory of quarks and gluons. These observables require calculating so called "disconnected diagrams" and are important for understanding many aspects of hadron structure, such as the strange content of the proton. We begin by introducing the reader to the key concepts of Lattice QCD and rigorously define the meaning of disconnected diagrams through an example of the Wick contractions of the nucleon. Subsequently, the calculation of observables requiring disconnected diagrams is posed as the computationally challenging problem of finding the trace of the inverse of an incredibly large, sparse matrix. This is followed by a brief primer of numerical sparse matrix techniques that overviews broadly used methods in Lattice QCD and builds the background for the novel algorithm presented in this work. We then introduce singular value deflation as a method to improve convergence of trace estimation and analyze its effects on matrices from a variety of fields, including chemical transport modeling, magnetohydrodynamics, and QCD. Finally, we apply this method to compute observables such as the strange axial charge of the proton and strange sigma terms in light nuclei. The work in this thesis is innovative for four reasons. First, we analyze the effects of deflation with a model that makes qualitative predictions about its effectiveness, taking only the singular value spectrum as input, and compare deflated variance with different types of trace estimator noise. Second, the synergy between probing methods and deflation is investigated both experimentally and theoretically. 
Third, we use the synergistic combination of deflation and a graph coloring algorithm known as hierarchical probing to conduct a lattice calculation of light disconnected matrix elements of the nucleon at two different values of the lattice spacing. Finally, we employ these algorithms to do a high-precision study of strange sigma terms in light nuclei; to our knowledge this is the first calculation of its kind from Lattice QCD.

  13. Quantification of pulmonary vessel diameter in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate

    2015-03-01

    Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using one of two approaches: enhancing the vessels using multi-scale Hessian matrix computation, or explicitly segmenting them using an intensity threshold. We implemented six methods to quantify the diameter: three estimating the diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and the vesselness response, respectively; and one estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of, or below, the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
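    The FWHM approach mentioned above can be sketched on a one-dimensional intensity profile; the Gaussian test profile and sub-pixel interpolation are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def fwhm_diameter(profile, pixel_mm):
    """Estimate object diameter as the Full Width at Half Maximum of a 1-D
    intensity profile across the vessel (background assumed near zero)."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linear interpolation of each half-maximum crossing for sub-pixel width.
    left = i0 - (profile[i0] - half) / (profile[i0] - profile[i0 - 1])
    right = i1 + (profile[i1] - half) / (profile[i1] - profile[i1 + 1])
    return (right - left) * pixel_mm

# Synthetic blurred vessel: a Gaussian profile with a known FWHM.
x = np.arange(101, dtype=float)
sigma = 6.0
profile = np.exp(-((x - 50.0) ** 2) / (2 * sigma ** 2))
true_fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma   # analytic Gaussian FWHM
estimate = fwhm_diameter(profile, pixel_mm=1.0)
```

    For vessels near or below the width of the scanner's point spread function, the measured profile is dominated by the blur, which is why FWHM estimates degrade on the smallest tubes in the phantom.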

  14. Cross-Cultural Adaptation and Validation of the MPAM-R to Brazilian Portuguese and Proposal of a New Method to Calculate Factor Scores

    PubMed Central

    Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.

    2017-01-01

    In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of the MPAM-R to Portuguese and its validation were performed; however, its psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated as the mean of the scores of the factor's items. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by Factor Analysis, should have greater weight in the factor score, while items with lower factor loadings should have less weight. The aims of the present study were to translate and validate a Portuguese version of the MPAM-R and to investigate the agreement between two methods used to calculate factor scores. Three hundred volunteers who had been involved in physical activity programs for at least 6 months were recruited. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model fit measures by Exploratory Factor Analysis and conceptually supports the five factors of the original proposal. When the two methods of calculating factor scores were compared, our results showed that only the "Enjoyment" and "Appearance" factors showed agreement between methods. Thus, the Portuguese version of the MPAM-R can be used in a Brazilian context, and the new proposal for calculating factor scores seems promising. PMID:28293203
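    The two scoring methods being compared can be sketched side by side; the item responses and factor loadings below are hypothetical, not values from the study:

```python
import numpy as np

def mean_score(items):
    """Conventional factor score: the plain mean of the factor's items."""
    return np.mean(items, axis=-1)

def weighted_score(items, loadings):
    """Loading-weighted alternative: each item is weighted by its factor
    loading (normalized to sum to 1), so items that load more strongly on
    the factor count more toward the score."""
    loadings = np.asarray(loadings)
    return np.asarray(items) @ (loadings / loadings.sum())

# One respondent's answers to a hypothetical 4-item factor (1-7 Likert).
answers = np.array([6.0, 5.0, 7.0, 2.0])
loads = [0.82, 0.75, 0.70, 0.40]

m = mean_score(answers)
w = weighted_score(answers, loads)
```

    In this example the low answer (2) sits on the weakest-loading item, so the weighted score ends up higher than the plain mean; the two methods agree only when loadings are roughly uniform, which is the crux of the agreement analysis.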

  15. Development of Uav Photogrammetry Method by Using Small Number of Vertical Images

    NASA Astrophysics Data System (ADS)

    Kunii, Y.

    2018-05-01

    This new and efficient photogrammetric method for unmanned aerial vehicles (UAVs) requires only a few images taken in the vertical direction at different altitudes. The method includes an original relative orientation procedure which can be applied to images captured along the vertical direction. The final orientation determines the absolute orientation for every parameter and is used for calculating the 3D coordinates of every measurement point. The measurement accuracy was checked at the UAV test site of the Japan Society for Photogrammetry and Remote Sensing. Five vertical images were taken at 70 to 90 m altitude. The 3D coordinates of the measurement points were calculated. The plane and height accuracies were ±0.093 m and ±0.166 m, respectively. These values are of higher accuracy than the results of the traditional photogrammetric method. The proposed method can measure 3D positions efficiently and would be a useful tool for construction and disaster sites and for other field surveying purposes.

  16. A new method to measure electron density and effective atomic number using dual-energy CT images

    NASA Astrophysics Data System (ADS)

    Ramos Garcia, Luis Isaac; Pérez Azorin, José Fernando; Almansa, Julio F.

    2016-01-01

    The purpose of this work is to present a new method to extract the electron density (ρe) and the effective atomic number (Zeff) from dual-energy CT images, based on a Karhunen-Loeve expansion (KLE) of the atomic cross section per electron. This method was used to calibrate a Siemens Definition CT using the CIRS phantom. The electron density and effective atomic number predicted using 80 kVp and 140 kVp images were compared with a calibration phantom and an independent set of samples. The mean absolute deviations between the theoretical and calculated values for all the samples were 1.7% ± 0.1% for ρe and 4.1% ± 0.3% for Zeff. Finally, these results were compared with another stoichiometric method. The application of the KLE to represent the atomic cross section per electron is a promising method for calculating ρe and Zeff using dual-energy CT images.

  17. Comparison of Three Methods of Calculation, Experimental and Monte Carlo Simulation in Investigation of Organ Doses (Thyroid, Sternum, Cervical Vertebra) in Radioiodine Therapy

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Ayat, Saba

    2012-01-01

    Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues; hence, dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study was to measure the absorbed doses of important organs by Monte Carlo N-Particle (MCNP) simulation and to compare the results of different dosimetry methods by performing a paired t-test. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared with SPSS software. The absorbed dose obtained by Monte Carlo simulation for 100, 150, and 175 mCi of administered 131I was found to be 388.0, 427.9, and 444.8 cGy for the thyroid; 208.7, 230.1, and 239.3 cGy for the sternum; and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The results of the paired t-test were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods: Monte Carlo simulation, MIRD calculation, and direct experimental dosimetry using TLDs. PMID:23717806

  18. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    PubMed

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted to five tracer test results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure for the Pseg method was suggested for calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
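    The N-CSTR idea underlying the Pseg method can be illustrated with first-order (Chick-Watson) kinetics; the rate constant, concentration, and residence time below are hypothetical, and disinfectant decay is ignored for simplicity:

```python
import numpy as np

def log_inactivation_ncstr(k, c, t, n):
    """Predicted log10 inactivation for first-order (Chick-Watson) kinetics
    in a chain of n equal CSTRs with total residence time t and constant
    disinfectant concentration c (simplified: no disinfectant decay)."""
    survival = (1.0 + k * c * t / n) ** (-n)
    return -np.log10(survival)

def log_inactivation_plug_flow(k, c, t):
    """Ideal plug-flow limit (n -> infinity) for comparison."""
    return k * c * t / np.log(10.0)

k, c, t = 0.5, 1.0, 10.0   # hypothetical L/(mg*min), mg/L, minutes
series = [log_inactivation_ncstr(k, c, t, n) for n in (1, 3, 10, 50)]
pfr = log_inactivation_plug_flow(k, c, t)
```

    Increasing n moves the contactor from fully mixed toward plug flow, and the predicted inactivation rises accordingly; fitting n to tracer-test data is what lets such a model represent the real hydraulics of a contactor.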

  19. Comprehensive Peptide Ion Structure Studies Using Ion Mobility Techniques: Part 1. An Advanced Protocol for Molecular Dynamics Simulations and Collision Cross-Section Calculation.

    PubMed

    Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J Valentine, Stephen

    2017-05-01

    Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here the need to increase conformer diversity and to calculate CCS values accurately is demonstrated, and the advanced methods are discussed.

  20. Comprehensive Peptide Ion Structure Studies Using Ion Mobility Techniques: Part 1. An Advanced Protocol for Molecular Dynamics Simulations and Collision Cross-Section Calculation

    NASA Astrophysics Data System (ADS)

    Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J. Valentine, Stephen

    2017-05-01

    Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here the need to increase conformer diversity and to calculate CCS values accurately is demonstrated, and the advanced methods are discussed.

  1. Theoretical investigations on molecular structure, vibrational spectra, HOMO, LUMO, NBO analysis and hyperpolarizability calculations of thiophene-2-carbohydrazide.

    PubMed

    Balachandran, V; Janaki, A; Nataraj, A

    2014-01-24

    The Fourier-transform infrared and Fourier-transform Raman spectra of thiophene-2-carbohydrazide (TCH) were recorded in the regions 4000-400 cm(-1) and 3500-100 cm(-1), respectively. Quantum chemical calculations of the energies, geometrical structure, and vibrational wavenumbers of TCH were carried out by the DFT (B3LYP) method with the 6-311++G(d,p) basis set. The difference between the observed and scaled wavenumber values of most of the fundamentals is very small. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The UV spectrum was measured in different solvents. The energies and oscillator strengths were calculated from time-dependent density functional theory (TD-DFT) results. The calculated HOMO and LUMO energies also confirm that charge transfer occurs within the molecule. The complete assignments were performed on the basis of the potential energy distribution (PED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method. Finally, theoretical FT-IR, FT-Raman, and UV spectra of the title molecule have also been constructed.

  2. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
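The core finding, that accuracy depends on the sample distribution even when the relationship between the two methods is fixed, can be reproduced in a few lines. The threshold, noise model, and distributions below are hypothetical values invented for illustration, not the paper's simulation settings.

```python
import numpy as np

def accuracy(gold, rapid, threshold):
    """Fraction of samples where both methods agree on the binary classification."""
    return np.mean((gold >= threshold) == (rapid >= threshold))

rng = np.random.default_rng(0)
threshold = 200.0  # hypothetical cut-off

# Fixed numerical relationship between the two methods: rapid = gold + noise
def rapid_from_gold(gold, rng):
    return gold + rng.normal(0.0, 10.0, size=gold.shape)

# Sample 1: values centred far from the threshold -> few borderline cases
gold_far = rng.normal(150.0, 15.0, 10_000)
# Sample 2: values centred on the threshold -> many borderline cases
gold_near = rng.normal(200.0, 15.0, 10_000)

acc_far = accuracy(gold_far, rapid_from_gold(gold_far, rng), threshold)
acc_near = accuracy(gold_near, rapid_from_gold(gold_near, rng), threshold)
```

Shifting the sample toward the decision threshold inflates the share of borderline cases and drags accuracy down, with no change in the measurement method itself.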

  3. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method whose appearance model is described by double templates based on the timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, tMHI-HSV-based calculation of candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185

  4. Application of geometric algebra for the description of polymer conformations.

    PubMed

    Chys, Pieter

    2008-03-14

    In this paper a Clifford algebra-based method is applied to calculate polymer chain conformations. The approach enables the calculation of the position of an atom in space from the bond length (l), valence angle (theta), and rotation angle (phi) of each of the preceding bonds in the chain. Hence, the set of geometrical parameters {l(i), theta(i), phi(i)} yields all the position coordinates p(i) of the main chain atoms. Moreover, the method allows the calculation of side chain conformations and the computation of rotations of chain segments. With these features it is, in principle, possible to generate conformations of any type of chemical structure. This method is proposed as an alternative to the classical approach by matrix algebra. It is more straightforward, and its final symbolic representation is considerably simpler than that of matrix algebra. Approaches for realistic modeling by means of incorporation of energetic considerations can be combined with it. This article, however, is entirely focused on showing the suitable mathematical framework on which further developments and applications can be built.
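The placement step described above (atom position from bond length, valence angle, and rotation angle of the preceding bonds) can be sketched with the conventional vector-algebra equivalent of what the geometric-algebra formulation computes. The frame construction below is one common convention, and the bond parameters in the example are illustrative.

```python
import numpy as np

def next_atom(a, b, c, l, theta, phi):
    """Place atom d given the three preceding atoms a, b, c and the internal
    coordinates: bond length l, valence angle theta at c, and dihedral phi
    (angles in radians). Standard internal-to-Cartesian conversion."""
    bc = c - b
    bc /= np.linalg.norm(bc)
    n = np.cross(b - a, bc)          # normal of the a-b-c plane
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)              # completes the local orthonormal frame
    d_local = np.array([-l * np.cos(theta),
                        l * np.sin(theta) * np.cos(phi),
                        l * np.sin(theta) * np.sin(phi)])
    return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n

# Example: place the fourth atom of a chain (illustrative values)
a, b, c = np.zeros(3), np.array([1.5, 0.0, 0.0]), np.array([2.0, 1.4, 0.0])
d = next_atom(a, b, c, l=1.53, theta=np.deg2rad(111.0), phi=np.deg2rad(60.0))
```

Iterating this step along {l(i), theta(i), phi(i)} generates all main chain coordinates p(i), which is exactly the chain-building use case the paper targets.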

  5. Relativistic nuclear magnetic resonance J-coupling with ultrasoft pseudopotentials and the zeroth-order regular approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Timothy F. G., E-mail: tim.green@materials.ox.ac.uk; Yates, Jonathan R., E-mail: jonathan.yates@materials.ox.ac.uk

    2014-06-21

    We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is allowed by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark it against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row-six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, 1J(P-Ag) and 2J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made as to improving the numerical stability of dipole perturbations using PAW.

  6. Head rice rate measurement based on concave point matching

    PubMed Central

    Yao, Yuan; Wu, Wei; Yang, Tianle; Liu, Tao; Chen, Wen; Chen, Chen; Li, Rui; Zhou, Tong; Sun, Chengming; Zhou, Yue; Li, Xinlu

    2017-01-01

    Head rice rate is an important factor affecting rice quality. In this study, an inflection point detection-based technology was applied to measure the head rice rate by combining a vibrator and a conveyor belt for bulk grain image acquisition. The edge center mode proportion method (ECMP) was applied for concave points matching in which concave matching and separation was performed with collaborative constraint conditions followed by rice length calculation with a minimum enclosing rectangle (MER) to identify the head rice. Finally, the head rice rate was calculated using the sum area of head rice to the overall coverage of rice. Results showed that bulk grain image acquisition can be realized with test equipment, and the accuracy rate of separation of both indica rice and japonica rice exceeded 95%. An increase in the number of rice did not significantly affect ECMP and MER. High accuracy can be ensured with MER to calculate head rice rate by narrowing down its relative error between real values less than 3%. The test results show that the method is reliable as a reference for head rice rate calculation studies. PMID:28128315
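The final area-ratio step can be sketched directly. The at-least-3/4-of-full-length criterion for head rice assumed below is a common convention (the abstract does not state its exact threshold), and the kernel measurements are invented.

```python
def head_rice_rate(lengths, areas, full_length, frac=0.75):
    """Head rice rate: area of kernels whose MER length is at least
    `frac` of the full grain length, over the total kernel area.
    The 3/4-length criterion is an assumed convention."""
    head_area = sum(a for l, a in zip(lengths, areas) if l >= frac * full_length)
    return head_area / sum(areas)

# Illustrative kernel measurements (length in mm, area in mm^2 from the MER)
rate = head_rice_rate([7.0, 6.0, 3.0], [10.0, 9.0, 4.0], full_length=7.0)
```

Here only the 3 mm fragment falls below the threshold, so the rate is the area of the two whole kernels over the total.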

  7. Power flows and Mechanical Intensities in structural finite element analysis

    NASA Technical Reports Server (NTRS)

    Hambric, Stephen A.

    1989-01-01

    The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations for beam, plate, and solid element types are derived. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view easily and efficiently the analysis results, applying shape optimization methods to the problem with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
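The underlying quantity in these formulations is the time-averaged power carried by a force-velocity pair; for steady-state harmonic response with complex phasors it reduces to P = 0.5 Re(F v*). A minimal single-degree-of-freedom sketch with invented values:

```python
import numpy as np

def input_power(F, v):
    """Time-averaged power for harmonic force and velocity phasors:
    P = 0.5 * Re(F * conj(v))."""
    return 0.5 * np.real(F * np.conj(v))

F = 10.0 * np.exp(1j * 0.0)         # force phasor, N
v = 0.02 * np.exp(-1j * np.pi / 3)  # velocity phasor, m/s, lagging by 60 deg
P = input_power(F, v)               # 0.5 * 10 * 0.02 * cos(60 deg) = 0.05 W
```

A post-processor applies the same product, element by element, to the force (or stress) and velocity fields that the finite element solver outputs, which is how the power flow maps described above are assembled.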

  8. Evaluation of radiographic interpretation competence of veterinary students in Finland.

    PubMed

    Koskinen, Heli I; Snellman, Marjatta

    2009-01-01

    In the evaluation of the clinical competence of veterinary students, many different definitions and methods are accepted. Given the increasing discussion of the quality of outcomes produced by newly graduated veterinarians, the methods for evaluating clinical competencies should themselves be evaluated. In this study, this was done by comparing two qualitative evaluation schemes: the well-known structure of observed learning outcome (SOLO) taxonomy and a modification of this taxonomy. A case-based final radiologic examination was selected and the investigation was performed by classifying students' outcomes. These classes were then compared with the original (quantitative) scores, and statistical calculations were performed. Significant correlations between the taxonomies (0.53) and between the modified taxonomy and the original scores (0.66) were found, and some qualitative similarities between the evaluation methods were observed. In addition, some supplements were recommended for the structure of the evaluation schemes, especially for the structure of the modified SOLO taxonomy.

  9. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
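The shooting method used above for the gliding-distance maximization can be illustrated on a toy two-point boundary value problem, far simpler than the paper's optimal control problem: y'' = -y with y(0) = 0 and y(pi/2) = 1, solved by iterating on the unknown initial slope (the exact answer is y = sin t, slope 1).

```python
import numpy as np

def shoot(slope, n_steps=2000):
    """Integrate y'' = -y from t=0 to t=pi/2 with y(0)=0, y'(0)=slope
    (classical RK4) and return y at the endpoint."""
    h = (np.pi / 2) / n_steps
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)  # state derivatives (y', v')
    for _ in range(n_steps):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def solve_bvp_by_shooting(target=1.0, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on the unknown initial slope until y(pi/2) hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

slope = solve_bvp_by_shooting()
```

The paper's vertical-plane problem has the same structure (guess unknown initial costates or controls, integrate, correct from the endpoint error), just with aircraft dynamics in place of this toy oscillator.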

  10. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel ‘V-plot’ methodology to display accuracy values

    PubMed Central

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test’s performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test’s performance against a reference gold standard. PMID:29387424

  11. Practical method for balancing airplane moments

    NASA Technical Reports Server (NTRS)

    Hamburger, H

    1924-01-01

    The present contribution is the sequel to a paper written by Messrs. R. Fuchs, L. Hopf, and H. Hamburger, and proposes to show that the methods therein contained can be practically utilized in computations. Furthermore, the calculations leading up to the diagram of moments for three airplanes, whose performance in war service gave reason for complaint, are analyzed. Finally, it is shown what conclusions can be drawn from the diagram of moments with regard to the defects in these planes and what steps may be taken to remedy them.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    Correspondence residuals arising from the discrepancy between reality and the shape model in use are analyzed for modal phase measuring deflectometry. Slope residuals are calculated from the discrepancies between the modal estimate and the measured data. Since the shape mismatch mainly occurs locally, zonal integration methods, which are well suited to local variations, are used to reconstruct the height residual for compensation. Finally, results of both simulation and experiment indicate that the proposed height compensation method is effective and can serve as a post-processing complement to modal phase measuring deflectometry.

  13. Investigation of deformation at a centrifugal compressor rotor in process of interference on shaft

    NASA Astrophysics Data System (ADS)

    Shamim, M. R.; Berezhnoi, D. V.

    2016-11-01

    In this paper, within the finite element method, we implement the “master-slave” method of contact interaction for elastically deformable bodies, taking friction in the contact zone into account. The solution of extremum problems with inequality constraints is formulated using a projection algorithm known as “the closest point projection algorithm”. Finally, an example is presented showing the calculation of the rotor of a centrifugal compressor mounted on the shaft with interference.

  14. Accurately Calculating the Solar Orientation of the TIANGONG-2 Ultraviolet Forward Spectrometer

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Li, S.

    2018-04-01

    The Ultraviolet Forward Spectrometer is a new type of spectrometer for monitoring the vertical distribution of atmospheric trace gases in the global middle atmosphere. It is on the TianGong-2 space laboratory, which was launched on 15 September 2016. The spectrometer uses a solar calibration mode to correct its irradiance. Accurately calculating the solar orientation is a prerequisite of spectral calibration for the Ultraviolet Forward Spectrometer. In this paper, a method of calculating the solar orientation is proposed according to the imaging geometric characteristics of the spectrometer. Firstly, the solar orientation in the horizontal rectangular coordinate system is calculated based on the solar declination angle algorithm proposed by Bourges and the solar hour angle algorithm proposed by Lamm. Then, the solar orientation in the sensor coordinate system is obtained through several coordinate system transformations. Finally, we calculate the solar orientation in the sensor coordinate system and evaluate its accuracy using actual orbital data of TianGong-2. The results show that the accuracy is close to that of the simulation method with STK (Satellite Tool Kit), with an error of not more than 2%. The algorithm presented requires only a few observation parameters provided by TianGong-2 rather than extensive astronomical knowledge.
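A minimal sketch of the first step (solar orientation in the horizontal coordinate system), assuming Cooper's declination approximation and the standard 15-degrees-per-hour hour angle in place of the Bourges and Lamm algorithms cited above, whose exact coefficients differ:

```python
import math

def solar_declination_deg(day_of_year):
    """Cooper's approximation (assumed here as a stand-in for the
    Bourges algorithm): declination in degrees."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def hour_angle_deg(solar_time_hours):
    """15 degrees per hour from solar noon (negative in the morning)."""
    return 15.0 * (solar_time_hours - 12.0)

def solar_elevation_deg(lat_deg, day_of_year, solar_time_hours):
    """Solar elevation from latitude, declination, and hour angle."""
    lat = math.radians(lat_deg)
    dec = math.radians(solar_declination_deg(day_of_year))
    ha = math.radians(hour_angle_deg(solar_time_hours))
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_el))

dec_equinox = solar_declination_deg(81)          # near 0 deg at the equinox
el_noon = solar_elevation_deg(0.0, 81, 12.0)     # near 90 deg at the equator
```

The paper's remaining steps rotate this horizontal-frame direction into the sensor frame using the spacecraft attitude, which is the part that needs the TianGong-2 orbital data.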

  15. Introduction to molecular topology: basic concepts and application to drug design.

    PubMed

    Gálvez, Jorge; Gálvez-Llompart, María; García-Domenech, Ramón

    2012-09-01

    This review deals with the use of molecular topology (MT) in the selection and design of new drugs. After an introduction to the methods currently used for drug design, the basic concepts of MT are defined, including examples of the calculation of topological indices, which are numerical descriptors of molecular structure. The goal is to make this calculation familiar to students and to allow a straightforward comprehension of the topic. Finally, the achievements obtained in this field are detailed so that the reader can appreciate the great interest of this approach.
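As a worked example of the kind of topological index the review introduces, the Wiener index (the sum of shortest-path bond-count distances over all atom pairs in the hydrogen-suppressed molecular graph) distinguishes the two butane isomers:

```python
from itertools import combinations
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path (bond-count) distances over
    all pairs of atoms in the hydrogen-suppressed molecular graph."""
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist
    return sum(bfs(u)[v] for u, v in combinations(adj, 2))

n_butane  = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # C-C-C-C chain
isobutane = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}  # central C, 3 methyls
w_n, w_iso = wiener_index(n_butane), wiener_index(isobutane)  # 10 and 9
```

The more branched isomer gets the smaller index, which is the kind of structure-property signal MT-based drug design correlates with experimental activity.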

  16. Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering

    NASA Astrophysics Data System (ADS)

    Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten

    2015-04-01

    The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities Q2. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring αs (MZ), the charm quark mass mc, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.

  17. Solar cell radiation handbook

    NASA Technical Reports Server (NTRS)

    Tada, H. Y.; Carter, J. R., Jr.

    1977-01-01

    Solar cell theory, how cells are manufactured, and how they are modeled mathematically are reviewed. The interaction of energetic charged-particle radiation with solar cells is discussed in detail, and the concept of 1 MeV equivalent electron fluence is introduced. The space radiation environment is described and methods of calculating equivalent fluences for the space environment are developed. A computer program was written to perform the equivalent fluence calculations, and a FORTRAN listing of the program is included. Finally, an extensive body of data detailing the degradation of solar cell electrical parameters as a function of 1 MeV electron fluence is presented.
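The 1 MeV equivalent fluence concept reduces to a damage-weighted sum over the particle spectrum; a minimal sketch with invented bin fluences and damage coefficients (real handbook coefficients depend on particle type, energy, and cell parameter):

```python
import numpy as np

def equivalent_fluence_1mev(fluences, damage_coefficients):
    """1 MeV equivalent electron fluence: each spectral component is
    weighted by its damage coefficient relative to 1 MeV electrons."""
    return float(np.dot(fluences, damage_coefficients))

# Hypothetical two-bin environment (illustrative numbers only)
fluences = np.array([1.0e13, 5.0e12])  # particles / cm^2 in each energy bin
rdc      = np.array([0.8, 2.5])        # relative damage coefficients
phi_eq = equivalent_fluence_1mev(fluences, rdc)
```

Once the equivalent fluence is known, a single measured degradation curve versus 1 MeV electron fluence predicts the cell parameter loss, which is the point of the handbook's data compilation.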

  18. QCD Resummation for Single Spin Asymmetries

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Xiao, Bo-Wen; Yuan, Feng

    2011-10-01

    We study the transverse momentum dependent factorization for single spin asymmetries in Drell-Yan and semi-inclusive deep inelastic scattering processes at one-loop order. The next-to-leading order hard factors are calculated in the Ji-Ma-Yuan factorization scheme. We further derive the QCD resummation formalisms for these observables following the Collins-Soper-Sterman method. The results are expressed in terms of the collinear correlation functions from initial and/or final state hadrons coupled with the Sudakov form factor containing all order soft-gluon resummation effects. The scheme-independent coefficients are calculated up to one-loop order.

  19. QCD Resummation for Single Spin Asymmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang Z.; Xiao, Bo-Wen; Yuan, Feng

    We study the transverse momentum dependent factorization for single spin asymmetries in Drell-Yan and semi-inclusive deep inelastic scattering processes at one-loop order. The next-to-leading order hard factors are calculated in the Ji-Ma-Yuan factorization scheme. We further derive the QCD resummation formalisms for these observables following the Collins-Soper-Sterman method. The results are expressed in terms of the collinear correlation functions from initial and/or final state hadrons coupled with the Sudakov form factor containing all order soft-gluon resummation effects. The scheme-independent coefficients are calculated up to one-loop order.

  20. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  1. Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

    Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to experimental configurations, a correspondence this work attempts to simplify. The point kernel method has not found widespread practical use for neutron shielding calculations because of the complex neutron transport behavior through shielding materials (i.e., the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content has been conducted. The analysis showed a significant impact of varying water content in concrete on both neutron transmission factors and total buildup factors. Finally, support vector regression, a machine learning technique, was employed to build a model from the calculated data for predicting the buildup factors. The developed model can predict most of the data within 20% relative error.
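As a stand-in for the support vector regression model mentioned above, the same kernel-smoothing idea can be sketched with a tiny RBF kernel ridge regressor (not SVR, and fitted to invented data, not the NBS04 results; the gamma and regularization values are arbitrary):

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, gamma=1.0, lam=1e-6):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# Synthetic, illustrative buildup factors vs. shield thickness (mfp)
X = np.linspace(0.5, 10.0, 20)[:, None]
y = 1.0 + 0.8 * X[:, 0] ** 1.2        # smooth monotone toy data
predict = fit_krr(X, y, gamma=0.5)
y_hat = predict(X)
```

A trained model of this kind interpolates between rigorously calculated table entries, which is the role the paper assigns to its SVR model within its stated 20% error band.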

  2. Effect of costing methods on unit cost of hospital medical services.

    PubMed

    Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya

    2007-04-01

    To explore the variance of unit costs of hospital medical services due to the different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of the unit costs of medical services as a base case. After that, the unit costs were re-calculated using various methods. Finally, the variations of the results obtained from the various methods relative to the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (a combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% results in a 4.76% increase of the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria for indirect cost allocation, the unit cost of medical services changed by a range of -6.99% to +4.05%. We also explored the effect on the unit cost of medical services in one department: various costing methods, including departmental allocation methods, produced results ranging between -85% and +32% relative to the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and complicated ones. For the departmental allocation methods, the micro-costing method was the most suitable at the time of the study. These different costing methods should be standardized and developed as guidelines, since they could affect implementation of the national health insurance scheme and health financing management.
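The accounting-based versus economic-based capital annualization contrast above can be sketched directly; the purchase price, useful life, and discount rates below are illustrative, not the hospital's data:

```python
def annualized_cost_accounting(price, useful_life_years):
    """Accounting approach: straight-line average of the purchase price."""
    return price / useful_life_years

def annualized_cost_economic(price, useful_life_years, discount_rate):
    """Economic approach: equivalent annual cost via the annualization
    (capital recovery) factor r / (1 - (1 + r)**-n), which embeds both
    depreciation and interest on the undepreciated portion."""
    r, n = discount_rate, useful_life_years
    return price * r / (1.0 - (1.0 + r) ** -n)

price = 100_000.0
acc  = annualized_cost_accounting(price, 5)       # 20,000 per year
eco3 = annualized_cost_economic(price, 5, 0.03)   # ~21,835 per year
eco6 = annualized_cost_economic(price, 5, 0.06)   # ~23,740 per year
```

The economic figure always exceeds the accounting one for a positive discount rate, and rises with the rate, which matches the direction of the differences reported in the abstract.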

  3. Speckle reduction in digital holography with resampling ring masks

    NASA Astrophysics Data System (ADS)

    Zhang, Wenhui; Cao, Liangcai; Jin, Guofan

    2018-01-01

    One-shot digital holographic imaging has the advantages of high stability and low temporal cost. However, the reconstruction is affected by speckle noise. A resampling ring-mask method in the spectrum domain is proposed for speckle reduction. The useful spectrum of one hologram is divided into several sub-spectra by ring masks. In the reconstruction, the angular spectrum transform, which involves no approximation, is applied to guarantee calculation accuracy. N reconstructed amplitude images are calculated from the corresponding sub-spectra. Thanks to the random distribution of speckle, superimposing these N uncorrelated amplitude images leads to a final reconstructed image with lower speckle noise. Normalized relative standard deviation values of the reconstructed image are used to evaluate the speckle reduction. The effect of the method on the spatial resolution of the reconstructed image is also quantitatively evaluated. Experimental and simulation results prove the feasibility and effectiveness of the proposed method.
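The ring-mask averaging idea can be sketched as follows. This simplified version applies the masks to the spectrum of a sample field and inverts with a plain FFT, omitting the hologram demodulation and the angular-spectrum propagation step of the actual reconstruction; the ring spacing is an arbitrary uniform choice.

```python
import numpy as np

def ring_mask_amplitudes(field, n_rings):
    """Split the spectrum into concentric ring masks, reconstruct one
    amplitude image per sub-spectrum, and average the uncorrelated
    amplitudes (speckle averages down; simplified sketch)."""
    F = np.fft.fftshift(np.fft.fft2(field))
    ny, nx = field.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)          # radius in the spectrum
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    amps = []
    for i in range(n_rings):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        sub = np.fft.ifft2(np.fft.ifftshift(F * mask))
        amps.append(np.abs(sub))
    return np.mean(amps, axis=0)

demo = ring_mask_amplitudes(np.random.default_rng(1).standard_normal((32, 32)), 4)
```

Because each sub-spectrum yields an amplitude image with statistically independent speckle, averaging N of them reduces the speckle contrast, at the cost of the resolution loss the paper quantifies.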

  4. Research on response spectrum of dam based on scenario earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoliang; Zhang, Yushan

    2017-10-01

    Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. Firstly, the potential source of greatest contribution to the site is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Secondly, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum obtained by the scenario earthquake method is lower than the probability-consistent response spectrum obtained by the PSHA method. The empirical analysis shows that the scenario earthquake response spectrum accounts for both the probability level and structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods; it is readily accepted and provides a basis for the seismic design of hydraulic engineering projects.

  5. Remotely detected vehicle mass from engine torque-induced frame twisting

    DOE PAGES

    McKay, Troy R.; Salvaggio, Carl; Faulring, Jason W.; ...

    2017-06-08

    Determining the mass of a vehicle from ground-based passive sensor data is important for many traffic safety requirements. This paper presents a method for calculating the mass of a vehicle using ground-based video and acoustic measurements. By assuming that no energy is lost in the conversion, the mass of a vehicle can be calculated from the rotational energy generated by the vehicle’s engine and the linear acceleration of the vehicle over a period of time. The amount of rotational energy being output by the vehicle’s engine can be calculated from its torque and angular velocity. This model relates remotely observed, engine torque-induced frame twist to engine torque output using the vehicle’s suspension parameters and engine geometry. The angular velocity of the engine is extracted from the acoustic emission of the engine, and the linear acceleration of the vehicle is calculated by remotely observing the position of the vehicle over time. This method combines these three dynamic signals (engine torque-induced frame twist, engine angular velocity, and the vehicle’s linear acceleration) and three vehicle-specific scalar parameters into an expression that describes the mass of the vehicle. Finally, this method was tested on a semitrailer truck, and the results demonstrate a correlation of 97.7% between calculated and true vehicle mass.
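Once torque, engine angular velocity, and vehicle speed histories are in hand, the lossless energy balance stated above gives the mass as m = 2 * integral(torque * omega) dt / (v_end^2 - v_start^2). The synthetic signals below are invented for a consistency check, not measured data, and ignore the frame-twist and acoustic extraction steps.

```python
import numpy as np

def vehicle_mass(torque, omega, speed, t):
    """Mass from the lossless energy balance:
    integral(torque * omega) dt = 0.5 * m * (v_end**2 - v_start**2)."""
    power = torque * omega
    energy = float(np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t)))
    return 2.0 * energy / (speed[-1] ** 2 - speed[0] ** 2)

# Consistency check against a synthetic 10,000 kg vehicle
m_true = 10_000.0
t = np.linspace(0.0, 10.0, 1001)
v = 5.0 + 1.0 * t                  # constant 1 m/s^2 acceleration
power = m_true * v * 1.0           # lossless: P = m * a * v
omega = np.full_like(t, 150.0)     # engine speed, rad/s
torque = power / omega
m_est = vehicle_mass(torque, omega, v, t)
```

In the paper, the torque series comes from the frame-twist observation, omega from the acoustic signature, and v from video tracking; the drivetrain losses the sketch ignores are one source of the residual error in the reported 97.7% correlation.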

  6. Remotely detected vehicle mass from engine torque-induced frame twisting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKay, Troy R.; Salvaggio, Carl; Faulring, Jason W.

    Determining the mass of a vehicle from ground-based passive sensor data is important for many traffic safety requirements. This paper presents a method for calculating the mass of a vehicle using ground-based video and acoustic measurements. By assuming that no energy is lost in the conversion, the mass of a vehicle can be calculated from the rotational energy generated by the vehicle’s engine and the linear acceleration of the vehicle over a period of time. The amount of rotational energy being output by the vehicle’s engine can be calculated from its torque and angular velocity. This model relates remotely observed, engine torque-induced frame twist to engine torque output using the vehicle’s suspension parameters and engine geometry. The angular velocity of the engine is extracted from the acoustic emission of the engine, and the linear acceleration of the vehicle is calculated by remotely observing the position of the vehicle over time. This method combines these three dynamic signals (engine torque-induced frame twist, engine angular velocity, and the vehicle’s linear acceleration) and three vehicle-specific scalar parameters into an expression that describes the mass of the vehicle. Finally, this method was tested on a semitrailer truck, and the results demonstrate a correlation of 97.7% between calculated and true vehicle mass.

  7. Ab initio calculations of the magnetic properties of TM (Ti, V)-doped zinc-blende ZnO

    NASA Astrophysics Data System (ADS)

    Goumrhar, F.; Bahmad, L.; Mounkachi, O.; Benyoussef, A.

    2018-01-01

    In order to identify suitable materials for spintronic devices, this study evaluates the magnetic properties of titanium- and vanadium-doped zinc-blende ZnO from first principles. The calculations of these properties are based on the Korringa-Kohn-Rostoker (KKR) method combined with the coherent potential approximation (CPA), using the local density approximation (LDA). We have calculated and discussed the density of states (DOS) in the energy phase diagrams for different concentration values of the dopants. We have also investigated the magnetic and half-metallic properties of these doped compounds. Additionally, we showed the mechanism of the exchange coupling interaction. Finally, we estimated and studied the Curie temperature for different concentrations.

  8. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in a close-geometry gamma spectroscopy setup. It included the MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and good agreement was found between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test in which the activities of various radionuclides were calculated. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
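The role of the CSF in the activity calculation can be illustrated with the generic gamma-spectrometry relation; the multiplicative placement of the correction factor follows one common convention and is an assumption here, not a formula quoted from the paper:

```python
def activity_bq(net_counts, live_time_s, efficiency, gamma_intensity, csf):
    """Activity (Bq) from a net peak area, with the coincidence summing
    correction factor (CSF) applied multiplicatively (assumed convention)."""
    return net_counts * csf / (efficiency * gamma_intensity * live_time_s)
```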

  9. An Alternative Lunar Ephemeris Model for On-Board Flight Software Use

    NASA Technical Reports Server (NTRS)

    Simpson, David G.

    1998-01-01

    In calculating the position vector of the Moon in on-board flight software, one often begins by using a series expansion to calculate the ecliptic latitude and longitude of the Moon, referred to the mean ecliptic and equinox of date. One then performs a reduction for precession, followed by a rotation of the position vector from the ecliptic plane to the equator, and a transformation from spherical to Cartesian coordinates before finally arriving at the desired result: equatorial J2000 Cartesian components of the lunar position vector. An alternative method is developed here in which the equatorial J2000 Cartesian components of the lunar position vector are calculated directly by a series expansion, saving valuable on-board computer resources.
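The alternative scheme amounts to evaluating each equatorial J2000 Cartesian component as its own truncated trigonometric series, skipping the precession and rotation steps entirely. A sketch with placeholder coefficients (the amplitudes, rates, and phases below are illustrative, not the real lunar series):

```python
import math

# Placeholder terms (amplitude_km, rate_rad_per_day, phase_rad) for one
# component; a flight implementation would carry a fitted table per axis.
X_TERMS = [(383000.0, 0.2300, 1.0), (21000.0, 0.4600, 0.5)]

def component_km(terms, t_days):
    """Evaluate one Cartesian component directly as sum of A*sin(w*t + phi)."""
    return sum(a * math.sin(w * t_days + p) for a, w, p in terms)
```

Evaluating three such sums per epoch replaces the latitude/longitude series, the precession reduction, and the two coordinate transformations of the traditional pipeline.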

  10. 75 FR 66347 - Brass Sheet and Strip From Germany: Amended Final Results of Antidumping Duty Administrative Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-28

    ... included as BPI in the Cost and Sales Calculation Memorandum for reference.\\1\\ There was no substantive... Halper, titled ``Cost of Production and Constructed Value Calculation Adjustments for the Final Results... October 12, 2010 (``Cost and Sales Calculation Memorandum''). On April 13, 2010, the Department of...

  11. The Radial Distribution Function (RDF) of Amorphous Selenium Obtained through the Vacuum Evaporator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guda, Bardhyl; Dede, Marie

    2010-01-21

    After amorphous selenium is obtained through the vacuum evaporator, the relevant diffraction intensity is recorded and processed. The interference function is then calculated and the radial distribution function is determined. Two methods are used to determine these functions; these are compared with each other, and results for the RDF of amorphous selenium are finally obtained.

  12. Eddy current sensing of intermetallic composite consolidation

    NASA Technical Reports Server (NTRS)

    Dharmasena, Kumar P.; Wadley, Haydn N. G.

    1991-01-01

    A finite element method is used to explore the feasibility and optimization of a probe-type eddy current sensor for determining the thickness of plate specimens during a hot isostatic pressing cycle. The dependence of the sensor's impedance upon sample-sensor separation in the high frequency limit is calculated, and factors that maximize sensitivity to the final stages of densification are identified.

  13. Using polarized Raman spectroscopy and the pseudospectral method to characterize molecular structure and function

    NASA Astrophysics Data System (ADS)

    Weisman, Andrew L.

    Electronic structure calculation is an essential approach for determining the structure and function of molecules and is therefore of critical interest to physics, chemistry, and materials science. Of the various algorithms for calculating electronic structure, the pseudospectral method is among the fastest. However, the trade-off for its speed is more up-front programming and testing, and as a result, applications using the pseudospectral method currently lag behind those using other methods. In Part I of this dissertation, we first advance the pseudospectral method by optimizing it for an important application, polarized Raman spectroscopy, which is a well-established tool used to characterize molecular properties. This is an application of particular importance because often the easiest and most economical way to obtain the polarized Raman spectrum of a material is to simulate it; thus, utilization of the pseudospectral method for this purpose will accelerate progress in the determination of molecular properties. We demonstrate that our implementation of Raman spectroscopy using the pseudospectral method results in spectra that are just as accurate as those calculated using the traditional analytic method, and in the process, we derive the most comprehensive formulation to date of polarized Raman intensity formulas, applicable to both crystalline and isotropic systems. Next, we apply our implementation to determine the orientations of crystalline oligothiophenes -- a class of materials important in the field of organic electronics -- achieving excellent agreement with experiment and demonstrating the general utility of polarized Raman spectroscopy for the determination of crystal orientation. In addition, we derive from first-principles a method for using polarized Raman spectra to establish unambiguously whether a uniform region of a material is crystalline or isotropic. 
Finally, we introduce free, open-source software that allows a user to determine any of a number of polarized Raman properties of a sample given common output from electronic structure calculations. In Part II, we apply the pseudospectral method to other areas of scientific importance requiring a deeper understanding of molecular structure and function. First, we use it to accurately determine the frequencies of vibrational tags on biomolecules that can be detected in real-time using stimulated Raman spectroscopy. Next, we evaluate the performance of the pseudospectral method for calculating excited-state energies and energy gradients of large molecules -- another new application of the pseudospectral method -- showing that the calculations run much more quickly than those using the analytic method. Finally, we use the pseudospectral method to simulate the bottleneck process of a solar cell used for water splitting, a promising technology for converting the sun's energy into hydrogen fuel. We apply the speed of the pseudospectral method by modeling the relevant part of the system as a large, explicitly passivated titanium dioxide nanoparticle and simulating it realistically using hybrid density functional theory with an implicit solvent model, yielding insight into the physical nature of the rate-limiting step of water splitting. These results further validate the particularly fast and accurate simulation methodologies used, opening the door to efficient and realistic cluster-based, fully quantum-mechanical simulations of the bottleneck process of a promising technology for clean solar energy conversion. Taken together, we show how both polarized Raman spectroscopy and the pseudospectral method are effective tools for analyzing the structure and function of important molecular systems.

  14. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient is travel time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differentiating local and regional structures. Whenever possible, travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path is quick to calculate, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of a Python module, pySeismicFMM, is presented: a simple and very efficient tool for calculating travel time from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
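A minimal first-order fast-marching kernel on a regular 2D grid conveys the idea of time propagation in space (a generic textbook sketch, not code from pySeismicFMM):

```python
import heapq
import math
import numpy as np

def fmm_travel_time(velocity, src, h=1.0):
    """First-order Fast Marching travel times on a regular 2D grid.
    velocity: 2D array of speeds; src: (row, col) of the source; h: spacing."""
    ny, nx = velocity.shape
    t = np.full((ny, nx), np.inf)
    known = np.zeros((ny, nx), dtype=bool)
    t[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        _, (i, j) = heapq.heappop(heap)
        if known[i, j]:
            continue
        known[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not known[ni, nj]:
                # upwind neighbor values along each axis
                a = min(t[ni - 1, nj] if ni > 0 else np.inf,
                        t[ni + 1, nj] if ni < ny - 1 else np.inf)
                b = min(t[ni, nj - 1] if nj > 0 else np.inf,
                        t[ni, nj + 1] if nj < nx - 1 else np.inf)
                f = h / velocity[ni, nj]  # slowness times grid spacing
                if abs(a - b) < f:        # quadratic (two-sided) update
                    new = 0.5 * (a + b + math.sqrt(2.0 * f * f - (a - b) ** 2))
                else:                     # one-sided update
                    new = min(a, b) + f
                if new < t[ni, nj]:
                    t[ni, nj] = new
                    heapq.heappush(heap, (new, (ni, nj)))
    return t
```

One call propagates times from the source to every grid cell, which is why the cost is paid once per source rather than once per source-receiver pair.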

  15. Welding studs detection based on line structured light

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Wang, Jia; Wang, Wen; Xiao, Zhitao

    2018-01-01

    The quality of welding studs is significant for the installation and localization of car components in the process of automobile general assembly. A welding stud detection method based on line structured light is proposed. First, an adaptive threshold is designed to calculate the binary images. Then, the light stripes of the image are extracted after skeleton line extraction and morphological filtering, and the direction vector of the main light stripe is calculated using the length of the light stripe. Finally, the gray projections along the orientation of the main light stripe and the vertical orientation of the main light stripe are computed to obtain gray-projection curves, which are used to detect the studs. Experimental results demonstrate that the error rate of the proposed method is lower than 0.1%, making it applicable to automobile manufacturing.
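The axis-aligned special case of the gray-projection step is simple to state; projecting along an arbitrary stripe direction additionally requires rotating the image or the sampling grid (illustrative sketch, not the paper's implementation):

```python
import numpy as np

def gray_projections(img):
    """Gray-level projection curves along the two image axes: summing
    gray values across columns gives the row profile, and vice versa."""
    img = np.asarray(img, dtype=float)
    return img.sum(axis=1), img.sum(axis=0)
```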

  16. W -Boson Production in Association with a Jet at Next-to-Next-to-Leading Order in Perturbative QCD

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Focke, Christfried; Liu, Xiaohui; Petriello, Frank

    2015-08-01

    We present the complete calculation of W -boson production in association with a jet in hadronic collisions through next-to-next-to-leading order (NNLO) in perturbative QCD. To cancel infrared divergences, we discuss a new subtraction method that exploits the fact that the N -jettiness event-shape variable fully captures the singularity structure of QCD amplitudes with final-state partons. This method holds for processes with an arbitrary number of jets and is easily implemented into existing frameworks for higher-order calculations. We present initial phenomenological results for W +jet production at the LHC. The NNLO corrections are small and lead to a significantly reduced theoretical error, opening the door to precision measurements in the W +jet channel at the LHC.

  17. I-cored Coil Probe Located Above a Conductive Plate with a Surface Hole

    NASA Astrophysics Data System (ADS)

    Tytko, Grzegorz; Dziczkowski, Leszek

    2018-02-01

    This work presents an axially symmetric mathematical model of an I-cored coil placed over a two-layered conductive material with a cylindrical surface hole. The problem was divided into regions for which the magnetic vector potential of a filamentary coil was established applying the truncated region eigenfunction expansion method. Then the final formula was developed to calculate impedance changes for a cylindrical coil with reference to both the air and to a material with no hole. The influence of a surface flaw in the conductive material on the components of coil impedance was examined. Calculations were made in Matlab for a hole with various radii and the results thereof were verified with the finite element method in COMSOL Multiphysics package. Very good consistency was achieved in all cases.

  18. 40 CFR 90.709 - Calculation and reporting of test results.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... expressed to one additional significant figure. (b) Final test results are calculated by summing the initial... applicable standard expressed to one additional significant figure. (c) The final deteriorated test results...

  19. 40 CFR 90.709 - Calculation and reporting of test results.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... expressed to one additional significant figure. (b) Final test results are calculated by summing the initial... applicable standard expressed to one additional significant figure. (c) The final deteriorated test results...

  20. A rough set-based measurement model study on high-speed railway safety operation.

    PubMed

    Hu, Qizhou; Tan, Minjia; Lu, Huapu; Zhu, Yun

    2018-01-01

    Aiming to solve the safety problems of high-speed railway operation and management, a new method urgently needs to be constructed on the basis of rough set theory and uncertainty measurement theory. The method should carefully consider every factor of high-speed railway operation that contributes to the measurement indexes of its safe operation. After analyzing the factors that influence high-speed railway operation safety in detail, a rough measurement model is finally constructed to describe the operation process. Based on the above considerations, this paper redistricts the safety influence factors of high-speed railway operation into 16 measurement indexes, which include staff, vehicle, equipment, and environment indexes. The paper also provides another reasonable and effective theoretical method to solve the safety problems of multiple-attribute measurement in high-speed railway operation. Analyzing the operation data of 10 pivotal railway lines in China, this paper uses both the rough set-based measurement model and a value function model (a model for calculating the safety value) to calculate the operation safety value. The calculation results show that the safety-value curve of the proposed method has smaller error and greater stability than that of the value function method, which verifies its feasibility and effectiveness.

  1. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbours search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature across the damage region. The extracted damage region is divided into three-prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in further research.
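The geometric-segmentation step reduces to summing triangular prism volumes: each surface triangle's (x, y) footprint area times its mean vertex depth, with mass obtained from an assumed material density. A sketch (names and units are illustrative):

```python
def prism_volume(xy, depths):
    """Volume of one three-prism element: footprint area of the surface
    triangle times the mean vertex depth below the nominal rail surface."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return area * sum(depths) / 3.0

def damage_mass(elements, density):
    """Superpose element volumes (m^3) and convert to mass via density (kg/m^3)."""
    return density * sum(prism_volume(xy, d) for xy, d in elements)
```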

  2. Comparative Study on Prediction Effects of Short Fatigue Crack Propagation Rate by Two Different Calculation Methods

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao

    2017-05-01

    To describe the complicated nonlinear process of the fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on the replica fatigue short crack test with nine smooth funnel-shaped specimens and the observation of the replica films according to the effective short fatigue cracks principle. Due to the fast decay and the nonlinear approximation ability of wavelet analysis, the self-learning ability of neural network, and the macroscopic searching and global optimization of genetic algorithm, the genetic wavelet neural network can reflect the implicit complex nonlinear relationship when considering multi-influencing factors synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared by the Genetic Wavelet Neural Network. The simulation results show that Genetic Wavelet Neural Network is a rational and available method for studying the evolution behavior of fatigue short crack propagation rate. Meanwhile, a traditional data fitting method for a short crack growth model is also utilized for fitting the test data. It is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects by these two methods is interpreted.

  3. Estimating plant distance in maize using Unmanned Aerial Vehicle (UAV).

    PubMed

    Zhang, Jinshui; Basso, Bruno; Price, Richard F; Putman, Gregory; Shuai, Guanyuan

    2018-01-01

    Distances between rows and plants are essential parameters that affect the final grain yield in row crops. This paper presents the results of research intended to develop a novel method to quantify the distance between maize plants at field scale using an Unmanned Aerial Vehicle (UAV). Using this method, we can recognize maize plants as objects and calculate the distance between plants. We initially developed our method by training an algorithm in an indoor facility with plastic corn plants. Then, the method was scaled up and tested in a farmer's field with maize plant spacing that exhibited natural variation. The results of this study demonstrate that it is possible to precisely quantify the distance between maize plants. We found that the accuracy of the measurement of the distance between maize plants depended on the height above ground level at which the UAV imagery was taken. This study provides an innovative approach to quantify plant-to-plant variability and, thereby, to improve final crop yield estimates.
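The pixel-to-ground conversion underlying such UAV measurements can be sketched with a pinhole-camera ground sample distance, which also shows why the result depends on flight altitude; the camera parameters below are illustrative assumptions, not the study's hardware:

```python
import math

def plant_spacing_m(p1_px, p2_px, altitude_m, focal_mm, pixel_um):
    """Ground distance between two detected plant centers, using the
    ground sample distance implied by flight altitude (pinhole model;
    parameter names and values are illustrative assumptions)."""
    gsd_m = altitude_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)  # metres per pixel
    d_px = math.hypot(p2_px[0] - p1_px[0], p2_px[1] - p1_px[1])
    return d_px * gsd_m
```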

  4. Improved methods for distribution loss evaluation. Volume 1: analytic and evaluative techniques. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flinn, D.G.; Hall, S.; Morris, J.

    This volume describes the background research, the application of the proposed loss evaluation techniques, and the results. The research identified present loss calculation methods as appropriate, provided care was taken to represent the various system elements in sufficient detail. The literature search of past methods and typical data revealed that extreme caution should be taken in using typical values (load factor, etc.) to ensure that all factors are referred to the same time base (daily, weekly, etc.). The performance of the method (and computer program) proposed in this project was determined by comparison of results with a rigorous evaluation of losses on the Salt River Project system. This rigorous evaluation used statistical modeling of the entire system as well as explicit enumeration of all substation and distribution transformers. Further tests were conducted at Public Service Electric and Gas of New Jersey to check the appropriateness of the methods in a northern environment. Finally, sensitivity tests identified the data elements whose inaccuracy would most affect the determination of losses using the method developed in this project.

  5. Calculation methods to perform mass balances of micropollutants in sewage treatment plants. application to pharmaceutical and personal care products (PPCPs).

    PubMed

    Carballa, Marta; Omil, Francisco; Lema, Juan M

    2007-02-01

    Two different methods are proposed to perform the mass balance calculations of micropollutants in sewage treatment plants (STPs). The first method uses the measured data in both liquid and sludge phase and the second one uses the solid-water distribution coefficient (Kd) to calculate the concentrations in the sludge from those measured in the liquid phase. The proposed methodologies facilitate the identification of the main mechanisms involved in the elimination of micropollutants. Both methods are applied for determining mass balances of selected pharmaceutical and personal care products (PPCPs) and their results are discussed. In that way, the fate of 2 musks (galaxolide and tonalide), 3 pharmaceuticals (ibuprofen, naproxen, and sulfamethoxazole), and 2 natural estrogens (estrone and 17beta-estradiol) has been investigated along the different water and sludge treatment units of a STP. Ibuprofen, naproxen, and sulfamethoxazole are biologically degraded in the aeration tank (50-70%), while musks are equally sorbed to the sludge and degraded. In contrast, estrogens are not removed in the STP studied. About 40% of the initial load of pharmaceuticals passes through the plant unaltered, with the fraction associated to sludge lower than 0.5%. In contrast, between 20 and 40% of the initial load of musks leaves the plant associated to solids, with less than 10% present in the final effluent. The results obtained show that the conclusions concerning the efficiency of micropollutants removal in a particular STP may be seriously affected by the calculation method used.
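The paper's second calculation method can be sketched in two lines: the sorbed concentration is inferred from the liquid-phase measurement via Kd, and a degraded fraction then follows from the mass balance (variable names and units are illustrative):

```python
def sludge_conc(kd_l_per_kg, c_liquid_ug_per_l):
    """Second method above: sorbed concentration (ug per kg dry sludge)
    estimated from the liquid-phase concentration via the Kd coefficient."""
    return kd_l_per_kg * c_liquid_ug_per_l

def degraded_fraction(load_in, load_effluent, load_sludge):
    """Mass balance: whatever neither leaves with the effluent nor sorbs
    to the sludge is attributed to (bio)degradation."""
    return 1.0 - (load_effluent + load_sludge) / load_in
```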

  6. ``Dressing'' lines and vertices in calculations of matrix elements with the coupled-cluster method and determination of Cs atomic properties

    NASA Astrophysics Data System (ADS)

    Derevianko, Andrei; Porsev, Sergey G.

    2005-03-01

    We consider evaluation of matrix elements with the coupled-cluster method. Such calculations formally involve an infinite number of terms, and we devise a method of partial summation (dressing) of the resulting series. Our formalism is built upon an expansion of the product C†C of cluster amplitudes C into a sum of n -body insertions. We consider two types of insertions: particle (hole) line insertion and two-particle (two-hole) random-phase-approximation-like insertion. We demonstrate how to “dress” these insertions and formulate iterative equations. We illustrate the dressing equations in the case when the cluster operator is truncated at single and double excitations. Using univalent systems as an example, we upgrade coupled-cluster diagrams for matrix elements with the dressed insertions and highlight a relation to pertinent fourth-order diagrams. We illustrate our formalism with relativistic calculations of the hyperfine constant A(6s) and the 6s1/2-6p1/2 electric-dipole transition amplitude for the Cs atom. Finally, we augment the truncated coupled-cluster calculations with otherwise omitted fourth-order diagrams. The resulting analysis for Cs is complete through the fourth order of many-body perturbation theory and reveals an important role of triple and disconnected quadruple excitations.

  7. Capture and dissociation in the complex-forming CH + H2 → CH2 + H, CH + H2 reactions.

    PubMed

    González, Miguel; Saracibar, Amaia; Garcia, Ernesto

    2011-02-28

    The rate coefficients for the capture process CH + H(2)→ CH(3) and the reactions CH + H(2)→ CH(2) + H (abstraction), CH + H(2) (exchange) have been calculated in the 200-800 K temperature range, using the quasiclassical trajectory (QCT) method and the most recent global potential energy surface. The reactions, which are of interest in combustion and in astrochemistry, proceed via the formation of long-lived CH(3) collision complexes, and the three H atoms become equivalent. QCT rate coefficients for capture are in quite good agreement with experiments. However, an important zero point energy (ZPE) leakage problem occurs in the QCT calculations for the abstraction, exchange and inelastic exit channels. To account for this issue, a pragmatic but accurate approach has been applied, leading to a good agreement with experimental abstraction rate coefficients. Exchange rate coefficients have also been calculated using this approach. Finally, calculations employing QCT capture/phase space theory (PST) models have been carried out, leading to similar values for the abstraction rate coefficients as the QCT and previous quantum mechanical capture/PST methods. This suggests that QCT capture/PST models are a good alternative to the QCT method for this and similar systems.

  8. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †

    PubMed Central

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-01-01

    An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC’s interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It was also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006

  9. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil.

    PubMed

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-03-13

    An innovative array of magnetic coils (the discrete Rogowski coil-RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It was also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts, whereas the normalization had little influence. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration method. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. These results aid decision makers in understanding the differences among the integration methods and, finally, help them select the method most appropriate for the goal at hand.
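The normalization-and-weighting integration discussed above reduces, in generic form, to a weighted sum of normalized category impacts (a schematic sketch; the reference values and weights are illustrative):

```python
def single_index(impacts, references, weights):
    """Integrate characterized impacts into one index: normalize each
    category by its reference value, then apply panel-derived weights."""
    return sum(w * (x / r) for x, r, w in zip(impacts, references, weights))
```

The choice of references is exactly where internal (study-specific totals) and external (e.g. regional totals) normalization diverge, which is why the two can rank the same buildings differently.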

  11. Multi-charge-state molecular dynamics and self-diffusion coefficient in the warm dense matter regime

    NASA Astrophysics Data System (ADS)

    Fu, Yongsheng; Hou, Yong; Kang, Dongdong; Gao, Cheng; Jin, Fengtao; Yuan, Jianmin

    2018-01-01

    We present a multi-ion molecular dynamics (MIMD) simulation and apply it to calculating the self-diffusion coefficients of ions with different charge-states in the warm dense matter (WDM) regime. First, the method is used for the self-consistent calculation of electron structures of different charge-state ions in the ion sphere, with the ion-sphere radii being determined by the plasma density and the ion charges. The ionic fraction is then obtained by solving the Saha equation, taking account of interactions among different charge-state ions in the system, and ion-ion pair potentials are computed using the modified Gordon-Kim method in the framework of temperature-dependent density functional theory on the basis of the electron structures. Finally, MIMD is used to calculate ionic self-diffusion coefficients from the velocity correlation function according to the Green-Kubo relation. A comparison with the results of the average-atom model shows that different statistical processes will influence the ionic diffusion coefficient in the WDM regime.
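The final step above is the standard Green-Kubo estimate D = (1/3)∫⟨v(0)·v(t)⟩dt; a minimal sketch over a stored velocity trajectory (the array layout is an assumption, not the MIMD code):

```python
import numpy as np

def self_diffusion(velocities, dt):
    """Green-Kubo self-diffusion coefficient from the velocity
    autocorrelation function, averaged over atoms and time origins.
    velocities: array of shape (nsteps, natoms, 3); dt: timestep."""
    nsteps = velocities.shape[0]
    nlag = nsteps // 2
    vacf = np.empty(nlag)
    for lag in range(nlag):
        # <v(t0) . v(t0 + lag)> over all origins t0 and all atoms
        dots = np.sum(velocities[:nsteps - lag] * velocities[lag:], axis=2)
        vacf[lag] = dots.mean()
    integral = np.sum(0.5 * (vacf[1:] + vacf[:-1])) * dt  # trapezoid rule
    return integral / 3.0
```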

  12. Calculating the Optimum Angle of Filament-Wound Pipes in Natural Gas Transmission Pipelines Using Approximation Methods.

    PubMed

    Reza Khoshravan Azar, Mohammad; Emami Satellou, Ali Akbar; Shishesaz, Mohammad; Salavati, Bahram

    2013-04-01

    Given the increasing use of composite materials in various industries, the oil and gas industry also needs to pay more attention to these materials. Furthermore, because the choice of materials varies, candidate materials are assessed for mechanical strength, resistance in critical situations such as fire, cost, and the other priorities of the analyses carried out on them, and the most suitable for achieving certain goals are introduced. In this study, we try to introduce an appropriate choice for use in natural gas transmission composite pipelines. A 4-layered filament-wound (FW) composite pipe is then considered and analyzed under internal pressure. The results are calculated for different combinations of winding angles: 15 deg, 30 deg, 45 deg, 55 deg, 60 deg, 75 deg, and 80 deg. Finally, we compare the calculated values, and the optimal angle is obtained using the approximation methods. The layering is symmetrical.

  13. Modeling the Conformation-Specific Infrared Spectra of N-Alkylbenzenes

    NASA Astrophysics Data System (ADS)

    Tabor, Daniel P.; Sibert, Edwin; Hewett, Daniel M.; Korn, Joseph A.; Zwier, Timothy S.

    2016-06-01

    Conformation-specific UV-IR double resonance spectra are presented for n-alkylbenzenes. With the aid of a local mode Hamiltonian that includes the effects of stretch-bend Fermi coupling, the spectra of ethyl-, n-propyl-, and n-butylbenzene are assigned to individual conformers. These molecules allow for further development of a first-principles method for calculating alkyl stretch spectra. Due to the consistency of the anharmonic couplings from conformer to conformer, construction of the model Hamiltonian for a given conformer requires only a harmonic frequency calculation at the conformer's minimum geometry as input. The model Hamiltonian can be parameterized with either density functional theory or MP2 electronic structure calculations. The relative strengths and weaknesses of these methods are evaluated, including their predictions of the relative energetics of the conformers. Finally, the IR spectra of conformers in which the alkyl chain bends back and interacts with the π cloud of the benzene ring are modeled.

  14. Analysis of the Defect Structure of B2 FeAl Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John; Noebe, Ronald D.; Amador, Carlos

    1995-01-01

    The Bozzolo, Ferrante, and Smith (BFS) method for alloys is applied to the study of the defect structure of B2 FeAl alloys. First-principles Linear Muffin Tin Orbital calculations are used to determine the input parameters for the BFS method used in this work. The calculations successfully determine the phase field of the B2 structure, as well as the composition dependence of the lattice parameter. Finally, the method is used to perform 'static' simulations in which, instead of determining the ground state configuration of the alloy with a certain concentration of vacancies, a large number of candidate ordered structures are studied and compared, in order to determine not only the lowest-energy configurations but also other possible metastable states. The results provide a description of the defect structure consistent with available experimental data. The simplicity of the BFS method also allows for a simple explanation of some of the essential features found in the concentration dependence of the heat of formation, the lattice parameter, and the defect structure.

  15. A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Liu, Ji-Gou

    2018-07-01

    This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Unlike the general C estimation method, with its complicated and onerous calculation process, the proposed approach yields a final equation directly, so estimating C involves only a few simple calculations. It successfully retrieves sinusoidal and aleatory displacements from simulated self-mixing signals in both the weak and moderate feedback regimes. To deal with errors resulting from noise and from the estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed after the general phase unwrapping method. The simulation and experimental results show that the displacement retrieved using the C obtained with the proposed method is comparable to that from the joint estimation of C and α. Moreover, the Kalman filter can significantly decrease measurement errors, especially those caused by incorrectly locating the peak and valley positions of the signal.
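    The displacement-retrieval backbone that the Kalman filter then refines, phase unwrapping followed by a λ/(4π) scale, can be sketched in isolation. This simplified round trip omits the feedback-level correction governed by C, and the wavelength is an arbitrary assumed value.

```python
import numpy as np

wavelength = 785e-9  # assumed laser wavelength (m), for illustration only

def displacement_from_phase(wrapped_phase, wavelength):
    """Recover displacement from a wrapped interferometric phase.

    Each 2*pi of interferometric phase corresponds to lambda/2 of target
    displacement, so d = unwrap(phi) * lambda / (4*pi).
    (Simplified sketch: the self-mixing correction via C is omitted.)
    """
    phase = np.unwrap(wrapped_phase)
    return phase * wavelength / (4.0 * np.pi)

# Round trip: a sinusoidal displacement, converted to phase and back
t = np.linspace(0.0, 1.0, 2000)
d_true = 2e-6 * np.sin(2 * np.pi * 5 * t)
phi = 4 * np.pi * d_true / wavelength
d_est = displacement_from_phase(np.angle(np.exp(1j * phi)), wavelength)
```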

  16. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Methods for estimating 2D cloud size distributions from 1D observations

    DOE PAGES

    Romps, David M.; Vogelmann, Andrew M.

    2017-08-04

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.
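    Because a transect chord through a square cloud equals its side length, and larger clouds are intercepted in proportion to their size, a length-weighted mean chord gives an area-weighted mean size estimate for square clouds. The one-line estimator below is an illustrative reading of that idea, not necessarily the paper's exact construction.

```python
import numpy as np

def length_weighted_mean_chord(chords):
    """Length-weighted mean chord as a proxy for area-weighted mean size.

    For square clouds every transect chord equals the side length, and a
    transect intercepts a cloud with probability proportional to its size,
    so weighting each chord by its own length approximates area weighting.
    (Illustrative estimator; see the paper for the exact construction.)
    """
    chords = np.asarray(chords, dtype=float)
    return np.sum(chords**2) / np.sum(chords)
```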

  18. Methods for estimating 2D cloud size distributions from 1D observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David M.; Vogelmann, Andrew M.

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.

  19. SU-G-BRB-05: Automation of the Photon Dosimetric Quality Assurance Program of a Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebron, S; Lu, B; Yan, G

    Purpose: To develop an automated method to calculate a linear accelerator (LINAC) photon radiation field size, flatness, symmetry, output and beam quality in a single delivery for flattened (FF) and flattening-filter-free (FFF) beams using an ionization chamber array. Methods: The proposed method consists of three control points that deliver 30×30, 10×10 and 5×5cm{sup 2} fields (FF or FFF) in a step-and-shoot sequence where the number of monitor units is weighted for each field size. The IC Profiler (Sun Nuclear Inc.) with 5mm detector spacing was used for this study. The corrected counts (CCs) were calculated, and the locations of the maxima and minima of the first-order gradient determined the data belonging to each subfield. Then, all CCs for each field size are summed in order to obtain the final profiles. For each profile, the radiation field size, symmetry, flatness, output factor and beam quality were calculated. For field size calculation, a parameterized gradient method was used. For method validation, profiles were collected in the detector array both individually and as part of the step-and-shoot plan, with 9.9cm buildup for FF and FFF beams at 90cm source-to-surface distance. The same data were collected with the device (plus buildup) placed on a movable platform to achieve a 1mm resolution. Results: The differences between the dosimetric quantities calculated from both deliveries, individually and step-and-shoot, were within 0.31±0.20% and 0.04±0.02mm. The differences between the calculated field sizes with 5mm and 1mm resolution were ±0.1mm. Conclusion: The proposed single delivery method proved to be simple and efficient in automating monthly and annual photon dosimetric quality assurance.
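    The gradient-based edge location in the Methods can be sketched on a synthetic profile. This sketch places the field edges at the raw extrema of the first-order gradient; the parameterized gradient method in the abstract would additionally interpolate around those extrema for sub-spacing accuracy.

```python
import numpy as np

def field_size_from_profile(positions, counts):
    """Radiation field size from a measured beam profile.

    The field edges are located at the extrema of the first-order
    gradient (steepest rise and fall of the penumbra); the field size is
    the distance between them.
    """
    grad = np.gradient(counts, positions)
    left_edge = positions[np.argmax(grad)]    # steepest rising edge
    right_edge = positions[np.argmin(grad)]   # steepest falling edge
    return right_edge - left_edge
```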

  20. Large-scale-system effectiveness analysis. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Foster, J.W.

    1979-11-01

    The objective of the research project has been the investigation and development of methods for calculating system reliability indices that have absolute, measurable significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation has been the joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.

  1. Improved atmospheric effect elimination method for the roughness estimation of painted surfaces.

    PubMed

    Zhang, Ying; Xuan, Jiabin; Zhao, Huijie; Song, Ping; Zhang, Yi; Xu, Wujian

    2018-03-01

    We propose a method for eliminating the atmospheric effect in polarimetric imaging remote sensing by using polarimetric imagers to simultaneously detect ground targets and skylight, which does not require calibrated targets. In addition, calculation efficiency is improved by the skylight division method without losing estimation accuracy. Outdoor experiments are performed to obtain the polarimetric bidirectional reflectance distribution functions of painted surfaces and of skylight under different weather conditions. Finally, the roughness of the painted surfaces is estimated. We find that the estimation accuracy with the proposed method is 6% in cloudy weather, while it is 30.72% without atmospheric effect elimination.

  2. Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Li, Weiran; Zhu, Qing

    2018-04-01

    This paper presents a real-time rendering method based on the GPU programmable pipeline for rendering 3D scenes in ink wash painting style. The method consists of three main parts. First, the ink properties of the 3D model are rendered by calculating its vertex curvature. Then, the ink properties are cached to a paper structure, and an ink dispersion model, defined with reference to the theory of porous media, is used to simulate the dispersion of ink. Finally, the ink properties are converted to pixel color information and rendered to the screen. This method achieves better visual quality than previous methods.

  3. Quantification of uncertainty in first-principles predicted mechanical properties of solids: Application to solid ion conductors

    NASA Astrophysics Data System (ADS)

    Ahmad, Zeeshan; Viswanathan, Venkatasubramanian

    2016-08-01

    Computationally-guided material discovery is being increasingly employed using a descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built in to several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify its uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of the parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid-state Li-ion and Na-ion conductors.
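    The fit-ensemble idea can be mocked up in a few lines. Here the ensemble of non-self-consistent energies is replaced by synthetic noisy energy-strain curves around an invented elastic constant; in the real workflow each curve would come from one sample of the exchange-correlation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Strains applied around the equilibrium cell, and an invented "true" curvature
strains = np.linspace(-0.02, 0.02, 9)
C_true = 150.0                      # illustrative elastic constant

# Mock ensemble of energies: one perturbed E(strain) curve per ensemble member
n_ensemble = 200
energies = 0.5 * C_true * strains**2
ensemble = energies + rng.normal(0.0, 1e-4, size=(n_ensemble, strains.size))

# Fit E = E0 + (1/2) C eps^2 to every ensemble member; the spread of C
# across the fits quantifies the uncertainty in the elastic constant
coeffs = np.polyfit(strains, ensemble.T, deg=2)   # one quadratic per member
C_samples = 2.0 * coeffs[0]
C_mean, C_std = C_samples.mean(), C_samples.std()
```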

  4. A molecule-centered method for accelerating the calculation of hydrodynamic interactions in Brownian dynamics simulations containing many flexible biomolecules

    PubMed Central

    Elcock, Adrian H.

    2013-01-01

    Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman’s Chebyshev polynomial method for calculating correlated random displacements, the proposed method provides an approach that is simple to program but sufficiently fast that it makes it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric protein models demonstrate that the method also works well when more structurally detailed models are used in the simulations. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146
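    For reference, the Rotne-Prager-Yamakawa level of theory named above corresponds, for equal non-overlapping beads, to the standard pairwise mobility blocks sketched below (the kT prefactor relating the mobility to the diffusion tensor is omitted). This is the textbook far-field form, not the paper's optimized implementation.

```python
import numpy as np

def rpy_mobility(positions, a, eta=1.0):
    """Rotne-Prager-Yamakawa mobility matrix for equal, non-overlapping beads.

    positions: (N, 3) bead centers; a: bead radius; eta: solvent viscosity.
    Returns the 3N x 3N mobility matrix (diffusion tensor up to a factor kT).
    """
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        # self-mobility: Stokes drag of an isolated sphere
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / (6 * np.pi * eta * a)
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            r = np.linalg.norm(rij)
            rhat = np.outer(rij, rij) / r**2
            # RPY pair block for non-overlapping beads (r > 2a)
            block = (1 + 2*a**2/(3*r**2)) * np.eye(3) + (1 - 2*a**2/r**2) * rhat
            block /= 8 * np.pi * eta * r
            M[3*i:3*i+3, 3*j:3*j+3] = block
            M[3*j:3*j+3, 3*i:3*i+3] = block
    return M
```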

  5. Thermal noise calculation method for precise estimation of the signal-to-noise ratio of ultra-low-field MRI with an atomic magnetometer.

    PubMed

    Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo

    2012-02-01

    In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to obtain the average threshold value, and then edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels. The final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, and a local threshold estimation approach binarizes the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex backgrounds and varying light.
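    A stripped-down version of the first steps (polynomial background approximation, compensation, global thresholding) can be sketched as follows. The bilateral sharpening, gradient-derived threshold, and stroke-width-sized local windows of the full method are omitted, and the one-standard-deviation threshold is an ad hoc simplification.

```python
import numpy as np

def binarize_with_background(img, deg=2):
    """Simplified background-compensated binarization.

    Approximate the background with a 2D polynomial fit, subtract it,
    then flag pixels well below the residual mean as text (dark on light).
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # polynomial basis up to total degree `deg` in (x, y)
    terms = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
    coef, *_ = np.linalg.lstsq(A, img.ravel().astype(float), rcond=None)
    background = (A @ coef).reshape(h, w)
    compensated = img - background
    # ad hoc global threshold: one standard deviation below the residual mean
    t = compensated.mean() - compensated.std()
    return (compensated < t).astype(np.uint8)  # 1 = text (dark)
```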

  7. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283

  8. Validation of an improved method to calculate the orientation and magnitude of pedicle screw bending moments.

    PubMed

    Freeman, Andrew L; Fahim, Mina S; Bechtold, Joan E

    2012-10-01

    Previous methods of pedicle screw strain measurement have utilized complex, time consuming methods of strain gauge application, experience high failure rates, do not effectively measure resultant bending moments, and cannot predict moment orientation. The purpose of this biomechanical study was to validate an improved method of quantifying pedicle screw bending moment orientation and magnitude. Pedicle screws were instrumented to measure biplanar screw bending moments by positioning four strain gauges on flat, machined surfaces below the screw head. Screws were calibrated to measure bending moments by hanging certified weights a known distance from the strain gauges. Loads were applied in 30 deg increments at 12 different angles while recording data from two independent strain channels. The data were then analyzed to calculate the predicted orientation and magnitude of the resultant bending moment. Finally, flexibility tests were performed on a cadaveric motion segment implanted with the instrumented screws to demonstrate the implementation of this technique. The difference between the applied and calculated orientation of the bending moments averaged (±standard error of the mean (SEM)) 0.3 ± 0.1 deg across the four screws for all rotations and loading conditions. The calculated resultant bending moments deviated from the actual magnitudes by an average of 0.00 ± 0.00 Nm for all loading conditions. During cadaveric testing, the bending moment orientations were medial/lateral in flexion-extension, variable in lateral bending, and diagonal in axial torsion. The technique developed in this study provides an accurate method of calculating the orientation and magnitude of screw bending moments and can be utilized with any pedicle screw fixation system.
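    Combining the two calibrated channels into a resultant moment magnitude and orientation is a small vector operation; the sketch below assumes the two channels measure orthogonal bending planes (the channel names are illustrative, not the study's notation).

```python
import numpy as np

def resultant_moment(m_plane1, m_plane2):
    """Resultant bending moment from two orthogonal strain-gauge channels.

    m_plane1, m_plane2: calibrated moments from the two independent
    channels (e.g. two orthogonal bending planes of the screw shaft).
    Returns the magnitude and the orientation angle in degrees.
    """
    magnitude = np.hypot(m_plane1, m_plane2)
    angle = np.degrees(np.arctan2(m_plane2, m_plane1))
    return magnitude, angle
```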

  9. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

    Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Different from previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of its existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at the grid spacing of 1/2 Å. In addition, a systematic correction analysis can be used to improve the accuracy for the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  10. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by the camera of an unmanned aerial vehicle (UAV) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
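    The abstract does not fully specify how a feature-point set is compressed into three principal component points. One plausible construction, used here purely as an illustration, is the centroid plus one point displaced along each principal axis by that axis's standard deviation, computed via SVD.

```python
import numpy as np

def principal_component_points(points):
    """Compress a set of 2D feature points into three summary points.

    Returns the centroid and, for each of the two principal axes, the
    centroid displaced along that axis by its standard deviation.
    (Illustrative reading of the abstract's compression step; the sign of
    each principal direction is arbitrary.)
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    scales = s / np.sqrt(len(pts))   # per-axis standard deviation
    return np.vstack([centroid,
                      centroid + scales[0] * vt[0],
                      centroid + scales[1] * vt[1]])
```

    These three points summarize location, spread, and orientation of the feature cloud, which is enough to compare images cheaply when selecting key frames.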

  11. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by the camera of an unmanned aerial vehicle (UAV) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  12. Calculating the trap density of states in organic field-effect transistors from experiment: A comparison of different methods

    NASA Astrophysics Data System (ADS)

    Kalb, Wolfgang L.; Batlogg, Bertram

    2010-01-01

    The spectral density of localized states in the band gap of pentacene (trap DOS) was determined with a pentacene-based thin-film transistor from measurements of the temperature dependence and gate-voltage dependence of the contact-corrected field-effect conductivity. Several analytical methods to calculate the trap DOS from the measured data were used to clarify whether the different methods lead to comparable results. We also used computer simulations to further test the results from the analytical methods. Most methods predict a trap DOS close to the valence-band edge that can be very well approximated by a single exponential function with a slope in the range of 50-60 meV and a trap density at the valence-band edge of ≈2×10²¹ eV⁻¹ cm⁻³. Interestingly, the trap DOS is always slightly steeper than exponential. An important finding is that the choice of the method to calculate the trap DOS from the measured data can have a considerable effect on the final result. We identify two specific simplifying assumptions that lead to significant errors in the trap DOS. The temperature dependence of the band mobility should generally not be neglected. Moreover, the assumption of a constant effective accumulation-layer thickness leads to a significant underestimation of the slope of the trap DOS.

  13. Customer loads of two-wheeled vehicles

    NASA Astrophysics Data System (ADS)

    Gorges, C.; Öztürk, K.; Liebich, R.

    2017-12-01

    Customer usage profiles are among the least-known influences on vehicle design targets, yet they play an important role in durability analysis. This publication presents a customer load acquisition system for two-wheeled vehicles that utilises the vehicle's onboard signals. A road slope estimator was developed to reveal the unknown slope resistance force with the help of a linear Kalman filter. Furthermore, an automated mass estimator was developed to account for the correct vehicle loading. The mass estimation is performed by an extended Kalman filter. Finally, a model-based wheel force calculation was derived, which is based on the superposition of forces calculated from measured onboard signals. The calculated wheel forces were validated against measurements with wheel-load transducers through the comparison of rainflow matrices. The calculated wheel forces correspond with the measured wheel forces in terms of both quality and quantity. The proposed methods can be used to gather field data for improved vehicle design loads.
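
    A linear Kalman filter of the kind used for the slope estimator can be sketched in scalar form. The state model (slope as a random walk) and the noise values below are illustrative assumptions; the paper's actual system model, which fuses several onboard signals, is not reproduced here.

```python
def kalman_1d(measurements, q=1e-4, r=0.05, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model,
    tracking a slowly varying quantity (e.g. road slope) from noisy
    per-sample measurements. q, r, x0, p0 are illustrative values."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: random-walk process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

With a small process noise q relative to the measurement noise r, the filter averages heavily and converges to the underlying slope while rejecting sensor noise.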

  14. Sixth-order wave aberration theory of ultrawide-angle optical systems.

    PubMed

    Lu, Lijun; Cao, Yiqing

    2017-10-20

    In this paper, we develop sixth-order wave aberration theory of ultrawide-angle optical systems such as fisheye lenses. Based on the concepts and approach used to develop wave aberration theory of plane-symmetric optical systems, we first derive the sixth-order intrinsic wave aberrations and the fifth-order ray aberrations; second, we present a method to calculate the pupil aberration of such optical systems to develop the extrinsic aberrations; third, the relation of aperture-ray coordinates between adjacent optical surfaces is fitted with a second-order polynomial to improve the calculation accuracy of the wave aberrations of a fisheye lens with a large acceptance aperture. Finally, the resultant aberration expressions are applied to calculate the aberrations of two design examples of fisheye lenses; the results are compared with ray-traced values from Zemax software to validate the aberration expressions.

  15. 77 FR 21529 - Freshwater Crawfish Tail Meat From the People's Republic of China: Final Results of Antidumping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... question, including when that rate is zero or de minimis. In this case, there is only one non-selected... calculations for one company. Therefore, the final results differ from the preliminary results. The final... not to calculate an all-others rate using any zero or de minimis margins or any margins based entirely...

  16. Advanced Digital Signal Processing for Hybrid Lidar

    DTIC Science & Technology

    2014-09-30

    with a PC running LabVIEW performing the final calculations to obtain range measurements. A MATLAB-based system developed at Clarkson University in...the image contrast and resolution as well as the object ranging measurement accuracy. There have been various methods that attempt to reduce the...high speed modulation to help suppress backscatter while also providing an unambiguous range measurement. In general, it is desired to determine which

  17. Reactive Collisions and Final State Analysis in Hypersonic Flight Regime

    DTIC Science & Technology

    2016-09-13

    Kelvin.[7] The gas-phase, surface reactions and energy transfer at these temperatures are essentially uncharacterized and the experimental methodologies...high temperatures (1000 to 20000 K) and compared with results from experimentally derived thermodynamics quantities from the NASA CEA (NASA Chemical...with a reproducing kernel Hilbert space (RKHS) method[13] combined with Legendre polynomials; (2) quasi classical trajectory (QCT) calculations to study

  18. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  19. Feature-based Approach in Product Design with Energy Efficiency Consideration

    NASA Astrophysics Data System (ADS)

    Li, D. D.; Zhang, Y. J.

    2017-10-01

    In this paper, a method to measure the energy efficiency and ecological footprint metrics of features is proposed for product design. First, the energy consumption models of various manufacturing features, such as cutting and welding features, are studied. Then, the total energy consumption of a product is modeled and estimated according to its features. Next, feature chains, formed by combining several sequential features according to the production operation order, are defined and analyzed to calculate a globally optimal solution. A corresponding assessment model is also proposed to estimate their energy efficiency and ecological footprint. Finally, an example is given to validate the proposed approach in the improvement of sustainability.

  20. [Fluorescent signal detection of chromatographic chip by algorithms of pyramid connection and Gaussian mixture model].

    PubMed

    Hu, Beibei; Zhang, Xueqing; Chen, Haopeng; Cui, Daxiang

    2011-03-01

    We propose a new algorithm for automatic identification of fluorescent signal. Based on the features of chromatographic chips, mathematical morphology in RGB color space was used to filter and enhance the images, pyramid connection was used to segment the areas of fluorescent signal, and a Gaussian mixture model was then used to detect the fluorescent signal. Finally, we calculated the average fluorescent intensity in the obtained fluorescent areas. Our results show that the algorithm segments the fluorescent areas effectively, detects the fluorescent signal quickly and accurately, and thereby realizes the quantitative detection of fluorescent signal in a chromatographic chip.
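
    The Gaussian-mixture step can be illustrated with a tiny 1-D two-component EM fit that separates bright "signal" intensities from background. This is only a sketch of the mixture-model idea on scalar intensities; the paper operates on RGB images after pyramid segmentation, and the function below is an assumed, simplified stand-in.

```python
import math

def em_two_gaussians(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (means, variances, weights). Initialisation from the data
    extremes is a simple heuristic, not the paper's procedure."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, w
```

Pixels with high responsibility under the brighter component would then be collected as the fluorescent area whose mean intensity is reported.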

  1. N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method

    NASA Astrophysics Data System (ADS)

    Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.

    2018-05-01

    Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.

  2. Robust sleep quality quantification method for a personal handheld device.

    PubMed

    Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol

    2014-06-01

    The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize the nonstationary noise. Sleep or wake status was decided on each axis, and the totals were finally summed to calculate sleep efficiency (SE), generally regarded as a measure of sleep quality. A sleep experiment with 14 subjects was carried out to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis. The activity during sleep was recorded not only by the proposed method but also simultaneously by well-known commercial applications; moreover, activity was recorded on different mattresses and locations to verify the reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparative applications by at least 11.41% in average accuracy and at least 6.10% in average bias error; the average accuracy and average absolute bias error of those applications were 76.33% and 17.52%, respectively.
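
    The final SE computation reduces to a per-epoch sleep/wake decision followed by a ratio. The threshold-based scoring below is an assumed stand-in for the paper's per-axis decision logic (which works on denoised multi-axis signals); only the bookkeeping is shown.

```python
def sleep_efficiency(activity_epochs, wake_threshold=0.2):
    """Score each epoch 'W' (wake) if its motion magnitude exceeds the
    threshold, else 'S' (sleep); SE is the percentage of sleep epochs.
    Threshold value is illustrative."""
    scores = ['W' if a > wake_threshold else 'S' for a in activity_epochs]
    se = 100.0 * scores.count('S') / len(scores)
    return se, scores
```

For example, a night of 8 quiet epochs and 2 active epochs yields SE = 80%, which is the quantity compared against the Actiwatch reference.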

  3. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    NASA Astrophysics Data System (ADS)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of successive terms of the convergent remainder are independent of the network link number for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression for the average degree. Finally, simulations are presented to verify our theoretical results.

  4. The Momentum Distribution of Liquid ⁴He

    DOE PAGES

    Prisk, T. R.; Bryan, M. S.; Sokol, P. E.; ...

    2017-07-24

    We report a high-resolution neutron Compton scattering study of liquid ⁴He under milli-Kelvin temperature control. To interpret the scattering data, we performed Quantum Monte Carlo calculations of the atomic momentum distribution and final state effects for the conditions of temperature and density considered in the experiment. There is excellent agreement between the observed scattering and ab initio calculations of its lineshape at all temperatures. We also used model fit functions to obtain from the scattering data empirical estimates of the average atomic kinetic energy and Bose condensate fraction. These quantities are also in excellent agreement with ab initio calculations. We conclude that contemporary Quantum Monte Carlo methods can furnish accurate predictions for the properties of Bose liquids, including the condensate fraction, close to the superfluid transition temperature.

  5. Numerical Calculation of the Spectrum of the Severe (1%) Lightning Current and Its First Derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C G; Ong, M M; Perkins, M P

    2010-02-12

    Recently, the direct-strike lightning environment for the stockpile-to-target sequence was updated [1]. In [1], the severe (1%) lightning current waveforms for first and subsequent return strokes are defined based on Heidler's waveform. This report presents numerical calculations of the spectra of those 1% lightning current waveforms and their first derivatives. First, the 1% lightning current models are repeated here for convenience. Then, the numerical method for calculating the spectra is presented and tested. The test uses a double-exponential waveform and its first derivative, which we fit to the previous 1% direct-strike lightning environment from [2]. Finally, the resulting spectra are given and are compared with those of the double-exponential waveform and its first derivative.
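
    The double-exponential test waveform and its spectrum can be sketched directly. The parameter values below are generic textbook-style numbers, not the report's fitted 1% severe-environment values, and the direct DFT (fine for small sample counts) stands in for whatever FFT-based method the report uses.

```python
import cmath
import math

def double_exponential(t, i0=218810.0, alpha=11354.0, beta=647265.0):
    """Double-exponential return-stroke current i(t) = I0*(e^{-at} - e^{-bt}).
    Parameters are illustrative, not the report's values."""
    return i0 * (math.exp(-alpha * t) - math.exp(-beta * t))

def dft_magnitude(samples, dt):
    """Single-sided magnitude spectrum via direct DFT, scaled by dt so the
    result approximates the continuous Fourier transform magnitude."""
    n = len(samples)
    spec = []
    for k in range(n // 2):
        s = sum(samples[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n))
        spec.append(abs(s) * dt)
    return spec
```

The DC bin approximates the action integral of the current, and the magnitude rolls off above the corner frequency set by the slower decay constant, which is the qualitative behaviour being compared against Heidler's waveform.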

  6. Risk factors for poor outcomes in patients with open-globe injuries

    PubMed Central

    Page, Rita D; Gupta, Sumeet K; Jenkins, Thomas L; Karcioglu, Zeynel A

    2016-01-01

    Purpose The aim of this study was to identify the risk factors that are predictive of poor outcomes in penetrating globe trauma. Patients and methods This retrospective case series evaluated 103 eyes that had been surgically treated for an open-globe injury from 2007 to 2010 at the eye clinic of the University of Virginia. A total of 64 eyes with complete medical records and at least 6 months of follow-up were included in the study. Four risk factors (preoperative best-corrected visual acuity [pre-op BCVA], ocular trauma score [OTS], zone of injury [ZOI], and time lapse [TL] between injury and primary repair) and three outcomes (final BCVA, monthly rate of additional surgeries [MRAS], and enucleation) were identified for analysis. Results Pre-op BCVA was positively associated with MRAS, final BCVA, and enucleation. Calculated OTS was negatively associated with the outcome variables. No association was found between TL and ZOI with the outcome variables. Further, age and predictor variable-adjusted analyses showed pre-op BCVA to be independently positively associated with MRAS (P=0.008) and with final BCVA (P<0.001), while the calculated OTS was independently negatively associated with final BCVA (P<0.001), but not uniquely associated with MRAS (P=0.530). Conclusion Pre-op BCVA and OTS are best correlated with prognosis in open-globe injuries. However, no conventional features reliably predict the outcome of traumatized eyes. PMID:27536059

  7. Rotorcraft acoustic radiation prediction based on a refined blade-vortex interaction model

    NASA Astrophysics Data System (ADS)

    Rule, John Allen

    1997-08-01

    The analysis of rotorcraft aerodynamics and acoustics is a challenging problem, primarily due to the fact that a rotorcraft continually flies through its own wake. The generation mechanism for a rotorcraft wake, which is dominated by strong, concentrated blade-tip trailing vortices, is similar to that in fixed wing aerodynamics. However, following blades encounter shed vortices from previous blades before they are swept downstream, resulting in sharp, impulsive loading on the blades. The blade/wake encounter, known as Blade-Vortex Interaction, or BVI, is responsible for a significant amount of vibratory loading and the characteristic rotorcraft acoustic signature in certain flight regimes. The present work addressed three different aspects of this interaction at a fundamental level. First, an analytical model for the prediction of trailing vortex structure is discussed. The model as presented is the culmination of a lengthy research effort to isolate the key physical mechanisms which govern vortex sheet rollup. Based on the Betz model, properties of the flow such as mass flux, axial momentum flux, and axial flux of angular momentum are conserved on either a differential or integral basis during the rollup process. The formation of a viscous central core was facilitated by the assumption of a turbulent mixing process with final vortex velocity profiles chosen to be consistent with a rotational flow mixing model and experimental observation. A general derivation of the method is outlined, followed by a comparison of model predictions with experimental vortex measurements, and finally a viscous blade drag model to account for additional effects of aerodynamic drag on vortex structure. The second phase of this program involved the development of a new formulation of lifting surface theory with the ultimate goal of an accurate, reduced order hybrid analytical/numerical model for fast rotorcraft load calculations. 
Currently, accurate rotorcraft airload analyses are limited by the massive computational power required to capture the small time scale events associated with BVI. This problem has two primary facets: accurate knowledge of the wake geometry, and accurate resolution of the impulsive loading imposed by a tip vortex on a blade. The present work addressed the second facet, providing a mathematical framework for solving the impulsive loading problem analytically, then asymptotically matching this solution to a low-resolution numerical calculation. A method was developed which uses continuous sheets of integrated boundary elements to model the lifting surface and wake. Special elements were developed to capture local behavior in high-gradient regions of the flow, thereby reducing the burden placed on the surrounding numerical method. Unsteady calculations for several classical cases were made in both the frequency and time domains to demonstrate the performance of the method. Finally, a new unsteady, compressible boundary element method was applied to the problem of BVI acoustic radiation prediction. This numerical method, combined with the viscous core trailing vortex model, was used to duplicate the geometry and flight configuration of a detailed experimental BVI study carried out at NASA Ames Research Center. Blade surface pressure and near- and far-field acoustic radiation calculations were made. All calculations were shown to compare favorably with experimentally measured values. The linear boundary element method with non-linear corrections proved sufficient over most of the rotor azimuth, and particularly in the region of the blade-vortex interaction, suggesting that full non-linear CFD schemes are not necessary for rotorcraft noise prediction.

  8. Binary-Phase Fourier Gratings for Nonuniform Array Generation

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.; Crow, Robert W.; Ashley, Paul R.

    2003-01-01

    We describe a design method for a binary-phase Fourier grating that generates an array of spots with nonuniform, user-defined intensities symmetric about the zeroth order. Like the Dammann fanout grating approach, the binary-phase Fourier grating uses only two phase levels in its grating surface profile to generate the final spot array. Unlike the Dammann fanout grating approach, this method allows for the generation of nonuniform, user-defined intensities within the final fanout pattern. Restrictions governing the specification and realization of the array's individual spot intensities are discussed. Design methods used to realize the grating employ both simulated annealing and nonlinear optimization approaches to locate optimal solutions to the grating design problem. The end-use application driving this development operates in the near- to mid-infrared spectrum, allowing for higher resolution in grating specification and fabrication with respect to wavelength than may be available in visible-spectrum applications. Fabrication of a grating generating a user-defined nine-spot pattern is accomplished in GaAs for the near-infrared. Characterization of the grating is provided through the measurement of individual spot intensities, array uniformity, and overall efficiency. Final measurements are compared to calculated values with a discussion of the results.

  9. Fully automated calculation of cardiothoracic ratio in digital chest radiographs

    NASA Astrophysics Data System (ADS)

    Cong, Lin; Jiang, Luan; Chen, Gang; Li, Qiang

    2017-03-01

    The calculation of the Cardiothoracic Ratio (CTR) in digital chest radiographs would be useful for cardiac anomaly assessment and the indication of diseases related to heart enlargement. The purpose of this study was to develop and evaluate a fully automated scheme for calculation of the CTR in digital chest radiographs. Our automated method consisted of three steps, i.e., lung region localization, lung segmentation, and CTR calculation. We manually annotated the lung boundary with 84 points in 100 digital chest radiographs and calculated an average lung model for the subsequent work. First, in order to localize the lung region, a generalized Hough transform was employed to identify the upper, lower, and outer boundaries of the lungs by use of Sobel gradient information. The average lung model was aligned to the localized lung region to obtain the initial lung outline. Second, we separately applied a dynamic programming method to detect the upper, lower, outer, and inner boundaries of the lungs, and then linked the four boundaries to segment the lungs. Based on the identified outer boundaries of the left and right lungs, we corrected the center and the declination of the original radiograph. Finally, the CTR was calculated as the ratio of the transverse diameter of the heart to the internal diameter of the chest, based on the segmented lungs. The preliminary results on 106 digital chest radiographs showed that the proposed method could obtain accurate segmentation of the lungs based on subjective observation, and achieved a sensitivity of 88.9% (40 of 45 abnormalities) and a specificity of 100% (61 of 61 normal cases) for the identification of heart enlargement.
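
    Given segmented boundaries, the final CTR step is simple arithmetic. The sketch below assumes per-row boundary x-coordinates as inputs (an assumed interface; the lung segmentation that produces them is the hard part of the paper).

```python
def cardiothoracic_ratio(heart_left_x, heart_right_x,
                         chest_left_x, chest_right_x):
    """CTR = maximal transverse heart diameter / internal chest diameter.
    Each argument is a list of x-coordinates, one per image row, for the
    corresponding boundary."""
    heart_width = max(r - l for l, r in zip(heart_left_x, heart_right_x))
    chest_width = max(r - l for l, r in zip(chest_left_x, chest_right_x))
    return heart_width / chest_width
```

A CTR above roughly 0.5 is the conventional flag for cardiomegaly, which is what the reported sensitivity/specificity figures evaluate.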

  10. Method and system for measuring multiphase flow using multiple pressure differentials

    DOEpatents

    Fincke, James R.

    2001-01-01

    An improved method and system for measuring a multiphase flow in a pressure flow meter. An extended throat venturi is used and pressure of the multiphase flow is measured at three or more positions in the venturi, which define two or more pressure differentials in the flow conduit. The differential pressures are then used to calculate the mass flow of the gas phase, the total mass flow, and the liquid phase. The method for determining the mass flow of the high void fraction fluid flow and the gas flow includes certain steps. The first step is calculating a gas density for the gas flow. The next two steps are finding a normalized gas mass flow rate through the venturi and computing a gas mass flow rate. The following step is estimating the gas velocity in the venturi tube throat. The next step is calculating the pressure drop experienced by the gas-phase due to work performed by the gas phase in accelerating the liquid phase between the upstream pressure measuring point and the pressure measuring point in the venturi throat. Another step is estimating the liquid velocity in the venturi throat using the calculated pressure drop experienced by the gas-phase due to work performed by the gas phase. Then the friction is computed between the liquid phase and a wall in the venturi tube. Finally, the total mass flow rate based on measured pressure in the venturi throat is calculated, and the mass flow rate of the liquid phase is calculated from the difference of the total mass flow rate and the gas mass flow rate.
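
    The building block underlying each differential-pressure reading is the classical single-phase venturi relation. The sketch below shows only that textbook relation; the patent layers multiphase (gas/liquid) corrections on top of several such readings, and the parameter values in the test are illustrative.

```python
import math

def venturi_mass_flow(dp, rho, d_throat, d_pipe, cd=0.98):
    """Classical venturi mass-flow relation
    m_dot = Cd * At * sqrt(2 * rho * dp / (1 - beta**4)),
    with beta = throat/pipe diameter ratio. SI units assumed."""
    beta = d_throat / d_pipe
    a_t = math.pi * d_throat ** 2 / 4          # throat area
    return cd * a_t * math.sqrt(2 * rho * dp / (1 - beta ** 4))
```

In the patent's scheme, differences between such readings at the three pressure taps are what separate the gas-phase contribution from the work done accelerating the liquid phase.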

  11. Neutron spectrometry in a mixed field of neutrons and protons with a phoswich neutron detector Part I: response functions for photons and neutrons of the phoswich neutron detector

    NASA Astrophysics Data System (ADS)

    Takada, M.; Taniguchi, S.; Nakamura, T.; Nakao, N.; Uwamino, Y.; Shibata, T.; Fujitaka, K.

    2001-06-01

    We have developed a phoswich neutron detector consisting of an NE213 liquid scintillator surrounded by an NE115 plastic scintillator to distinguish photon and neutron events in a charged-particle mixed field. To obtain the energy spectra by unfolding, the response functions to neutrons and photons were obtained by the experiment and calculation. The response functions to photons were measured with radionuclide sources, and were calculated with the EGS4-PRESTA code. The response functions to neutrons were measured with a white neutron source produced by the bombardment of 135 MeV protons onto a Be+C target using a TOF method, and were calculated with the SCINFUL code, which we revised in order to calculate neutron response functions up to 135 MeV. Based on these experimental and calculated results, response matrices for photons up to 20 MeV and neutrons up to 132 MeV could finally be obtained.

  12. Computation of multi-dimensional viscous supersonic flow

    NASA Technical Reports Server (NTRS)

    Buggeln, R. C.; Kim, Y. N.; Mcdonald, H.

    1986-01-01

    A method has been developed for two- and three-dimensional computations of viscous supersonic jet flows interacting with an external flow. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward-marching algorithm. Numerical instability associated with forward-marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split, linearized block-implicit computational algorithm. The results of computations for a series of test cases involving supersonic jet flow are presented and compared with other calculations for axisymmetric cases. Demonstration calculations indicate that the computational technique has great promise as a tool for calculating a wide range of supersonic flow problems, including jet flow. Finally, a User's Manual is presented for the computer code used to perform the calculations.

  13. A modified 3D algorithm for road traffic noise attenuation calculations in large urban areas.

    PubMed

    Wang, Haibo; Cai, Ming; Yao, Yifan

    2017-07-01

    The primary objective of this study is the development and application of a 3D road traffic noise attenuation calculation algorithm. First, the traditional empirical method does not address problems caused by non-direct occlusion by buildings or by differing building heights. In contrast, this study considers the volume ratio of the buildings and the area ratio of the projection of buildings adjacent to the road. The influence of ground effects is analyzed. The insertion loss due to barriers (both infinite-length and finite barriers) is also incorporated in the algorithm. Second, the impact of different road segmentations is analyzed. Based on the case of Pearl River New Town, 5° is recommended as the most appropriate scanning angle, as the computational time is acceptable and the average error is approximately 3.1 dB. In addition, the algorithm requires only 1/17 of the time that the beam-tracking method requires, at the cost of less precise results. Finally, the noise calculation for a large urban area with a high density of buildings shows the feasibility of the 3D noise attenuation calculation algorithm. The algorithm is expected to be applied in projects requiring large-area noise simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. K to π π decay amplitudes from lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, T.; Boyle, P. A.; Christ, N. H.

    2011-12-01

    We report a direct lattice calculation of the K to ππ decay matrix elements for both the ΔI=1/2 and 3/2 amplitudes A0 and A2 on 2+1 flavor, domain wall fermion, 16³×32×16 lattices. This is a complete calculation in which all contractions for the required ten four-quark operators are evaluated, including the disconnected graphs in which no quark line connects the initial kaon and final two-pion states. These lattice operators are nonperturbatively renormalized using the Rome-Southampton method, and the quadratic divergences are studied and removed. This is an important but notoriously difficult calculation, requiring high statistics on a large volume. In this paper, we take a major step toward the computation of the physical K→ππ amplitudes by performing a complete calculation at unphysical kinematics with pions of mass 422 MeV at rest in the kaon rest frame. With this simplification, we are able to resolve Re(A0) from zero for the first time, with a 25% statistical error, and can develop and evaluate methods for computing the complete, complex amplitude A0, a calculation central to understanding the ΔI=1/2 rule and testing the standard model of CP violation in the kaon system.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides the generic skill score and percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
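
    The 2x2 binary-classification scores listed above follow standard verification definitions. The function below computes them from the four contingency-table cells; it illustrates the formulas only and does not reproduce PyForecastTools' actual class-based API.

```python
def binary_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table verification metrics."""
    pod = hits / (hits + misses)                            # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # prob. of false detection
    far = false_alarms / (hits + false_alarms)              # false alarm ratio
    ts = hits / (hits + misses + false_alarms)              # threat score
    bias = (hits + false_alarms) / (hits + misses)          # frequency bias
    pss = pod - pofd                                        # Peirce skill score
    return {'POD': pod, 'POFD': pofd, 'FAR': far,
            'TS': ts, 'BIAS': bias, 'PSS': pss}
```

For example, 50 hits, 10 misses, 20 false alarms and 120 correct negatives give POD ≈ 0.83 and a frequency bias above 1, indicating mild over-forecasting.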

  16. Determining entire mean first-passage time for Cayley networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqian; Dai, Meifeng; Chen, Yufei; Zong, Yue; Sun, Yu; Su, Weiyi

    In this paper, we consider the entire mean first-passage time (EMFPT) for random walks on Cayley networks. We use Laplacian spectra to calculate the EMFPT. First, we calculate the constant term and monomial coefficient of the characteristic polynomial. By using Vieta's theorem, we then obtain the sum of reciprocals of all nonzero eigenvalues of the Laplacian matrix. Finally, we obtain the scaling of the EMFPT for Cayley networks by using the relationship between this sum and the EMFPT. We expect that our method can be adapted to other types of self-similar networks, such as Vicsek networks and polymer networks.
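
    The spectral quantity at the heart of this calculation, the sum of reciprocals of the nonzero Laplacian eigenvalues, can be computed numerically in two ways: directly from the spectrum, and from the characteristic-polynomial coefficients via Vieta's formulas (the analytic route the paper takes for Cayley networks). The sketch below demonstrates both on an arbitrary graph; it does not reproduce the paper's closed-form scaling results.

```python
import numpy as np

def sum_reciprocal_nonzero_laplacian(adj):
    """For a connected graph with adjacency matrix `adj`, return the sum of
    1/lambda over the nonzero Laplacian eigenvalues, computed (a) from the
    spectrum directly and (b) from characteristic-polynomial coefficients:
    with p(x) = x^N + ... + a2*x^2 + a1*x, the nonzero eigenvalues are the
    roots of p(x)/x, and the sum of their reciprocals equals -a2/a1."""
    L = np.diag(adj.sum(axis=1)) - adj
    # (a) direct spectral route
    lam = np.linalg.eigvalsh(L)
    direct = float(np.sum(1.0 / lam[lam > 1e-9]))
    # (b) Vieta route from characteristic-polynomial coefficients
    c = np.poly(L)            # coefficients, highest degree first
    a1, a2 = c[-2], c[-3]     # coefficients of x^1 and x^2
    vieta = float(-a2 / a1)
    return direct, vieta
```

On the 3-node path graph the Laplacian eigenvalues are 0, 1 and 3, so both routes give 1 + 1/3 = 4/3, confirming the Vieta identity the paper exploits.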

  17. Prediction of SA 349/2 GV blade loads in high speed flight using several rotor analyses

    NASA Technical Reports Server (NTRS)

    Gaubert, Michel; Yamauchi, Gloria K.

    1987-01-01

    The influence of blade dynamics, dynamic stall, and transonic aerodynamics on the predictions of rotor loads in high-speed flight are presented. Data were obtained from an Aerospatiale Gazelle SA 349/2 helicopter with three Grande Vitesse blades. Several analyses are used for this investigation. First, blade dynamics effects on the correlation are studied using three rotor analyses which differ mainly in the method of calculating the blade elastic response. Next, an ONERA dynamic stall model is used to predict retreating blade stall. Finally, advancing blade aerodynamic loads are calculated using a NASA-developed rotorcraft analysis coupled with two transonic finite-difference analyses.

  18. Binding free energy calculations between bovine β-lactoglobulin and four fatty acids using the MMGBSA method.

    PubMed

    Bello, Martiniano

    2014-10-01

    The bovine dairy protein β-lactoglobulin (βlg) is a promiscuous protein that has the ability to bind several hydrophobic ligands. In this study, based on known experimental data, the dynamic interaction mechanism between bovine βlg and four fatty acids was investigated by a protocol combining molecular dynamics (MD) simulations and molecular mechanics generalized Born surface area (MMGBSA) binding free energy calculations. Energetic analyses revealed binding free energy trends that corroborated known experimental findings; larger ligand size corresponded to greater binding affinity. Finally, binding free energy decomposition provided detailed information about the key residues stabilizing the complex. © 2014 Wiley Periodicals, Inc.

  19. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.

    In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and to the function used to describe C_D, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.

  20. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    DOE PAGES

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; ...

    2018-04-26

    In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (C_D) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and to the function used to describe C_D, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of C_D is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.

  1. LETTER TO THE EDITOR: Iteratively-coupled propagating exterior complex scaling method for electron hydrogen collisions

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.; Bray, Igor

    2004-02-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schrödinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources.

  2. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels in digital retinal images. Retinal vessel segmentation is important for detecting side effects of diabetes, because the disease can form new capillaries that are very brittle. The research was carried out in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it renders vessels in dark color on a white background, creating good contrast between vessels and background. Its complexity is very low, and extraneous image content is eliminated. The processing phase uses a Bayesian method, a supervised classification technique that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a sample with retinopathy from outside the DRIVE database, and a good result was obtained.
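    The Bayesian classification step described above can be sketched with Gaussian class models. This is a minimal illustration under the assumption of Gaussian intensity distributions per class; the function names and example statistics are hypothetical, not the author's implementation.

    ```python
    import math

    # Assumed Gaussian likelihood of an intensity given class mean/variance.
    def gaussian_pdf(x, mean, var):
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    # Assign a pixel to the class (vessel vs. background) whose
    # mean/variance model gives it the higher likelihood.
    def classify_pixel(intensity, stats):
        """stats maps class name -> (mean, variance) of pixel intensity."""
        return max(stats, key=lambda c: gaussian_pdf(intensity, *stats[c]))
    ```

    With illustrative statistics such as `{"vessel": (40.0, 100.0), "background": (200.0, 400.0)}`, a dark pixel is assigned to the vessel class and a bright one to the background.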

  3. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate Lyapunov exponents can be obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow numerically, be unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation returns an error value of NaN or Inf; (2) if NaN or Inf does not appear, then as the number of iterations increases, all Lyapunov exponents approach the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, based on QR orthogonal decomposition and SVD orthogonal decomposition, that solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
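    The QR-decomposition approach can be sketched on a standard test case, the Hénon map, whose Jacobian is known in closed form. The parameters, starting point, and iteration count below are illustrative choices, not the paper's examples.

    ```python
    import numpy as np

    # QR-reorthogonalization sketch for Lyapunov exponents of the Henon map
    # x' = 1 - a*x^2 + y, y' = b*x (illustrative test case).
    def henon_lyapunov(a=1.4, b=0.3, n=20000):
        x, y = 0.1, 0.1
        Q = np.eye(2)
        sums = np.zeros(2)
        for _ in range(n):
            J = np.array([[-2.0 * a * x, 1.0],
                          [b, 0.0]])            # Jacobian at the current point
            Q, R = np.linalg.qr(J @ Q)          # re-orthogonalize every step
            sums += np.log(np.abs(np.diag(R)))  # accumulate stretching factors
            x, y = 1.0 - a * x * x + y, b * x   # advance the map
        return sums / n                         # estimated Lyapunov exponents
    ```

    For a = 1.4, b = 0.3 the largest exponent converges to roughly 0.42, and the two exponents sum to ln b because the Jacobian determinant is constant; the repeated QR step is exactly what keeps all exponents from collapsing onto the largest one, the failure mode described in point (2) above.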

  4. Quick estimate of oil discovery from gas-condensate reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarem, A.M.

    1966-10-24

    A quick method of estimating the depletion performance of gas-condensate reservoirs is presented through graphical representations. The method is based on correlations reported in the literature and expresses recoverable liquid as a function of gas reserves, producing gas-oil ratio, and initial and final reservoir pressures. The amount of recoverable liquid reserves (RLR) under depletion conditions is estimated from an equation which is given; where the liquid reserves are in stock-tank barrels and the gas reserves are in Mcf, the arbitrary constant N is calculated from one graphical representation by dividing the fractional oil recovery by the initial gas-oil ratio and multiplying by 10^6 for convenience. An equation is given for estimating the coefficient C. Both factors (N and C) can be determined from the graphical representations. An example calculation is included.

  5. Time-Spectral Rotorcraft Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Leffell, Joshua I.; Murman, Scott M.; Pulliam, Thomas H.

    2014-01-01

    The Time-Spectral method is derived as a Fourier collocation scheme and applied to NASA's overset Reynolds-averaged Navier-Stokes (RANS) solver OVERFLOW. The paper outlines the Time-Spectral OVERFLOW implementation. Successful low-speed laminar plunging NACA 0012 airfoil simulations demonstrate the capability of the Time-Spectral method to resolve the highly-vortical wakes typical of more expensive three-dimensional rotorcraft configurations. Dealiasing, in the form of spectral vanishing viscosity (SVV), facilitates the convergence of Time-Spectral calculations of high-frequency flows. Finally, simulations of the isolated V-22 Osprey tiltrotor for both hover and forward (edgewise) flight validate the three-dimensional Time-Spectral OVERFLOW implementation. The Time-Spectral hover simulation matches the time-accurate calculation using a single harmonic. Significantly more temporal modes and SVV are required to accurately compute the forward flight case because of its more active, high-frequency wake.
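    The Fourier-collocation idea underlying the Time-Spectral method can be illustrated with a minimal spectral time-derivative operator for a periodic signal. This is a generic sketch of spectral differentiation via the FFT, entirely separate from the OVERFLOW implementation.

    ```python
    import numpy as np

    # Spectral time derivative of a time-periodic signal sampled at N
    # equispaced points: differentiate the discrete Fourier interpolant.
    def spectral_time_derivative(u, period=2 * np.pi):
        N = len(u)
        k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers 0..N/2-1, -N/2..-1
        omega = 2 * np.pi / period         # fundamental angular frequency
        return np.real(np.fft.ifft(1j * k * omega * np.fft.fft(u)))
    ```

    For a smooth periodic signal the derivative is recovered to machine precision with only a handful of collocation points, which is the appeal of spectral-in-time methods for periodic rotor flows.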

  6. Real-time driver fatigue detection based on face alignment

    NASA Astrophysics Data System (ADS)

    Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi

    2017-07-01

    The performance and robustness of fatigue detection largely decrease if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the face alignment at 3000 FPS algorithm. Firstly, the eye regions of the driver are localized by exploiting 6 landmarks surrounding each eye. Secondly, the HOG features of the extracted eye regions are calculated and passed to an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy or not. An alarm is generated if the eye remains closed for a specified period of time. The accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and achieves better accuracy for driver fatigue detection than some previous methods.
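    The final PERCLOS step can be sketched directly: PERCLOS is the fraction of frames in which the eye is closed over a window, compared against a drowsiness threshold. The threshold value 0.4 below is illustrative, not taken from the paper.

    ```python
    # PERCLOS sketch: fraction of closed-eye frames in a window.
    # The 0.4 threshold is an assumed, illustrative value.
    def perclos(eye_states, threshold=0.4):
        """eye_states: sequence of 0 (open) / 1 (closed), one per video frame.

        Returns (closed fraction, drowsy flag)."""
        ratio = sum(eye_states) / len(eye_states)
        return ratio, ratio > threshold
    ```

    For example, 2 closed frames out of 10 gives a PERCLOS of 0.2 (not drowsy), while 5 out of 10 exceeds the threshold and would trigger the alarm.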

  7. Osm-Oriented Method of Multimodal Route Planning

    NASA Astrophysics Data System (ADS)

    Li, X.; Wu, Q.; Chen, L.; Xiong, W.; Jing, N.

    2015-07-01

    With the increasing pervasiveness of basic transportation and information infrastructure, the need for multimodal route planning is becoming more essential in the fields of communication and transportation, urban planning, logistics management, etc. This article describes an OSM-oriented method of multimodal route planning. Firstly, it introduces how to extract the needed information from OSM data and build a proper network model and storage model; then it analyses the customary cost criteria adopted by most travellers; finally, a shortest-path algorithm is used to calculate the best route across multiple modes of transport.
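    The shortest-path step can be sketched with Dijkstra's algorithm over a generalized-cost graph. The graph and costs below are illustrative stand-ins, not the paper's OSM network model; in a multimodal setting the edge cost could encode travel time, fare, or mode-transfer penalties.

    ```python
    import heapq

    # Dijkstra's shortest-path algorithm on a cost-weighted directed graph.
    def dijkstra(graph, source, target):
        """graph: {node: [(neighbor, cost), ...]}; returns minimal total cost."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == target:
                return d
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry, already improved
            for nbr, cost in graph.get(node, []):
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        return float("inf")  # target unreachable
    ```

    On a toy graph where A→B costs 1, B→C costs 2, and A→C directly costs 4, the algorithm returns 3 via B.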

  8. Study on combat effectiveness of air defense missile weapon system based on queuing theory

    NASA Astrophysics Data System (ADS)

    Zhao, Z. Q.; Hao, J. X.; Li, L. J.

    2017-01-01

    Queuing theory is a method for analyzing the combat effectiveness of an air defense missile weapon system. A service-probability model based on queuing theory was constructed and applied to analyzing the combat effectiveness of the "Sidewinder" and "Tor-M1" air defense missile weapon systems. Finally, for different target densities, the combat effectiveness of different combat units of the two types of air defense missile weapon system is calculated. This method can be used to analyze the usefulness of air defense missile weapon systems.
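    A standard queuing-theory building block for this kind of analysis is the Erlang-B formula, which gives the probability that all servers (here, fire channels) are busy when arrivals (targets) present an offered load a = λ/μ. This is a generic illustration of queuing analysis, not the authors' specific service-probability model.

    ```python
    # Erlang-B blocking probability via the standard stable recurrence
    # 1/B(k) = 1 + (k/a) * 1/B(k-1), with B(0) = 1.
    # a: offered load (arrival rate / service rate), c: number of channels.
    def erlang_b(a, c):
        inv_b = 1.0                       # 1/B(0)
        for k in range(1, c + 1):
            inv_b = 1.0 + inv_b * k / a   # 1/B(k)
        return 1.0 / inv_b
    ```

    For an offered load of 2 Erlangs on 2 channels the blocking probability is 0.4, i.e. 40% of arriving targets would find every channel busy.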

  9. Electronic response to nuclear breathing mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludwig, Hendrik; Ruffini, Remo; ICRANet, University of Nice-Sophia Antipolis, 28 Av. de Valrose, 06103 Nice Cedex 2

    2015-12-17

    Based on our previous work on stationary oscillation modes of electrons around giant nuclei, we show how to treat a general driving force on the electron gas, such as the one generated by the breathing mode of the nucleus, by means of the spectral method. As an example we demonstrate this method for a system with Z = 10^4 in β-equilibrium with the electrons compressed up to the nuclear radius. In this case the stationary modes can be obtained analytically, which allows for a very speedy numerical calculation of the final result.

  10. Novel analytical method to measure formaldehyde release from heated hair straightening cosmetic products: Impact on risk assessment.

    PubMed

    Galli, Corrado Lodovico; Bettin, Federico; Metra, Pierre; Fidente, Paola; De Dominicis, Emiliano; Marinovich, Marina

    2015-08-01

    Hair straightening cosmetic products may contain formaldehyde (FA). In Europe, FA is permitted for use in personal care products at concentrations ⩽ 0.2 g/100 g. According to the Cosmetic Ingredient Review (CIR) Expert Panel, products are safe when the formalin (a 37% saturated solution of FA in water) concentration does not exceed 0.2 g/100 g (0.074 g/100 g calculated as FA). The official method of reference does not discriminate between "free" FA and FA released into the air after heating FA donors. The method presented here captures and collects the FA released into the air from heated cosmetic products by derivatization with 2,4-dinitrophenylhydrazine and final analysis with a UPLC/DAD instrument. Reliable data in terms of linearity, recovery, repeatability and sensitivity are obtained. Of a total of 72 cosmetic products from the market that were analyzed, 42% showed FA concentrations very close to or above the threshold value (0.074 g/100 g calculated as FA) suggested by the Cosmetic Ingredient Review committee, whereas 11 products that were negative using the official method of reference were close to or above that threshold. This may pose a health problem for occasional users and professional hair stylists. Copyright © 2015 Elsevier Inc. All rights reserved.
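    The CIR threshold arithmetic quoted above is simple to make explicit: 0.2 g/100 g of formalin at 37% (w/w) formaldehyde corresponds to 0.2 × 0.37 = 0.074 g/100 g expressed as free FA. The helper names below are illustrative.

    ```python
    # Convert a formalin concentration (g per 100 g of product) to its
    # equivalent free-formaldehyde concentration, assuming 37% (w/w) FA.
    def formalin_to_fa(formalin_g_per_100g, fa_fraction=0.37):
        return formalin_g_per_100g * fa_fraction

    # Compare a measured FA concentration against the CIR threshold
    # of 0.074 g/100 g quoted in the abstract.
    def exceeds_cir_threshold(fa_g_per_100g, threshold=0.074):
        return fa_g_per_100g > threshold
    ```

    So a product at the 0.2 g/100 g formalin limit sits exactly at the 0.074 g/100 g FA threshold, and any FA measurement above that value flags the product.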

  11. Evaluating Sustainability of Cropland Use in Yuanzhou County of the Loess Plateau, China Using an Emergy-Based Ecological Footprint

    PubMed Central

    Bai, Xiaomei; Wen, Zhongming; An, Shaoshan; Li, Bicheng

    2015-01-01

    Evaluating the sustainability of cropland use is essential for guaranteeing a secure food supply and accomplishing agriculture sustainable development. This study was conducted in the ecologically vulnerable Loess Plateau region of China to evaluate the sustainability of cropland use based on an ecological footprint model that integrates emergy analysis. One modified method proposed in 2005 is known as the emergetic ecological footprint (EEF). We enhanced the method by accounting for both the surface soil energy in the carrying capacity calculation and the net topsoil loss for human consumption in the EF calculation. This paper evaluates whether the cropland of the study area was overloaded or sustainably managed during the period from 1981 to 2009. Toward this end, the final results obtained from EEF were compared to conventional EF and previous methods. The results showed that the cropland of Yuanzhou County has not been used sustainably since 1983, and the conventional EF analysis provided similar results. In contrast, a deficit did not appear during this time period when previous calculation methods of others were used. Additionally, the ecological sustainable index (ESI) from three models indicated that the recently used cropland system is unlikely to be unsustainable. PMID:25738289

  12. Study of motion of optimal bodies in the soil of grid method

    NASA Astrophysics Data System (ADS)

    Kotov, V. L.; Linnik, E. Yu

    2016-11-01

    The paper presents a method of calculating optimum body shapes using an axisymmetric numerical method based on the Godunov scheme and Grigoryan's elastoplastic soil model. Two problems are solved for determining the generatrix of a body of revolution of given length and base radius: a body of minimum penetration resistance and a body of maximum penetration depth. The numerical calculations are carried out with a modified method of local variations, which significantly reduces the number of operations for different representations of the generatrix. Using a quadratic local-interaction model for preliminary assessments significantly simplifies the search for the optimal body. A qualitative similarity is noted between the convergence of the numerical optimization based on the local-interaction model and that within continuum mechanics. The optimal bodies are compared with absolutely optimal bodies possessing the minimum penetration resistance, below which it is impossible to go under the given geometric constraints. It is shown that a conical striker with a variable vertex angle, which is the absolutely optimal body of minimum penetration resistance for each value of the penetration velocity, reaches a final penetration depth only 12% greater than that of the traditional absolutely optimal body of maximum penetration depth.

  13. Crack Propagation Calculations for Optical Fibers under Static Bending and Tensile Loads Using Continuum Damage Mechanics

    PubMed Central

    Chen, Yunxia; Cui, Yuxuan; Gong, Wenjun

    2017-01-01

    Static fatigue behavior is the main failure mode of optical fibers applied in sensors. In this paper, a computational framework based on continuum damage mechanics (CDM) is presented to calculate the crack propagation process and failure time of optical fibers subjected to static bending and tensile loads. For this purpose, the static fatigue crack propagation in the glass core of the optical fiber is studied. Combining a finite element method (FEM), we use the continuum damage mechanics for the glass core to calculate the crack propagation path and corresponding failure time. In addition, three factors including bending radius, tensile force and optical fiber diameter are investigated to find their impacts on the crack propagation process and failure time of the optical fiber under concerned situations. Finally, experiments are conducted and the results verify the correctness of the simulation calculation. It is believed that the proposed method could give a straightforward description of the crack propagation path in the inner glass core. Additionally, the predicted crack propagation time of the optical fiber with different factors can provide effective suggestions for improving the long-term usage of optical fibers. PMID:29140284
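    A simplified way to see how a static-fatigue failure time arises is the common power-law subcritical crack-growth model for glass, v = A·(K/K_Ic)^n with K = Y·σ·√(πa). This is an illustrative sketch of that generic model, not the paper's CDM/FEM framework, and every parameter value below (A, n, Y, K_Ic, the stresses) is an assumed placeholder.

    ```python
    import math

    # Power-law static-fatigue sketch: integrate crack growth until the
    # stress intensity factor reaches the fracture toughness KIc.
    # All parameter values are illustrative placeholders.
    def failure_time(a0, sigma, KIc=0.75e6, Y=1.12, A=1e-3, n=20, dt=1.0):
        a, t = a0, 0.0
        while True:
            K = Y * sigma * math.sqrt(math.pi * a)   # stress intensity factor
            if K >= KIc:
                return t                             # fast fracture
            a += A * (K / KIc) ** n * dt             # subcritical crack growth
            t += dt
    ```

    Because the growth rate scales as K^n with large n, a modest increase in applied stress shortens the predicted failure time dramatically, which is the qualitative behavior static-fatigue analyses of optical fibers rely on.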

  14. Theoretical investigation on the molecular structure, Infrared, Raman and NMR spectra of para-halogen benzenesulfonamides, 4-X-C6H4SO2NH2 (X = Cl, Br or F)

    NASA Astrophysics Data System (ADS)

    Karabacak, Mehmet; Çınar, Mehmet; Çoruh, Ali; Kurt, Mustafa

    2009-02-01

    In the present study, the structural properties of para-halogen benzenesulfonamides, 4-X-C6H4SO2NH2 (4-chlorobenzenesulfonamide (I), 4-bromobenzenesulfonamide (II) and 4-fluorobenzenesulfonamide (III)), have been studied extensively utilizing ab initio Hartree-Fock (HF) and density functional theory (DFT) employing the B3LYP exchange correlation. The vibrational frequencies were calculated and the scaled values were compared with experimental values. The complete assignments were performed on the basis of the total energy distribution (TED) of the vibrational modes, calculated with the scaled quantum mechanics (SQM) method. The effects of the halogen substituent on the characteristic benzenesulfonamide bands in the spectra are discussed. The 1H and 13C nuclear magnetic resonance (NMR) chemical shifts of the molecules were calculated using the Gauge-Invariant Atomic Orbital (GIAO) method. Finally, geometric parameters, vibrational bands and chemical shifts were compared with the available experimental data for the molecules. The fully optimized geometries of the molecules were found to be consistent with the X-ray crystal structures. The observed and calculated frequencies and chemical shifts were found to be in very good agreement.

  15. Crack Propagation Calculations for Optical Fibers under Static Bending and Tensile Loads Using Continuum Damage Mechanics.

    PubMed

    Chen, Yunxia; Cui, Yuxuan; Gong, Wenjun

    2017-11-15

    Static fatigue behavior is the main failure mode of optical fibers applied in sensors. In this paper, a computational framework based on continuum damage mechanics (CDM) is presented to calculate the crack propagation process and failure time of optical fibers subjected to static bending and tensile loads. For this purpose, the static fatigue crack propagation in the glass core of the optical fiber is studied. Combining a finite element method (FEM), we use the continuum damage mechanics for the glass core to calculate the crack propagation path and corresponding failure time. In addition, three factors including bending radius, tensile force and optical fiber diameter are investigated to find their impacts on the crack propagation process and failure time of the optical fiber under concerned situations. Finally, experiments are conducted and the results verify the correctness of the simulation calculation. It is believed that the proposed method could give a straightforward description of the crack propagation path in the inner glass core. Additionally, the predicted crack propagation time of the optical fiber with different factors can provide effective suggestions for improving the long-term usage of optical fibers.

  16. Study on Power Loss Reduction Considering Load Variation with Large Penetration of Distributed Generation in Smart Grid

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Lv, Xiangyu; Guo, Li; Cai, Lixia; Jie, Jinxing; Su, Kuo

    2017-05-01

    With the increasing penetration of distributed generation in the smart grid, the problems of rising power loss and of short-circuit capacity exceeding the rated capacity of circuit breakers will become more serious. In this paper, a methodology (modified BPSO) is presented for network reconfiguration, based on a hybrid of Tabu Search and BPSO algorithms, that prevents local convergence and decreases the calculation time by using double fitness functions to handle the constraints. Moreover, an average load simulation method (ALS method) that accounts for load variation is proposed, in which the average load value is used in the calculation instead of the actual load. Finally, the results of a simulation case study confirm that the approach drastically decreases the losses and markedly improves the voltage profiles, while the short-circuit capacity is reduced below the shut-off capacity of the circuit breaker. The power losses do not increase much even when the short-circuit capacity constraint is considered, and the voltage profiles are better with this constraint in place. The ALS method is simple, and its calculation time is fast.

  17. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  18. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods : algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
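    The simplest example of a numerical method that returns an uncertainty alongside its answer is Monte Carlo integration with a standard error. This toy sketch only illustrates the theme of the paper; the probabilistic numerical methods discussed there (e.g. Bayesian quadrature) are more sophisticated, and the function name and defaults here are illustrative.

    ```python
    import math
    import random

    # Monte Carlo integration of f on [a, b], returning both the estimate
    # and a standard error quantifying the numerical uncertainty.
    def mc_integrate(f, a, b, n=10000, seed=0):
        rng = random.Random(seed)                   # fixed seed for reproducibility
        samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        estimate = (b - a) * mean
        stderr = (b - a) * math.sqrt(var / n)       # uncertainty of the estimate
        return estimate, stderr
    ```

    A downstream decision can then weigh the estimate against its reported uncertainty instead of treating the number as exact, which is the shift in perspective the authors argue for.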

  19. A project based on multi-configuration Dirac-Fock calculations for plasma spectroscopy

    NASA Astrophysics Data System (ADS)

    Comet, M.; Pain, J.-C.; Gilleron, F.; Piron, R.

    2017-09-01

    We present a project dedicated to hot plasma spectroscopy based on a Multi-Configuration Dirac-Fock (MCDF) code, initially developed by J. Bruneau. The code is briefly described and the use of the transition state method for plasma spectroscopy is detailed. Then an opacity code for local-thermodynamic-equilibrium plasmas using MCDF data, named OPAMCDF, is presented. Transition arrays for which the number of lines is too large to be handled in a Detailed Line Accounting (DLA) calculation can be modeled within the Partially Resolved Transition Array method or using the Unresolved Transition Arrays formalism in jj-coupling. An improvement of the original Partially Resolved Transition Array method is presented which gives a better agreement with DLA computations. Comparisons with some absorption and emission experimental spectra are shown. Finally, the capability of the MCDF code to compute atomic data required for collisional-radiative modeling of plasma at non local thermodynamic equilibrium is illustrated. In addition to photoexcitation, this code can be used to calculate photoionization, electron impact excitation and ionization cross-sections as well as autoionization rates in the Distorted-Wave or Close Coupling approximations. Comparisons with cross-sections and rates available in the literature are discussed.

  20. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on the conformal mapping (CM) method, for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method predicts the magnetic flux density and EC loss precisely.

  1. Optimization of radial-type superconducting magnetic bearing using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi

    2018-07-01

    It is important and complicated to model and optimize the levitation behavior of a superconducting magnetic bearing (SMB), owing to the nonlinear constitutive relationships of the superconductor and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the multiple parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified model of the levitation force is established using a 2D finite element method with the H-formulation. In the model, the boundary condition of the superconducting stator is imposed via harmonic series expressions describing the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force are also performed and validate the model. A statistical method, the Taguchi method, is adopted to optimize the load capacity of the SMB. The effects of six optimization parameters on the target characteristic are then discussed, and the optimum parameter combination is finally determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is suitable for optimizing the SMB.
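    A core ingredient of any Taguchi study is the signal-to-noise (S/N) ratio computed for each parameter-level combination. The standard "larger-the-better" formula (appropriate when maximizing a quantity such as load capacity) is sketched below; the paper's six-parameter orthogonal-array design is not reproduced here.

    ```python
    import math

    # Taguchi "larger-the-better" signal-to-noise ratio:
    # SN = -10 * log10( mean(1 / y_i^2) ), maximized over parameter levels.
    def sn_larger_the_better(ys):
        return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))
    ```

    Repeated responses of 10 give an S/N of 20 dB, and uniformly larger responses give a larger S/N, so the parameter level with the highest S/N is selected for each factor.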

  2. DFT computational analysis of piracetam

    NASA Astrophysics Data System (ADS)

    Rajesh, P.; Gunasekaran, S.; Seshadri, S.; Gnanasambandan, T.

    2014-11-01

    Density functional theory calculations with B3LYP using the 6-31G(d,p) and 6-31++G(d,p) basis sets have been used to determine ground-state molecular geometries. The first-order hyperpolarizability (β0) and related properties (β, α0 and Δα) of piracetam are calculated using the B3LYP/6-31G(d,p) method within the finite-field approach. The stability of the molecule has been analyzed using NBO/NLMO analysis. The calculated first hyperpolarizability shows that the molecule is attractive for future applications in non-linear optics. The molecular electrostatic potential (MEP) at a point in the space around a molecule gives an indication of the net electrostatic effect produced at that point by the total charge distribution of the molecule. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. A Mulliken population analysis of the atomic charges is also presented. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-Vis spectra and electronic absorption properties are explained and illustrated from the frontier molecular orbitals.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herk, A.; Poerschke, A.; Beach, R.

    In 2012-2013, IBACOS worked with a builder, Brookfield Homes in Denver, Colorado, to design and construct a Passive House certified model home. IBACOS used several modeling programs and calculation methods, together with Brookfield's architect, KGA Studio, to complete the final design package. This package included upgrades to the thermal enclosure, basement insulation, windows, and heating, ventilation, and air conditioning. Short-term performance testing in the Passive House was done during and after construction.

  4. Irreducible correlation functions of the S matrix in the coordinate representation: application in calculating Lorentzian half-widths and shifts.

    PubMed

    Ma, Q; Tipping, R H; Boulet, C

    2006-01-07

    By introducing the coordinate representation, the derivation of the perturbation expansion of the Liouville S matrix is formulated in terms of classically behaved autocorrelation functions. Because these functions are characterized by a pair of irreducible tensors, their number is limited to a few. They represent how the overlaps of the potential components change with a time displacement, and under normal conditions their magnitudes decrease by several orders of magnitude when the displacement reaches several picoseconds. The correlation functions contain all the dynamical information about the collision processes necessary for calculating half-widths and shifts, and they can easily be derived with high accuracy. Their well-behaved profiles, especially the rapid decrease in magnitude, enable one to transform easily the dynamical information contained in them from the time domain to the frequency domain. More specifically, because these correlation functions are well time-limited, their continuous Fourier transforms are band-limited; the latter can therefore be accurately replaced by discrete Fourier transforms and calculated with a standard fast Fourier transform method. In addition, one can easily calculate their Cauchy principal-value integrals and derive all the functions necessary for calculating half-widths and shifts. A great advantage of introducing the coordinate representation and choosing the correlation functions as the starting point is that one is able to calculate the half-widths and shifts with high accuracy, no matter how complicated the potential models are and no matter what kind of trajectories are chosen; in any case, the convergence of the calculated results is always guaranteed. As a result, with this new method one can remove some uncertainties incorporated in current width and shift studies. As a test, we present calculated Raman Q linewidths for the N2-N2 pair based on several trajectories, including the more accurate "exact" ones. Finally, by using this new method as a benchmark, we have carried out convergence checks for calculated values based on usual methods and have found that some results in the literature are not converged.
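The time-to-frequency step described above can be illustrated with a toy time-limited correlation function; the Gaussian profile and picosecond decay below are assumptions chosen only to mimic the qualitative behavior described in the abstract:

```python
import numpy as np

# Illustrative time-limited autocorrelation: a Gaussian decaying over ~1 ps
# (a stand-in for the S-matrix correlation functions in the text).
tau = 1.0e-12                      # decay time, s
dt = 0.01e-12                      # sampling step, s
t = np.arange(-4096, 4096) * dt    # window much longer than the decay
C = np.exp(-t**2 / (2 * tau**2))   # well time-limited -> band-limited FT

# The discrete Fourier transform (times dt) replaces the continuous one.
spec = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(C))) * dt
freq = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))

# Analytic continuous FT of the Gaussian, for comparison:
# integral of exp(-t^2/2tau^2) e^{-2 pi i f t} dt
#   = tau*sqrt(2 pi)*exp(-2 pi^2 tau^2 f^2)
exact = tau * np.sqrt(2 * np.pi) * np.exp(-2 * (np.pi * tau * freq)**2)

err = np.max(np.abs(spec.real - exact)) / exact.max()
print(err)   # tiny: the band-limited spectrum is captured accurately
```

Because the function is well time-limited, the FFT reproduces the continuous transform essentially to machine precision, which is the point made in the abstract.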

  5. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate use of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, the numerical error in the time-cumulative value of ET(sub a) is considered in addition to the error over individual time steps examined in the previous study; this cumulative ET(sub a) is more relevant to the final crop yield.
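The dependence of the numerical error on the time step can be illustrated with a toy linear soil-water depletion model (not the paper's equations): an explicit step overestimates the exact evapotranspiration over the step, and the relative error grows with the step size.

```python
import numpy as np

# Toy linear-stress depletion model (an illustration, not the paper's
# equations): soil water W depletes as dW/dt = -k*W, so the exact ET over a
# step of length h starting from W is W*(1 - exp(-k*h)); a single explicit
# (Euler) step instead removes k*h*W, overestimating the exact amount.
k = 0.2          # 1/day, hypothetical stress coefficient
W0 = 50.0        # initial plant-available soil water, mm

def exact_eta(W, h):
    return W * (1.0 - np.exp(-k * h))

def euler_eta(W, h):
    return k * h * W

hs = (0.25, 1.0, 2.0)   # time steps, days
errs = [(euler_eta(W0, h) - exact_eta(W0, h)) / exact_eta(W0, h) for h in hs]
print(list(zip(hs, errs)))    # relative error grows with the time step
```

For this linear toy model the per-step relative error is k*h/(1 - exp(-k*h)) - 1, always positive and increasing in h; the paper's full-range equations add a dependence on the initial soil water that this sketch does not capture.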

  6. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    PubMed

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    The validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data for the ETV-ICP-OES method was carried out.
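The trend check mentioned above can be sketched with the von Neumann ratio (sum of successive squared differences over the total sum of squares); values near 2 suggest no trend, values well below 2 suggest one. The signals below are synthetic stand-ins for calibration time series:

```python
import numpy as np

rng = np.random.default_rng(0)

def neumann_ratio(x):
    """von Neumann ratio: successive squared differences over total variance.
    Values near 2 suggest no trend; values well below 2 suggest a trend."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.diff(x)**2) / np.sum((x - x.mean())**2)

stable = rng.normal(100.0, 2.0, size=200)        # stationary "signal"
drift = stable + np.linspace(0.0, 20.0, 200)     # same signal plus a trend

r_stable = neumann_ratio(stable)
r_drift = neumann_ratio(drift)
print(r_stable, r_drift)   # near 2 for the stable series, far below 2 with drift
```

A formal test would compare the ratio against tabulated critical values for the given sample size; the sketch only shows the direction of the statistic.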

  7. Optoelectronic imaging of speckle using image processing method

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image-processing treatment of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are combined in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is likewise based on a heat-equation PDE; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase is then unwrapped. Finally, the image-processing method was applied to automatically measure a bubble in rubber under negative pressure, which could be used in tire inspection.

  8. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering spectral calibration and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmitted wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.
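The step-number-to-wavelength calibration can be sketched as a theoretical dispersion curve corrected against reference lines; all numbers below (the dispersion model, step positions, and reference wavelengths) are hypothetical:

```python
import numpy as np

# Hypothetical LVF dispersion: a theoretical step->wavelength curve is
# corrected "line to line" against reference lines of known wavelength.
steps_ref = np.array([200.0, 1400.0, 2600.0, 3800.0])   # motor steps
lam_ref = 3.0 + 2.1e-3 * steps_ref                      # "measured" lines, um

def lam_theory(step):
    # assumed theoretical dispersion (slightly off in offset and slope)
    return 2.8 + 2.0e-3 * step

# Residuals at the reference lines are absorbed by a low-order correction.
resid = lam_ref - lam_theory(steps_ref)
corr = np.polynomial.Polynomial.fit(steps_ref, resid, deg=1)

def lam_calibrated(step):
    return lam_theory(step) + corr(step)

err = np.abs(lam_calibrated(steps_ref) - lam_ref)
print(err.max())      # the linear residual is removed almost exactly
```

In practice the residual would be validated at independent laser lines (as the abstract does at 3.39 μm and 10.69 μm) rather than at the fitting points themselves.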

  9. The application of midbond basis sets in efficient and accurate ab initio calculations on electron-deficient systems

    NASA Astrophysics Data System (ADS)

    Choi, Chu Hwan

    2002-09-01

    Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). 
Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.

  10. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important consideration for enabling real-time fluid simulation in computer graphics. To address this problem, we compared the performance of the Algebraic Multigrid and the Geometric Multigrid in the V-Cycle and Full-Cycle schemes, and analyzed the convergence and speed of the different methods. All calculations in this paper are performed with GPU parallel computing. Finally, we run experiments on 3D grids at each scale and report the experimental results.
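A minimal geometric-multigrid sketch, assuming a 1D Poisson problem with a weighted-Jacobi smoother and an exact coarse solve (far simpler than the paper's 3D GPU solver), shows the V-cycle structure being compared:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0/3.0):
    # weighted-Jacobi smoothing for -u'' = f (Dirichlet ends held at 0)
    for _ in range(sweeps):
        u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def vcycle(u, f, h):
    # pre-smooth, coarse-grid correction, post-smooth (two-grid V-cycle)
    u = jacobi(u, f, h, 3)
    n = u.size - 1
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
    rc = np.zeros(n//2 + 1)
    rc[1:-1] = 0.25*(r[1:-2:2] + 2*r[2:-1:2] + r[3::2])  # full weighting
    hc, nc = 2*h, n//2
    A = (np.diag(2*np.ones(nc-1)) - np.diag(np.ones(nc-2), 1)
         - np.diag(np.ones(nc-2), -1))/(hc*hc)
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])   # exact coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                # linear interpolation back
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    u = u + e
    return jacobi(u, f, h, 3)

n = 64
h = 1.0/n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2*np.sin(np.pi*x)       # -u'' = f with solution u = sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = vcycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi*x)))
print(err)   # settles at the O(h^2) discretization error
```

A full solver would recurse instead of solving the coarse grid directly, and an algebraic multigrid would build the coarse operator from the matrix rather than from the grid geometry.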

  11. Computational predictions of stereochemistry in asymmetric thiazolium- and triazolium-catalyzed benzoin condensations.

    PubMed

    Dudding, Travis; Houk, Kendall N

    2004-04-20

    The catalytic asymmetric thiazolium- and triazolium-catalyzed benzoin condensations of aldehydes and ketones were studied with computational methods. Transition-state geometries were optimized by using Morokuma's IMOMO [integrated MO (molecular orbital) + MO method] variation of ONIOM (n-layered integrated molecular orbital method) with a combination of B3LYP/6-31G(d) and AM1 levels of theory, and final transition-state energies were computed with single-point B3LYP/6-31G(d) calculations. Correlations between experiment and theory were found, and the origins of stereoselection were identified. Thiazolium catalysts were predicted to be less selective than triazolium catalysts, a trend also found experimentally.

  12. Fast sparse recovery and coherence factor weighting in optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    He, Hailong; Prakash, Jaya; Buehler, Andreas; Ntziachristos, Vasilis

    2017-03-01

    Sparse recovery algorithms have shown great potential for reconstructing images from limited-view datasets in optoacoustic tomography, with the disadvantage of being computationally expensive. In this paper, we improve the fast-converging Split Augmented Lagrangian Shrinkage Algorithm (SALSA) based on a least-squares QR (LSQR) formulation to perform accelerated reconstructions. Further, a coherence factor is calculated to weight the final reconstruction, which further reduces artifacts arising in limited-view scenarios and acoustically heterogeneous media. Several phantom and biological experiments indicate that the accelerated SALSA method with coherence factor (ASALSA-CF) provides improved reconstructions and much faster convergence than existing sparse recovery methods.
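The coherence-factor weighting can be sketched directly from its usual definition (coherent sum power over N times the incoherent sum power, per pixel); the channel data below are synthetic:

```python
import numpy as np

# Coherence factor (CF) sketch: per-pixel ratio of coherent to incoherent
# power over the N channels; CF is 1 for perfectly aligned delayed signals
# and tends toward 0 for incoherent clutter.
def coherence_factor(delayed):         # delayed: (N_channels, N_pixels)
    coherent = np.abs(delayed.sum(axis=0))**2
    incoherent = delayed.shape[0] * (np.abs(delayed)**2).sum(axis=0)
    return np.where(incoherent > 0, coherent / incoherent, 0.0)

rng = np.random.default_rng(1)
n_ch = 32
aligned = np.ones((n_ch, 1))                 # in-phase signal pixel
clutter = rng.standard_normal((n_ch, 1))     # incoherent clutter pixel
cf = coherence_factor(np.hstack([aligned, clutter]))
print(cf)        # first pixel = 1, second pixel well below 1
```

Multiplying the reconstructed image by such a per-pixel factor suppresses pixels whose channel contributions do not add up coherently, which is how the weighting reduces limited-view artifacts.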

  13. Calculation of the temporal gravity variation from spatially variable water storage change in soils and aquifers

    NASA Astrophysics Data System (ADS)

    Leirião, Sílvia; He, Xin; Christiansen, Lars; Andersen, Ole B.; Bauer-Gottwein, Peter

    2009-02-01

    Total water storage change in the subsurface is a key component of the global, regional and local water balances. It is partly responsible for temporal variations of the Earth's gravity field in the micro-Gal (1 μGal = 10^-8 m s^-2) range. Measurements of temporal gravity variations can thus be used to determine the water storage change in the hydrological system. A numerical method for the calculation of temporal gravity changes from the output of hydrological models is developed. Gravity changes due to incremental prismatic mass storage in the hydrological model cells are determined to give an accurate 3D gravity effect. The method is implemented in MATLAB and can be used jointly with any hydrological simulation tool. The method is composed of three components: the prism formula, the MacMillan formula and the point-mass approximation. With increasing normalized distance between the storage prism and the measurement location, the algorithm switches first from the prism equation to the MacMillan formula and finally to the simple point-mass approximation. The method was used to calculate the gravity signal produced by an aquifer pump test. Results are in excellent agreement with the direct numerical integration of the Theis well solution and the semi-analytical results presented in [Damiata, B.N., and Lee, T.-C., 2006. Simulated gravitational response to hydraulic testing of unconfined aquifers. Journal of Hydrology 318, 348-359]. However, the presented method can be used to forward-calculate hydrology-induced temporal variations in gravity from any hydrological model, provided earth-curvature effects can be neglected. The method allows for the routine assimilation of ground-based gravity data into hydrological models.
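The far-field component of the scheme, the point-mass approximation, reduces to Newton's law applied to the cell's water-mass change; the cell size, storage change, and geometry below are hypothetical:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Point-mass sketch: vertical gravity change from a storage change dS
# (meters of water) over a model cell of area `area`, seen from a meter
# offset by (dx, dy, dz); rho_water = 1000 kg/m^3 is assumed.
def dg_point_mass(dS, area, dx, dy, dz):
    """Vertical gravity change (m/s^2) from a cell treated as a point mass."""
    m = 1000.0 * dS * area          # water mass change, kg
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return G * m * dz / r**3        # vertical component of G*m/r^2

# 0.1 m storage change in a 10 m x 10 m cell, 20 m directly below the meter:
dg = dg_point_mass(0.1, 100.0, 0.0, 0.0, 20.0)
print(dg * 1e8)   # in microGal (1 uGal = 1e-8 m/s^2)
```

Close to the cell this approximation breaks down, which is why the method switches to the MacMillan formula and ultimately the exact prism formula at small normalized distances.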

  14. [Fast discrimination of edible vegetable oil based on Raman spectroscopy].

    PubMed

    Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng

    2012-07-01

    A novel method to rapidly discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils of known classes. Baseline correction and normalization were applied to their original Raman spectra to obtain standard spectra. Two characteristic peaks describing the degree of unsaturation of vegetable oil were selected as feature vectors, and the centers of all classes were calculated. For an edible vegetable oil of unknown class, the same pretreatment and feature extraction methods were used. The Euclidean distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the between-class distance much larger with the new feature extraction method than with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
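The minimum-distance classification step can be sketched as follows; the class names, two-peak feature values, and class centers are invented for illustration:

```python
import numpy as np

# Minimum-distance classifier sketch with hypothetical 2-D feature vectors
# (normalized intensities of two unsaturation-sensitive Raman peaks).
train = {
    "olive":     np.array([[0.30, 0.55], [0.32, 0.53], [0.29, 0.56]]),
    "sunflower": np.array([[0.62, 0.21], [0.60, 0.24], [0.64, 0.22]]),
}
# Class center = mean feature vector of the training samples.
centers = {c: v.mean(axis=0) for c, v in train.items()}

def classify(feature):
    # assign the class whose center is nearest in Euclidean distance
    return min(centers, key=lambda c: np.linalg.norm(feature - centers[c]))

print(classify(np.array([0.31, 0.54])))   # nearest to the "olive" center
print(classify(np.array([0.61, 0.23])))   # nearest to the "sunflower" center
```

The same pretreatment (baseline correction, normalization) must be applied to the unknown spectrum before its feature vector is compared to the stored centers.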

  15. SENS-5D trajectory and wind-sensitivity calculations for unguided rockets

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Huang, L. C. P.; Cook, R. A.

    1975-01-01

    A computational procedure is described which numerically integrates the equations of motion of an unguided rocket. Three translational and two angular (roll discarded) degrees of freedom are integrated through final burnout; from then until impact, only the three translational motions are considered. Inputs to the routine are the initial time, altitude, and velocity; vehicle characteristics; and other defined options. The input format has a wide range of flexibility for special calculations. Output is geared mainly to the wind-weighting procedure and includes a summary of the trajectory at burnout, apogee and impact, a summary of spent-stage trajectories, detailed position and vehicle data, unit-wind effects for head, tail and cross winds, Coriolis deflections, the range derivative, and the sensitivity curves (the so-called F(Z) and DF(Z) curves). The numerical integration procedure is a fourth-order, modified Adams-Bashforth predictor-corrector method. This method is supplemented by a fourth-order Runge-Kutta method to start the integration at t=0 and whenever error criteria demand a change in step size.
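The integration scheme named above, a fourth-order Adams-Bashforth-Moulton predictor-corrector started with Runge-Kutta, can be sketched on a scalar test equation (y' = -y rather than the rocket's equations of motion):

```python
import numpy as np

# Sketch of a 4th-order Adams-Bashforth-Moulton predictor-corrector,
# started with RK4, on the test problem y' = -y, y(0) = 1.
def f(t, y):
    return -y

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + h * (k1 + 2*k2 + 2*k3 + k4) / 6

h, t_end = 0.05, 2.0
n = int(round(t_end / h))
t = np.arange(n + 1) * h
y = np.empty(n + 1)
y[0] = 1.0
for i in range(3):                       # RK4 supplies the starting values
    y[i + 1] = rk4_step(t[i], y[i], h)

fv = [f(t[i], y[i]) for i in range(4)]   # history of derivative values
for i in range(3, n):
    # Adams-Bashforth predictor (4th order)
    yp = y[i] + h/24 * (55*fv[3] - 59*fv[2] + 37*fv[1] - 9*fv[0])
    # Adams-Moulton corrector (4th order)
    y[i + 1] = y[i] + h/24 * (9*f(t[i+1], yp) + 19*fv[3] - 5*fv[2] + fv[1])
    fv = fv[1:] + [f(t[i+1], y[i+1])]

err = abs(y[-1] - np.exp(-t_end))
print(err)    # small: O(h^4) global error
```

A step-size change invalidates the four-step history, which is why the routine restarts with RK4 whenever the error criteria force a new step size.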

  16. Study on Synergistic Mechanism of Inhibitor Mixture Based on Electron Transfer Behavior

    PubMed Central

    Han, Peng; He, Yang; Chen, Changfeng; Yu, Haobo; Liu, Feng; Yang, Hong; Ma, Yue; Zheng, Yanjun

    2016-01-01

    Mixing is an important method to improve the performance of surfactants because of their synergistic effects. The changes in bonding interaction and adsorption structure of IM and OP molecules before and after co-adsorption on the Fe(001) surface are calculated by the DFTB+ method. It is found that mixing enables the inhibitor molecules with higher EHOMO to donate more electrons while the inhibitor molecules with lower ELUMO accept more electrons, which strengthens the bonding interaction of both the inhibitor agent and the inhibitor additive with the metal surface. Meanwhile, water molecules in the compact layer of the electric double layer are repelled and the charge-transfer resistance during the corrosion process increases. Accordingly, the correlation between the frontier orbitals (EHOMO and ELUMO of the inhibitor molecules and the Fermi level of the metal) and the inhibition efficiency is determined. Finally, we propose a frontier-orbital matching principle for the synergistic effect of inhibitors, which is verified by electrochemical experiments. This principle provides an effective quantum-chemistry calculation method for the optimal selection of inhibitor mixtures. PMID:27671332

  17. Improved telescope focus using only two focus images

    NASA Astrophysics Data System (ADS)

    Barrick, Gregory; Vermeulen, Tom; Thomas, James

    2008-07-01

    In an effort to reduce the amount of time spent focusing the telescope and to improve the quality of the focus, a new procedure has been investigated and implemented at the Canada-France-Hawaii Telescope (CFHT). The new procedure is based on a paper by Tokovinin and Heathcote and requires only two out-of-focus images to determine the best focus for the telescope. Using only two images provides a great time savings over the five or more images required for a standard through-focus sequence. In addition, it has been found that this method is significantly less sensitive to seeing variations than the traditional through-focus procedure, so the quality of the resulting focus is better. Finally, the new procedure relies on a second moment calculation and so is computationally easier and more robust than methods using a FWHM calculation. The new method has been implemented for WIRCam for the past 18 months, for MegaPrime for the past year, and has recently been implemented for ESPaDOnS.
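The second-moment calculation underlying the method can be sketched as the rms radius of a spot image; the synthetic Gaussian spot below is a stand-in for a real defocused star image, not CFHT's pipeline:

```python
import numpy as np

# Second-moment spot size: intensity-weighted rms radius about the centroid,
# the robust width measure the two-image focus method relies on.
def rms_radius(img):
    y, x = np.indices(img.shape, dtype=float)
    tot = img.sum()
    cx, cy = (x * img).sum() / tot, (y * img).sum() / tot
    return np.sqrt((((x - cx)**2 + (y - cy)**2) * img).sum() / tot)

# Synthetic defocused spot: 2-D Gaussian, sigma = 4 px, on a 65x65 grid.
yy, xx = np.mgrid[0:65, 0:65]
sigma = 4.0
spot = np.exp(-((xx - 32)**2 + (yy - 32)**2) / (2 * sigma**2))
print(rms_radius(spot))   # ~ sigma*sqrt(2) for a 2-D Gaussian
```

Unlike a FWHM fit, this moment is a direct weighted sum over pixels, which is why it is computationally simple and robust for the donut-shaped out-of-focus images the procedure uses.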

  18. [Analysis and experimental verification of sensitivity and SNR of laser warning receiver].

    PubMed

    Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue

    2009-01-01

    In order to counter the increasingly serious threat from hostile lasers in modern warfare, research on laser warning technology and systems is urgent; the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on signal statistical detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. First, the probability distributions of the laser signal and the receiver noise are analyzed. Second, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation is established by introducing the detection probability and false-alarm-rate factors, and the mathematical expressions for the sensitivity and SNR are deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal-grating laser warning receiver developed by our group are analyzed; the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in the performance analysis of LWRs.
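The threshold-detection step can be sketched with Gaussian statistics: fix the false-alarm probability, invert the Gaussian tail for the threshold, and read off the detection probability at a given SNR. The numbers are illustrative, not the receiver's actual specifications:

```python
import math

# Neyman-Pearson threshold detection sketch under Gaussian noise.
def q(x):                        # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_inv(p, lo=-10.0, hi=10.0):  # invert Q by bisection (Q is decreasing)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pfa = 1e-6                      # required false-alarm probability (assumed)
thr = q_inv(pfa)                # threshold in units of the noise sigma
snr = 6.0                       # signal amplitude / noise sigma (assumed)
pd = q(thr - snr)               # resulting detection probability
print(thr, pd)
```

Raising the required false-alarm rate lowers the threshold and raises the detection probability; the sensitivity of the receiver is the smallest signal for which the required (pfa, pd) pair is met.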

  19. A baroclinic quasigeostrophic open ocean model

    NASA Technical Reports Server (NTRS)

    Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.

    1983-01-01

    A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model are tabulated as functions of the computational parameters and stability limits set; typically, errors were controlled between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.

  20. Simulation of the Continuous Casting and Cooling Behavior of Metallic Glasses

    PubMed Central

    Pei, Zhipu; Ju, Dongying

    2017-01-01

    The development of the melt spinning technique for the preparation of metallic glasses is summarized, and the limitations and restrictions of melt spinning embodiments are analyzed. As an improvement and variation of the melt spinning method, the vertical-type twin-roll casting (VTRC) process is discussed. As the thermal history experienced by the cast metal largely determines the quality of the final product, the cooling rate in the quenching process is believed to have a significant effect on glass formation. In order to estimate the ability of the VTRC method to produce metallic glasses, the temperature and flow fields of the melt in the molten pool were computed, and cooling rates under different casting conditions were calculated from the simulation results. Considering the fluid character of the casting process, the material derivative method based on continuum theory was adopted in the cooling-rate calculation. Results show that the VTRC process has a good capability for continuously casting metallic glass ribbons. PMID:28772779

  1. Research on Modeling of Propeller in a Turboprop Engine

    NASA Astrophysics Data System (ADS)

    Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong

    2015-05-01

    In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a high-accuracy real-time propeller model is required. A study is conducted to compare the real-time performance and precision of propeller models based on strip theory and on lifting-surface theory. The modeling by strip theory focuses on three points: first, FLUENT is adopted to calculate the lift and drag coefficients of the propeller; next, a method is presented to calculate the induced velocity that occurs in the ground rig test; finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting-surface theory; this approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the strip-theory model, which has the advantage in both real-time performance and accuracy, can meet the requirement.

  2. Calculating Launch Vehicle Flight Performance Reserve

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Pinson, Robin M.; Beard, Bernard B.

    2011-01-01

    This paper addresses different methods for determining the amount of extra propellant (flight performance reserve, or FPR) necessary to reach orbit with a high probability of success. One approach assumes that the various influential parameters are independent and that the result behaves as a Gaussian. Alternatively, probabilistic models may be used to determine the vehicle and environmental models that will be available (estimated) for a launch-day go/no-go decision. High-fidelity closed-loop Monte Carlo simulation determines the amount of propellant used with each random combination of parameters that are still unknown at the time of launch. Using the results of the Monte Carlo simulation, several methods were used to calculate the FPR. The final chosen solution involves determining distributions for the pertinent outputs and running a separate Monte Carlo simulation to obtain a best estimate of the required FPR. This result differs sufficiently from the results obtained using the other methods that the higher-fidelity approach is warranted.
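The contrast between the independent-Gaussian assumption and a Monte Carlo percentile can be sketched as follows; the dispersion sources and all numbers are hypothetical, not NASA's models or values:

```python
import numpy as np

# Monte Carlo FPR sketch: sample propellant usage from a hypothetical
# dispersion model, then compare a high percentile of usage against the
# Gaussian 3-sigma shortcut. One skewed dispersion breaks the Gaussian fit.
rng = np.random.default_rng(42)
n = 100_000
nominal = 100_000.0                         # kg, hypothetical nominal usage
winds = rng.normal(0.0, 400.0, n)           # wind-driven dispersion, kg
engine = rng.normal(0.0, 250.0, n)          # engine-performance dispersion, kg
other = rng.lognormal(7.0, 0.8, n)          # skewed dispersion source, kg
usage = nominal + winds + engine + other

# FPR as the margin from mean usage to the one-sided 3-sigma-equivalent level.
fpr_mc = np.percentile(usage, 99.865) - usage.mean()
fpr_gauss = 3.0 * usage.std()               # independent-Gaussian shortcut
print(fpr_mc, fpr_gauss)                    # they differ when usage is skewed
```

With a heavy right tail, the Monte Carlo percentile demands noticeably more reserve than three standard deviations would suggest, which is the kind of discrepancy that justifies the higher-fidelity approach in the abstract.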

  3. Simulation of the Continuous Casting and Cooling Behavior of Metallic Glasses.

    PubMed

    Pei, Zhipu; Ju, Dongying

    2017-04-17

    The development of the melt spinning technique for the preparation of metallic glasses is summarized, and the limitations and restrictions of melt spinning embodiments are analyzed. As an improvement and variation of the melt spinning method, the vertical-type twin-roll casting (VTRC) process is discussed. As the thermal history experienced by the cast metal largely determines the quality of the final product, the cooling rate in the quenching process is believed to have a significant effect on glass formation. In order to estimate the ability of the VTRC method to produce metallic glasses, the temperature and flow fields of the melt in the molten pool were computed, and cooling rates under different casting conditions were calculated from the simulation results. Considering the fluid character of the casting process, the material derivative method based on continuum theory was adopted in the cooling-rate calculation. Results show that the VTRC process has a good capability for continuously casting metallic glass ribbons.

  4. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    NASA Astrophysics Data System (ADS)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  5. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    PubMed

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method for calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented, based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectral distribution of the fractal component is estimated using an auto-regressive model, and the spectral exponent γ is estimated by the least-squares method; finally, the fractal dimension of the HRV signal is estimated according to the formula D = 2 - (γ - 1)/2. To validate the stability and reliability of the proposed method, fractional Brownian motion was used to simulate 24 fractal signals with a fractal dimension of 1.6; the results show that the method is stable and reliable.
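The final two steps (spectral-exponent fit, then D = 2 - (γ - 1)/2) can be sketched on a synthetic 1/f^γ signal with known exponent; a simple periodogram fit stands in here for the paper's auto-regressive spectral estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2**14
freq = np.fft.rfftfreq(n, d=1.0)[1:]      # positive frequencies (drop DC)

# Spectral synthesis of a 1/f^gamma process: |X(f)| ~ f^(-gamma/2) with
# random phases, so the PSD ~ f^(-gamma). gamma = 1.8 gives D = 1.6.
gamma_true = 1.8
amp = freq**(-gamma_true / 2.0)
phases = rng.uniform(0.0, 2.0 * np.pi, freq.size)
phases[-1] = 0.0                          # Nyquist bin must be real
spec = np.concatenate([[0.0], amp * np.exp(1j * phases)])
x = np.fft.irfft(spec, n)                 # synthesized fractal-like signal

# Periodogram and log-log straight-line fit recover gamma, hence D.
psd = np.abs(np.fft.rfft(x))[1:]**2
slope, _ = np.polyfit(np.log(freq), np.log(psd), 1)
gamma_est = -slope
D = 2.0 - (gamma_est - 1.0) / 2.0
print(gamma_est, D)    # close to 1.8 and 1.6
```

Real HRV data would first be wavelet-filtered to isolate the fractal component, and the AR-model spectrum of the paper is smoother than a raw periodogram; this sketch only shows the slope-to-dimension arithmetic.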

  6. Solar Tyrol project: using climate data for energy production estimation. The good practice of Tyrol in conceptualizing climate services.

    NASA Astrophysics Data System (ADS)

    Petitta, Marcello; Wagner, Jochen; Costa, Armin; Monsorno, Roberto; Innerebner, Markus; Moser, David; Zebisch, Marc

    2014-05-01

    The scientific community has in recent years been widely discussing the concept of "climate services". Several definitions have been used, but it remains a rather open concept. We used climate data from analyses and reanalyses to create a daily and hourly model of atmospheric turbidity in order to account for the effect of the atmosphere on incoming solar radiation, with the final aim of estimating the electricity production of photovoltaic (PV) modules in the Alps. Renewable energy production in the Alpine region is dominated by hydroelectricity, but the potential for photovoltaic energy production is gaining momentum. The southern part of the Alps and the inner Alpine regions in particular offer good conditions for PV energy production: the combination of high irradiance values and cold air temperatures in mountainous regions is well suited to solar cells. To enable a more widespread uptake of PV plants, PV has to become an important part of regional planning. To provide regional authorities and private stakeholders with a high-quality PV energy-yield climatology for the provinces of Bolzano/Bozen-South Tyrol (Italy) and Tyrol (Austria), the research project Solar Tyrol was started in 2012. Several methods exist to calculate very high resolution maps of solar radiation, most of which use climatological values. In this project we reconstructed the last 10 years of atmospheric turbidity using reanalysis and operational data in order to better estimate incoming solar radiation in the Alpine region. Our method is divided into three steps: i) clear-sky radiation: to estimate the atmospheric effect on solar radiation, we calculated the Linke turbidity factor using aerosol optical depth (AOD), surface albedo, atmospheric pressure, and total water content from ECMWF and MACC analyses; ii) shadows: we calculated the shadows of mountains and buildings using a 2 m resolution digital elevation model of the area and the GIS module r.sun, modified to fit our specific needs; iii) cloud effects: the clear-sky irradiance is modified using the cloud index provided by MeteoSwiss at very high temporal resolution (15 min between 2004 and 2012). These three steps produce a daily (eventually hourly) dataset of incoming solar radiation at 25 m horizontal resolution for the entire Tyrol region, reaching 2 m horizontal resolution for the inhabited areas. The final steps provide the potential electric energy production for two PV technologies, cadmium telluride and polycrystalline silicon; here the air temperature data have been used to include the temperature-efficiency factor of the PV modules. Results show improved accuracy in the estimated incoming solar radiation compared with standard methods, owing to the cloud and atmospheric-turbidity calculations used in our method. Moreover, we devised a specific method to estimate the shadowing effects of near and far objects: the problem is to adopt an appropriate horizontal resolution while keeping the calculation time for the entire geographical domain relatively low, and our method allows the correct horizontal resolution for the area to be estimated from the digital elevation model of the region. Finally, a web-based GIS interface has been set up to display the data to the public, and a spatial database has been developed to handle the large amount of data. The current results of our project demonstrate how scientific know-how and climate products can be used to provide relevant and simple-to-use information to stakeholders and political bodies, and how such an approach can have a relevant impact on current political and economic questions of local energy production and planning.

  7. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
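
    As a minimal illustration of reliability analysis of this kind (a first-order sketch, not the paper's perturbation/Edgeworth formulation), one can treat the performance function g as normal with mean μ_g and standard deviation σ_g, giving a reliability index β = μ_g/σ_g and failure probability Φ(−β):

```python
import math

def failure_probability(mu_g, sigma_g):
    """First-order reliability estimate: beta = mu_g / sigma_g,
    P_f = Phi(-beta), with Phi the standard normal CDF via erfc."""
    beta = mu_g / sigma_g
    return 0.5 * math.erfc(beta / math.sqrt(2.0))
```

    For β = 3 this gives P_f ≈ 1.35e-3; higher-order methods such as the Edgeworth series correct this estimate for non-normal distributions.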

  8. High-precision terahertz frequency modulated continuous wave imaging method using continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Wang, Tianyi; Dai, Bing; Li, Wenjun; Wang, Wei; You, Chengwu; Wang, Kejia; Liu, Jinsong; Wang, Shenglie; Yang, Zhengang

    2018-02-01

    Inspired by the extensive application of terahertz (THz) imaging technologies in the field of aerospace, we exploit a THz frequency-modulated continuous-wave imaging method with a continuous wavelet transform (CWT) algorithm to detect a multilayer heat shield made of special materials. The method uses the frequency-modulated continuous-wave system to capture the reflected THz signal and then processes the image data by the CWT with different basis functions. By calculating the sizes of the defect areas in the final images and comparing the results with the real samples, a practical high-precision THz imaging method is demonstrated. Our method can be an effective tool for the THz nondestructive testing of composites, drugs, and some cultural heritage objects.
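
    A minimal CWT along these lines, convolving a reflected trace with scaled Ricker ("Mexican hat") wavelets (a common generic choice, not necessarily the basis functions used in the paper), locates an echo in a 1-D signal:

```python
import numpy as np

def ricker(points, width):
    """Ricker (Mexican hat) wavelet sampled symmetrically around its peak."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

def cwt(signal, widths):
    """Continuous wavelet transform: one convolution row per wavelet width."""
    return np.array([
        np.convolve(signal, ricker(10 * int(w) + 1, w), mode='same')
        for w in widths
    ])
```

    A sharp echo at sample 50 produces a ridge whose maximum stays at sample 50 across scales, which is how layer interfaces stand out in the transformed data.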

  9. Predict the fatigue life of crack based on extended finite element method and SVR

    NASA Astrophysics Data System (ADS)

    Song, Weizhen; Jiang, Zhansi; Jiang, Hui

    2018-05-01

    We use the extended finite element method (XFEM) and support vector regression (SVR) to predict the fatigue life of plate cracks. Firstly, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. Then a prediction model is built from the functional relationship of the SIFs with the fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or different numbers of cycles. Because the accuracy of the forward Euler method is ensured only by a small step size, a new prediction method is presented to resolve this issue. Numerical examples were studied to demonstrate that the proposed method allows a larger step size while retaining high accuracy.
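
    The step-size sensitivity of forward-Euler life prediction can be seen in a toy Paris-law integration; the centre-crack geometry and material constants below are illustrative assumptions, not values from the paper:

```python
import math

def sif(stress, a):
    """Stress intensity factor range for a centre crack in an infinite plate."""
    return stress * math.sqrt(math.pi * a)

def fatigue_life_euler(a0, ac, stress, C=1e-11, m=3.0, dN=1000.0):
    """Integrate Paris' law da/dN = C * (dK)^m with forward Euler
    until the crack length a reaches the critical size ac; returns cycles."""
    a, n = a0, 0.0
    while a < ac:
        a += C * sif(stress, a) ** m * dN
        n += dN
    return n
```

    For a0 = 1 mm, ac = 10 mm and a 100 MPa stress range, the closed-form Paris integral gives roughly 7.8e5 cycles; the Euler result approaches this as dN shrinks, which is the accuracy/step-size trade-off the surrogate model is meant to relax.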

  10. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    NASA Astrophysics Data System (ADS)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify images as cloud or non-cloud. When the cloud is thin and small, however, these methods become inaccurate. In this paper a linear combination model of cloud images is proposed; with this model, the underlying surface information of remote sensing images can be removed, making the cloud detection result more accurate. Firstly, the automatic cloud detection program uses the linear combination model to separate the cloud information from the surface information in semi-transparent cloud images, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier was introduced to combine the different features into a single cloud classifier. AdaBoost can select the most effective features from many candidate features, so the calculation time is greatly reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier for comparison with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
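
    The feature-combination idea can be sketched with a minimal AdaBoost over decision stumps (one threshold on one feature). This is a generic textbook illustration, not the paper's implementation; the data layout (one feature vector per pixel or patch, labels ±1) is an assumption:

```python
import math

def stump_predict(x, feature, threshold, polarity):
    """Weak learner: a one-feature threshold test returning +1 or -1."""
    return polarity if x[feature] > threshold else -polarity

def train_adaboost(X, y, rounds=5):
    """Minimal AdaBoost over decision stumps; y entries must be +1 or -1."""
    n = len(X)
    weights = [1.0 / n] * n
    learners = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, polarity)
        for feature in range(len(X[0])):
            for threshold in sorted({x[feature] for x in X}):
                for polarity in (1, -1):
                    err = sum(w for x, label, w in zip(X, y, weights)
                              if stump_predict(x, feature, threshold, polarity) != label)
                    if best is None or err < best[0]:
                        best = (err, feature, threshold, polarity)
    # The best stump effectively "selects" one feature per round.
        err, feature, threshold, polarity = best
        alpha = 0.5 * math.log((1.0 - err) / max(err, 1e-12))
        learners.append((alpha, feature, threshold, polarity))
        # Re-weight: misclassified samples gain weight, correct ones lose it.
        weights = [w * math.exp(-alpha * label * stump_predict(x, feature, threshold, polarity))
                   for x, label, w in zip(X, y, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return learners

def predict(learners, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * stump_predict(x, f, t, p) for alpha, f, t, p in learners)
    return 1 if score >= 0 else -1
```

    Because each round picks the single most discriminative feature/threshold pair, the final classifier uses only a handful of the candidate features, which is the source of the speed-up described above.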

  11. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems because of the high space-bandwidth product (SBP) required. This paper builds on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can therefore serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: firstly, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; secondly, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method. While ensuring the quality of the 3D reconstruction, the proposed method shortens the computational time and improves computational efficiency.
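
    The spatial symmetry that motivates the look-up-table approach can be sketched as follows: the Gabor zone-plate fringe of a point source depends only on the squared radius r², so a 1-D table indexed by r² can fill the whole 2-D fringe. The sampling parameters here are illustrative assumptions:

```python
import numpy as np

def zone_plate_lut(n, pitch, z, wavelength):
    """Pre-computed 1-D table of the Gabor zone-plate fringe
    cos(pi * r^2 / (lambda * z)), indexed by squared radius r^2."""
    r2_max = 2.0 * (n * pitch) ** 2
    r2_axis = np.linspace(0.0, r2_max, 4 * n * n)
    return r2_axis, np.cos(np.pi * r2_axis / (wavelength * z))

def point_fringe(n, pitch, x0, y0, z, wavelength):
    """Fill an n x n fringe for a point source at (x0, y0) purely by LUT lookup."""
    r2_axis, lut = zone_plate_lut(n, pitch, z, wavelength)
    coords = (np.arange(n) - (n - 1) / 2.0) * pitch
    X, Y = np.meshgrid(coords, coords)
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    idx = np.clip(np.searchsorted(r2_axis, r2.ravel()), 0, len(r2_axis) - 1)
    return lut[idx].reshape(n, n)
```

    Because the lookup depends only on r², the fringe of an on-axis point is exactly symmetric under horizontal and vertical flips; that redundancy is what the N-LUT family of methods exploits to cut computation.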

  12. Tests with three-dimensional adjustments in the rectangular working section of the French T2 wind tunnel with an AS 07-type swept-back wing model

    NASA Technical Reports Server (NTRS)

    Blanchard, A.; Payry, M. J.; Breil, J. F.

    1986-01-01

    The results obtained on the AS 07 wing and the working-section walls for three types of configuration are reported. The first, called non-adapted, corresponds to diverging rectilinear upper and lower walls that compensate for boundary-layer thickening. It can serve as a basis for complete flow calculations. The second configuration corresponds to wall shapes determined from calculations that tend to minimize interference at the level of the fuselage. Finally, the third configuration, called two-dimensional adaptation, uses the standard method for T2 profile tests. This case was tested to determine the influence of wall shape and the magnitude of the resulting error. These results are not sufficient to validate the three-dimensional adaptation; they must be combined with calculations or with free-air tests.

  13. Isovector and flavor-diagonal charges of the nucleon

    NASA Astrophysics Data System (ADS)

    Gupta, Rajan; Bhattacharya, Tanmoy; Jang, Yong-Chull; Lin, Huey-Wen; Yoon, Boram

    2018-03-01

    We present an update on the status of the calculations of the isovector and flavor-diagonal charges of the nucleon. The calculations of the isovector charges are being done using ten 2+1+1-flavor HISQ ensembles generated by the MILC collaboration, covering the range of lattice spacings a ≈ 0.12, 0.09, 0.06 fm and pion masses Mπ ≈ 310, 220, 130 MeV. Excited-state contamination is controlled by using four-state fits to the two-point correlators and three-state fits to the three-point correlators. The calculations of the disconnected diagrams needed to estimate the flavor-diagonal charges are being done on a subset of six ensembles using the stochastic method. Final results are obtained using a simultaneous fit in Mπ², the lattice spacing a and the finite-volume parameter MπL, keeping only the leading-order corrections.
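
    Schematically, such a simultaneous fit is linear least squares in a few low-energy constants. The sketch below uses a made-up leading-order ansatz and synthetic data (not the collaboration's fit form) to illustrate extrapolating a charge in Mπ², a and MπL at once:

```python
import numpy as np

def fit_charge(mpi2, a, mpiL, g):
    """Simultaneous leading-order fit
    g = c0 + c1*Mpi^2 + c2*a + c3*Mpi^2*exp(-Mpi*L); returns (c0, c1, c2, c3).
    c0 evaluated with physical Mpi^2 and a = 0, L -> infinity gives the
    continuum, infinite-volume value."""
    design = np.column_stack([np.ones_like(mpi2), mpi2, a,
                              mpi2 * np.exp(-mpiL)])
    coefficients, *_ = np.linalg.lstsq(design, g, rcond=None)
    return coefficients
```
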

  14. Optical model calculations of heavy-ion target fragmentation

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.

    1986-01-01

    The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmenting of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.

  15. The NNLO QCD soft function for 1-jettiness

    NASA Astrophysics Data System (ADS)

    Campbell, John M.; Ellis, R. Keith; Mondini, Roberto; Williams, Ciaran

    2018-03-01

    We calculate the soft function for the global event variable 1-jettiness at next-to-next-to-leading order (NNLO) in QCD. We focus specifically on the non-Abelian contribution, which, unlike the Abelian part, is not determined by the next-to-leading order result. The calculation uses the known general forms for the emission of one and two soft partons and is performed using a sector-decomposition method that is spelled out in detail. Results are presented in the form of numerical fits to the 1-jettiness soft function for LHC kinematics (as a function of the angle between the incoming beams and the final-state jet) and for generic kinematics (as a function of three independent angles). These fits represent one of the needed ingredients for NNLO calculations that use the N-jettiness event variable to handle infrared singularities.

  16. NMR shifts for polycyclic aromatic hydrocarbons from first-principles

    NASA Astrophysics Data System (ADS)

    Thonhauser, T.; Ceresoli, Davide; Marzari, Nicola

    We present first-principles, density-functional theory calculations of the NMR chemical shifts for polycyclic aromatic hydrocarbons, starting with benzene and increasing in size up to the one- and two-dimensional infinite limits of graphene ribbons and sheets. Our calculations are performed using a combination of the recently developed theory of orbital magnetization in solids and a novel approach to NMR calculations in which chemical shifts are obtained from the derivative of the orbital magnetization with respect to a microscopic, localized magnetic dipole. Using these methods we study on an equal footing the 1H and 13C shifts in benzene, pyrene, and coronene; in naphthalene, anthracene, naphthacene, and pentacene; and finally in graphene, graphite, and an infinite graphene ribbon. Our results show very good agreement with experiment and allow us to characterize the trends in the chemical shifts as a function of system size.

  17. Transfer reaction code with nonlocal interactions

    DOE PAGES

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

    We present a suite of codes (NLAT, for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon-target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second-order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable for deuteron-induced reactions in the range Ed = 10–70 MeV, and provides cross sections with 4% accuracy.

  18. Identification and location of catenary insulator in complex background based on machine vision

    NASA Astrophysics Data System (ADS)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Locating insulators precisely is an important prerequisite for fault detection. Current localization algorithms for insulators in catenary inspection images are not accurate, so a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator lies in a complex environment, SURF features are used to achieve coarse recognition and positioning of the target. Then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving recognition and fine localization of the target. Finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has good recognition efficiency and accuracy, can successfully identify the target, and has definite application value.

  19. Measurement of hydraulic conductivity of unsaturated soils with thermocouple psychrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, D.E.

    1982-11-01

    A method of measuring the hydraulic conductivity of unsaturated soil using the instantaneous profile method, with psychrometric probes to measure water potential, is developed and described. Soil is compacted into cylindrical tubes, and the tubes are sealed and instrumented with thermocouple psychrometers. The soil is moistened or dried from one end of the tube, and the psychrometers are read periodically. Hydraulic conductivity is computed from the psychrometer readings and the appropriate moisture characteristic curve for the soil, and then plotted as a function of water potential, water content, or degree of saturation. Hydraulic conductivities of six soils were measured at water potentials as low as -80 bar. The measured hydraulic conductivities and moisture characteristic curves were used, along with the known boundary flux, in a computer program to calculate the final water content profiles. Computed and measured final water content profiles agreed tolerably well.

  20. Total decay and transition rates from LQCD

    NASA Astrophysics Data System (ADS)

    Hansen, Maxwell T.; Meyer, Harvey B.; Robaina, Daniel

    2018-03-01

    We present a new technique for extracting total transition rates into final states with any number of hadrons from lattice QCD. The method involves constructing a finite-volume Euclidean four-point function whose corresponding infinite-volume spectral function gives access to the decay and transition rates into all allowed final states. The inverse problem of calculating the spectral function is solved via the Backus-Gilbert method, which automatically includes a smoothing procedure. This smoothing is in fact required so that an infinite-volume limit of the spectral function exists. Using a numerical toy example we find that reasonable precision can be achieved with realistic lattice data. In addition, we discuss possible extensions of our approach and, as an example application, prospects for applying the formalism to study the onset of deep-inelastic scattering. More details are given in the published version of this work, Ref. [1].

  1. An empirical model to determine the hadronic resonance contributions \\overline{B}{} ^0 → \\overline{K}{} ^{*0} μ ^+ μ ^- to transitions

    NASA Astrophysics Data System (ADS)

    Blake, T.; Egede, U.; Owen, P.; Petridis, K. A.; Pomery, G.

    2018-06-01

    A method for analysing the hadronic resonance contributions in \\overline{B}{} ^0 → \\overline{K}{} ^{*0} μ ^+ μ ^- decays is presented. This method uses an empirical model that relies on measurements of the branching fractions and polarisation amplitudes of final states involving J^{PC}=1^{-} resonances, relative to the short-distance component, across the full dimuon mass spectrum of \\overline{B}{} ^0 → \\overline{K}{} ^{*0} μ ^+ μ ^- transitions. The model is in good agreement with existing calculations of hadronic non-local effects. The effect of this contribution to the angular observables is presented and it is demonstrated how the narrow resonances in the q^2 spectrum provide a dramatic enhancement to CP-violating effects in the short-distance amplitude. Finally, a study of the hadronic resonance effects on lepton universality ratios, R_{K^{(*)}}, in the presence of new physics is presented.

  2. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    To extract targets from a complex background more quickly and accurately, and to further improve the detection of defects, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. Firstly, the single-threshold selection method based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae for Arimoto entropy dual-threshold selection were calculated by recursion to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, accelerating the search considerably. A large number of experimental results show that, compared with existing segmentation methods such as the multi-threshold segmentation method using maximum Shannon entropy, the two-dimensional Shannon entropy segmentation method, the two-dimensional Tsallis gray entropy segmentation method and the multi-threshold segmentation method using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
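
    The structure of dual-threshold selection can be sketched with an exhaustive search that maximizes the summed Shannon (Kapur) entropies of the three regions; Shannon entropy is used here as a stand-in, since the exact Arimoto-entropy criterion and the recursive speed-up are the paper's contributions:

```python
import math

def shannon_entropy(p):
    """Entropy of a normalized distribution (zero bins contribute nothing)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def dual_threshold(hist):
    """Exhaustively pick (t1, t2) maximizing the summed entropies of the
    three regions [0, t1), [t1, t2), [t2, end) of the gray-level histogram."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    levels = len(p)
    best_score, best_pair = -1.0, (1, 2)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            score = 0.0
            for region in (p[:t1], p[t1:t2], p[t2:]):
                weight = sum(region)
                if weight > 0.0:
                    score += shannon_entropy([pi / weight for pi in region])
            if score > best_score:
                best_score, best_pair = score, (t1, t2)
    return best_pair
```

    The naive search is quadratic in the number of gray levels and recomputes partial sums inside the double loop; the recursion described above removes exactly that redundant work.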

  3. Charge transfer in low-energy collisions of H with He+ and H+ with He in excited states

    NASA Astrophysics Data System (ADS)

    Loreau, J.; Ryabchenko, S.; Muñoz Burgos, J. M.; Vaeck, N.

    2018-04-01

    The charge transfer process in collisions of excited (n = 2, 3) hydrogen atoms with He+ and in collisions of excited helium atoms with H+ is studied theoretically. A combination of a fully quantum-mechanical method and a semi-classical approach is employed to calculate the charge-exchange cross sections at collision energies from 0.1 eV u‑1 up to 1 keV u‑1. These methods are based on accurate ab initio potential energy curves and non-adiabatic couplings for the molecular ion HeH+. Charge transfer can occur either in singlet or in triplet states, and the differences between the singlet and triplet spin manifolds are discussed. The dependence of the cross section on the quantum numbers n and l of the initial state is demonstrated. The isotope effect on the charge transfer cross sections, arising at low collision energy when H is substituted by D or T, is investigated. Rate coefficients are calculated for all isotopes up to 10^6 K. Finally, the impact of the present calculations on models of laboratory plasmas is discussed.

  4. New Method of Calculating a Multiplication by using the Generalized Bernstein-Vazirani Algorithm

    NASA Astrophysics Data System (ADS)

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed

    2018-06-01

    We present a new method for calculating a multiplication more quickly by using the generalized Bernstein-Vazirani algorithm and many parallel quantum systems. Given a set of real values a1, a2, a3, …, aN and a function g: R → {0,1}, we determine the values g(a1), g(a2), g(a3), …, g(aN) simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Next, we treat the result as a number in binary representation: M1 = (g(a1), g(a2), g(a3), …, g(aN)). By using M parallel quantum systems, we obtain M such numbers in binary representation simultaneously. The speed of obtaining the M numbers is shown to outperform the classical case by a factor of M. Finally, we calculate the product M1 × M2 × ⋯ × MM. The speed of obtaining the product is shown to outperform the classical case by a factor of N × M.
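
    For reference, the classical baseline being outperformed can be written directly: evaluate g on every value, read each row of bits as a binary number, and multiply. The oracle g below is an illustrative choice; the most-significant-bit ordering is an assumption:

```python
def classical_product(rows, g):
    """Classical baseline: g is applied to each of the N values in each of the
    M rows one at a time; each row (g(a1), ..., g(aN)) is read as a binary
    number with g(a1) as the most significant bit, and the M numbers are
    multiplied together."""
    product = 1
    for row in rows:
        m = 0
        for a in row:
            m = (m << 1) | g(a)  # one oracle call per value: N*M calls total
        product *= m
    return product
```

    This costs N × M oracle evaluations, which is the count against which the quantum construction's factor-of-N × M speed-up is measured.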

  5. Prediction of soft soil foundation settlement in Guangxi granite area based on fuzzy neural network model

    NASA Astrophysics Data System (ADS)

    Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun

    2018-01-01

    At present, the prediction of soft foundation settlement mostly uses the exponential curve and hyperbola deferred approximation methods, and the correlation of their results is poor. The application of neural networks in this area has had some limitations, and none of the models used in existing cases adopted the TS fuzzy neural network, whose calculation combines the characteristics of fuzzy systems and neural networks in a mutually compatible way. At the same time, the developed and optimized calculation program is convenient for engineering designers. Taking the prediction and analysis of the settlement of gully soft soil in the granite area of the Guangxi Guihe road as an example, a fuzzy neural network model is established and verified to explore its applicability. The TS fuzzy neural network is used to construct the settlement and deformation prediction model, and the corresponding time response function is established to calculate and analyze the settlement of the soft foundation. The results show that the model's short-term settlement predictions are accurate and the final settlement prediction has definite engineering reference value.
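
    The hyperbola deferred approximation mentioned above as the conventional baseline can be sketched in a few lines: settlement is modeled as s(t) = t/(a + b·t), linearized as t/s = a + b·t, and the final settlement is 1/b. The two observation pairs in the test are hypothetical:

```python
def fit_hyperbola(t1, s1, t2, s2):
    """Fit s(t) = t / (a + b*t) through two (time, settlement) observations
    using the linearization t/s = a + b*t; returns (a, b)."""
    b = (t2 / s2 - t1 / s1) / (t2 - t1)
    a = t1 / s1 - b * t1
    return a, b

def final_settlement(b):
    """Predicted ultimate settlement as t -> infinity."""
    return 1.0 / b
```
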

  6. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of [Formula: see text] log units (ranking 15 out of 62 entries), the correlation coefficient (R) was [Formula: see text] (ranking 35), and [Formula: see text] of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.
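
    For the neutral species, the quantity being predicted reduces to a difference of solvation free energies between the two phases; a minimal sketch, with hypothetical free-energy values in kJ/mol:

```python
import math

GAS_CONSTANT = 8.314462618e-3  # kJ/(mol K)

def log10_partition_coefficient(dg_water, dg_cyclohexane, temperature=298.15):
    """log10 of the cyclohexane/water partition coefficient of the neutral
    species from its solvation free energies in each phase (kJ/mol):
    log10 P = (dG_water - dG_cyclohexane) / (ln(10) * R * T)."""
    return (dg_water - dg_cyclohexane) / (
        math.log(10.0) * GAS_CONSTANT * temperature)
```

    A compound solvated 5 kJ/mol more favourably in cyclohexane than in water gets log10 P ≈ +0.88, i.e. it partitions into the cyclohexane phase.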

  7. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling spacing after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh choosing method based on wave characteristics is proposed for the introduced propagation method. With it we can calculate appropriate mesh grids on the target board to get satisfying results, and for a complex initial wave field or propagation through inhomogeneous media we can likewise calculate and set the mesh grid rationally. Finally, comparison with theoretical results shows that the simulation results of the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel numbers; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
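
    A baseline fixed-grid angular spectrum propagator, against which the alterable-grid variant is an extension, can be sketched as follows (sampling parameters in the test are illustrative assumptions):

```python
import numpy as np

def angular_spectrum(u0, pitch, z, wavelength):
    """Propagate a sampled complex field u0 (n x n, sample spacing 'pitch')
    over a distance z with the angular spectrum transfer function; the
    output stays on the same grid, which is the limitation the alterable
    mesh method removes."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)
```

    For a propagating field the transfer function is a pure phase, so the total energy is conserved, and at z = 0 the field is returned unchanged; both properties make convenient sanity checks.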

  8. Reduction of the allotropic transition temperature in nanocrystalline zirconium: Predicted by modified equation of state (MEOS) method and molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Salati, Amin; Mokhtari, Esmail; Panjepour, Masoud; Aryanpour, Gholamreza

    2013-04-01

    The temperature at which polymorphic phase transformation occurs in nanocrystalline (NC) materials is different from that of coarse-grained specimens. This anomaly has been related to the role of grain boundary component in these materials and can be predicted by a dilated crystal model. In this study, based on this model, a modified equation of state (MEOS) method (instead of equation of state, EOS, method) is used to calculate the total Gibbs free energy of each phase (β-Zr or α-Zr) in NC Zr. Thereupon, the change in the total Gibbs free energy for β-Zr to α-Zr phase transformation (ΔGβ→α) via the grain size is calculated by this method. Similar to polymorphic transformation in other NC materials (Fe, Nb, Co, TiO2, Al2O3 and ZnS), it is found that the estimated transformation temperature in NC Zr (β→α) is reduced with decreasing grain size. Finally, a molecular dynamics (MD) simulation is employed to confirm the theoretical results.

  9. Improved numerical methods for infinite spin chains with long-range interactions

    NASA Astrophysics Data System (ADS)

    Nebendahl, V.; Dür, W.

    2013-02-01

    We present several improvements of the infinite matrix product state (iMPS) algorithm for finding ground states of one-dimensional quantum systems with long-range interactions. As a main ingredient, we introduce the superposed multioptimization method, which allows an efficient optimization of exponentially many MPS of different lengths at different sites, all in one step. Here the algorithm becomes protected against position-dependent effects caused by spontaneously broken translational invariance; so far these have been a major obstacle to convergence for the iMPS algorithm when no prior knowledge of the system's translational symmetry was available. Further, we investigate some more general methods to speed up calculations and improve convergence, which may also be of interest in a much broader context. As a more special problem, we also look into translationally invariant states close to an invariance-breaking phase transition and show how to avoid convergence into wrong local minima for such systems. Finally, we apply these methods to polar bosons with long-range interactions. We calculate several detailed Devil's staircases with the corresponding phase diagrams and investigate some supersolid properties.

  10. Solving three-body-breakup problems with outgoing-flux asymptotic conditions

    NASA Astrophysics Data System (ADS)

    Randazzo, J. M.; Buezas, F.; Frapiccini, A. L.; Colavecchia, F. D.; Gasaneo, G.

    2011-11-01

    An analytically solvable three-body collision system (s wave) model is used to test two different theoretical methods. The first is a configuration-interaction expansion of the scattering wave function using a basis set of Generalized Sturmian Functions (GSF) with purely outgoing flux (CISF), introduced recently in A. L. Frapiccini, J. M. Randazzo, G. Gasaneo, and F. D. Colavecchia [J. Phys. B: At. Mol. Opt. Phys. 43, 101001 (2010)]. The second is a finite element method (FEM) calculation performed with a commercial code. Both methods are employed to analyze different ways of modeling the asymptotic behavior of the wave function in finite computational domains. The asymptotes can be simulated very accurately by choosing hyperspherical or rectangular contours with the FEM software. In contrast, the CISF method can be defined either in an infinite domain or within a confined region of space. We found that the hyperspherical (rectangular) FEM calculation and the infinite-domain (confined) CISF evaluation are equivalent. Finally, we apply these models to the Temkin-Poet approach to hydrogen ionization.

  11. The effective local potential method: Implementation for molecules and relation to approximate optimized effective potential techniques

    NASA Astrophysics Data System (ADS)

    Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric

    2007-02-01

    We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.

  12. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into elements by space points, and the mass and external forces of the cable are lumped at these points. The structural form of the cable is described by the positions of the space points at different times, and the equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the equations of motion of the space points are solved by the central difference method with a reasonable time-integration step. The tangential tension of the bearing rope in a test ropeway under moving concentrated loads is calculated and compared with experimental data. The results show that the computed tangential tension of the suspension cable with moving loads is consistent with the experiment. The method has high calculation precision and meets the requirements of engineering application.
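
    The central-difference time stepping named above can be illustrated for a single mass point; the harmonic restoring force in the test is a hypothetical stand-in for the element internal forces:

```python
def central_difference(x0, v0, mass, force, dt, steps):
    """Explicit central-difference scheme
    x_{n+1} = 2*x_n - x_{n-1} + dt^2 * F(x_n) / m.
    The fictitious previous position x_{-1} is seeded from a Taylor expansion
    of the initial conditions."""
    x_prev = x0 - v0 * dt + 0.5 * force(x0) / mass * dt * dt
    x = x0
    history = [x0]
    for _ in range(steps):
        x_next = 2.0 * x - x_prev + dt * dt * force(x) / mass
        x_prev, x = x, x_next
        history.append(x)
    return history
```

    Being explicit, the scheme needs no stiffness matrix assembly or equation solving per step, which is why the VFIFE formulation pairs naturally with it; the price is a conditional stability limit on the time step.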

  13. Effectively parameterizing dissipative particle dynamics using COSMO-SAC: A partition coefficient study

    NASA Astrophysics Data System (ADS)

    Saathoff, Jonathan

    2018-04-01

    Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena for complex mixtures and macromolecules. Methods to quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map predicted activity coefficients derived from COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods to mitigate many of these deleterious effects, including a novel approach to remove errors caused by bonding DPD beads together. Using these methods, logarithmic hexane/water partition coefficients were calculated for 61 molecules. The root-mean-squared error of these calculations with respect to the final mapping procedure was determined to be 0.14, a very low value. Cognizance of the above limitations can greatly enhance the predictive power of DPD.
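
    The link between activity coefficients and partition coefficients rests on a standard thermodynamic relation; a minimal sketch follows. The relation and the solvent molar volumes are textbook values, not taken from this report, and the example activity coefficients are invented.

```python
import math

def log10_partition_coefficient(gamma_inf_water, gamma_inf_hexane,
                                v_water=18.07e-6, v_hexane=130.5e-6):
    # Concentration-based hexane/water partition coefficient from
    # infinite-dilution activity coefficients, assuming mutually
    # immiscible, ideally dilute phases:
    #     K = (gamma_w * V_w) / (gamma_h * V_h)
    # where V_w, V_h are the molar volumes of the solvents (m^3/mol).
    k = (gamma_inf_water * v_water) / (gamma_inf_hexane * v_hexane)
    return math.log10(k)
```

    A strongly hydrophobic solute (large activity coefficient in water, small in hexane) thus yields a large positive log K, as expected.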

  14. Atomistic properties of γ uranium.

    PubMed

    Beeler, Benjamin; Deo, Chaitanya; Baskes, Michael; Okuniewski, Maria

    2012-02-22

    The properties of the body-centered cubic γ phase of uranium (U) are calculated using atomistic simulations. First, a modified embedded-atom method interatomic potential is developed for the high temperature body-centered cubic (γ) phase of U. This phase is stable only at high temperatures and is thus relatively inaccessible to first principles calculations and room temperature experiments. Using this potential, equilibrium volume and elastic constants are calculated at 0 K and found to be in close agreement with previous first principles calculations. Further, the melting point, heat capacity, enthalpy of fusion, thermal expansion and volume change upon melting are calculated and found to be in reasonable agreement with experiment. The low temperature mechanical instability of γ U is correctly predicted and investigated as a function of pressure. The mechanical instability is suppressed at pressures greater than 17.2 GPa. The vacancy formation energy is analyzed as a function of pressure and shows a linear trend, allowing for the calculation of the extrapolated zero pressure vacancy formation energy. Finally, the self-defect formation energy is analyzed as a function of temperature. This is the first atomistic calculation of γ U properties above 0 K with interatomic potentials.

  15. Atomistic properties of γ uranium

    NASA Astrophysics Data System (ADS)

    Beeler, Benjamin; Deo, Chaitanya; Baskes, Michael; Okuniewski, Maria

    2012-02-01

    The properties of the body-centered cubic γ phase of uranium (U) are calculated using atomistic simulations. First, a modified embedded-atom method interatomic potential is developed for the high temperature body-centered cubic (γ) phase of U. This phase is stable only at high temperatures and is thus relatively inaccessible to first principles calculations and room temperature experiments. Using this potential, equilibrium volume and elastic constants are calculated at 0 K and found to be in close agreement with previous first principles calculations. Further, the melting point, heat capacity, enthalpy of fusion, thermal expansion and volume change upon melting are calculated and found to be in reasonable agreement with experiment. The low temperature mechanical instability of γ U is correctly predicted and investigated as a function of pressure. The mechanical instability is suppressed at pressures greater than 17.2 GPa. The vacancy formation energy is analyzed as a function of pressure and shows a linear trend, allowing for the calculation of the extrapolated zero pressure vacancy formation energy. Finally, the self-defect formation energy is analyzed as a function of temperature. This is the first atomistic calculation of γ U properties above 0 K with interatomic potentials.

  16. Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.

    PubMed

    Suk, Heejun

    2016-07-01

    MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner, from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at an old time level. However, the above calculation is an approximate method because it does not involve backward tracking in MMOC and HMOC or allow performing forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of the MT3DMS to deal with the mass transport problems of all flow regimes. © 2016, National Ground Water Association.

  17. A practical guide to value of information analysis.

    PubMed

    Wilson, Edward C F

    2015-02-01

    Value of information analysis is a quantitative method to estimate the return on investment in proposed research projects. It can be used in a number of ways. Funders of research may find it useful to rank projects in terms of the expected return on investment from a variety of competing projects. Alternatively, trialists can use the principles to identify the efficient sample size of a proposed study as an alternative to traditional power calculations, and finally, a value of information analysis can be conducted alongside an economic evaluation as a quantitative adjunct to the 'future research' or 'next steps' section of a study write up. The purpose of this paper is to present a brief introduction to the methods, a step-by-step guide to calculation and a discussion of issues that arise in their application to healthcare decision making. Worked examples are provided in the accompanying online appendices as Microsoft Excel spreadsheets.
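
    The core quantity in such an analysis, the expected value of perfect information (EVPI), is a one-line Monte Carlo computation; a minimal sketch follows, with invented net-benefit distributions for two hypothetical treatment options rather than the paper's worked example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated net benefit of each decision option under parameter
# uncertainty (illustrative normal distributions; columns = options).
nb = np.column_stack([
    rng.normal(1000.0, 400.0, n),  # option A
    rng.normal(1100.0, 600.0, n),  # option B
])

# EVPI = E[max over options] - max over options of E[net benefit]:
# the expected gain from always choosing the best option for each
# realization of the parameters versus committing to one option now.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
```

    EVPI is always non-negative, and per-person EVPI multiplied by the affected population bounds the value of any further research.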

  18. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  19. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  20. Sensorless control of ship propulsion interior permanent magnet synchronous motor based on a new sliding mode observer.

    PubMed

    Ren, Jun-Jie; Liu, Yan-Cheng; Wang, Ning; Liu, Si-Yuan

    2015-01-01

    This paper proposes a sensorless speed control strategy for a ship propulsion interior permanent magnet synchronous motor (IPMSM) based on a new sliding-mode observer (SMO). The SMO uses neither a low-pass filter, nor the arc-tangent calculation of the extended electromotive force (EMF), nor a phase-locked loop (PLL) technique. The calculation of the rotor speed is deduced from a Lyapunov function stability analysis. In order to reduce system chattering, sigmoid functions with switching gains adaptively updated by fuzzy logic systems are innovatively incorporated into the SMO. Finally, simulation results for a 4.088 MW ship propulsion IPMSM and experimental results from a 7.5 kW IPMSM drive are provided to verify the effectiveness of the proposed SMO method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
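
    The chattering-reduction idea, replacing the discontinuous sign function in the observer's switching term with a smooth sigmoid, can be sketched as follows. The gain value here is an arbitrary illustration; in the paper that gain is adapted online by fuzzy logic systems.

```python
import math

def sigmoid_switch(s, gain=2.0):
    # Smooth, bounded replacement for sign(s): it tends to +/-1 for
    # large |s| but is continuous through s = 0, so the control signal
    # no longer jumps between -1 and +1 (the source of chattering).
    return 2.0 / (1.0 + math.exp(-gain * s)) - 1.0
```

    A larger gain makes the function steeper (closer to sign), trading chattering suppression against observer convergence speed.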

  1. Consistent transport coefficients in astrophysics

    NASA Technical Reports Server (NTRS)

    Fontenla, Juan M.; Rovira, M.; Ferrofontan, C.

    1986-01-01

    A consistent theory for dealing with transport phenomena in stellar atmospheres was developed, starting from the kinetic equations and introducing three cases (LTE, partial LTE, and non-LTE). The consistent hydrodynamical equations for partial LTE were presented, the transport coefficients were defined, and a method to calculate them was shown. The method is based on the numerical solution of the kinetic equations considering Landau, Boltzmann, and Fokker-Planck collision terms. Finally, a set of results for the transport coefficients of a partially ionized hydrogen gas with radiation was shown, considering ionization and recombination as well as elastic collisions. The results obtained imply major changes in some types of theoretical model calculations and can resolve some important current problems concerning energy and mass balance in the solar atmosphere. It is shown that the energy balance in the lower solar transition region can be fully explained by means of radiation losses and conductive flux.

  2. Tire-road friction estimation and traction control strategy for motorized electric vehicle.

    PubMed

    Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motorized wheels is proposed based on the adhesion between the tire and the road surface. First and foremost, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the slip ratio and the derivative of the adhesion coefficient. Secondly, a vehicle speed estimation method is also presented. Thirdly, an ideal vehicle simulation model is proposed to verify the algorithm, and the simulations show that the slip ratio tracks the detected adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; at the same time, it improves the acceleration stability of an electric vehicle with a TCS.
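
    The longitudinal slip ratio monitored by such a strategy is a simple kinematic quantity; a minimal sketch under the usual driving-case definition follows (the function name and the example numbers are illustrative, not from the paper).

```python
def slip_ratio(wheel_speed, wheel_radius, vehicle_speed):
    # Driving-case longitudinal slip: (w*r - v) / (w*r).
    # 0 means pure rolling; 1 means the wheel spins with no forward motion.
    wr = wheel_speed * wheel_radius  # rad/s * m -> wheel surface speed, m/s
    if wr <= 0.0:
        return 0.0
    return (wr - vehicle_speed) / wr
```

    The traction controller then regulates drive torque so that this ratio stays near the value where the adhesion coefficient peaks.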

  3. The development and application of CFD technology in mechanical engineering

    NASA Astrophysics Data System (ADS)

    Wei, Yufeng

    2017-12-01

    Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involving fluid flow and heat conduction by numerical computation and graphical display. The complexity of the physical problems that can be simulated and the precision of the numerical solution are directly related to the computer's processing speed and hardware such as memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied in water conservancy engineering, environmental engineering, and industrial engineering. This paper summarizes the development of CFD, its theoretical basis, and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.

  4. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, HOG feature extraction is unsuitable for direct hardware implementation because it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principles are as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the arctangent and square root operations were simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be completed in one pixel period by these computing units.
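
    A per-cell HOG histogram of the kind this pipeline computes can be sketched in a few lines. This is a floating-point reference version; the paper's contribution is precisely to replace the arctangent and square root with FPGA-friendly approximations, which this sketch does not attempt.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    # Central-difference gradients, then magnitude-weighted voting into
    # unsigned-orientation bins (no block normalization in this sketch).
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    magnitude = np.hypot(gx, gy)                     # sqrt(gx^2 + gy^2)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist
```

    A hardware version typically replaces `arctan2` with a comparison against precomputed bin-boundary slopes, avoiding the division and trigonometry entirely.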

  5. Correlation functions in first-order phase transitions

    NASA Astrophysics Data System (ADS)

    Garrido, V.; Crespo, D.

    1997-09-01

    Most of the physical properties of systems underlying first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics is considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (populational Kolmogorov-Johnson-Mehl-Avrami model). Since this model is less restrictive than that used in previously existing theories, the result is that the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.
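
    The nucleation-and-growth kinetics referenced above is the classical KJMA picture; as a point of reference, the standard Avrami transformed-fraction relation can be sketched as follows (the rate constant and exponent are arbitrary illustrations, not values from the paper).

```python
import math

def avrami_fraction(k, n, t):
    # Classical KJMA transformed fraction for nucleation and isotropic
    # growth: X(t) = 1 - exp(-(k*t)**n), with X going from 0 to 1.
    return 1.0 - math.exp(-((k * t) ** n))
```

    The populational model in the paper generalizes this by tracking the full droplet size distribution rather than only the transformed fraction, which is what makes the correlation functions accessible.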

  6. Fuel feasibility study for Red River Army Depot boiler plant. Final report. [Economic breakeven points for conversion to fossil fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ables, L.D.

    This paper establishes economic breakeven points for the conversion to various fossil fuels as a function of time and pollution constraints for the main boiler plant at Red River Army Depot in Texarkana, Texas. In carrying out the objectives of this paper, the author develops what he considers to be the basic conversion costs and operating costs for each fossil fuel under investigation. These costs are analyzed by the use of the present worth comparison method, and the minimum cost difference between the present fuel and the proposed fuel which would justify the conversion to the proposed fuel is calculated. These calculated breakeven points allow a fast and easy method of determining the feasibility of a fuel by merely knowing the relative price difference between the fuels under consideration. (GRA)
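
    The present worth comparison used to locate such breakeven points reduces to discounting each alternative's cash-flow stream to today; a minimal generic sketch follows (the discount rate and cash flows are invented, not the report's data).

```python
def present_worth(cash_flows, rate):
    # Discount a list of end-of-year cash flows (year 0 first) to today:
    # PW = sum over years of CF_t / (1 + r)^t.
    return sum(cf / (1.0 + rate) ** year
               for year, cf in enumerate(cash_flows))
```

    The breakeven fuel-price difference is then the value at which the present worth of staying on the current fuel equals the present worth of the conversion cost plus the cheaper operating costs.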

  7. Tire-road friction estimation and traction control strategy for motorized electric vehicle

    PubMed Central

    Jin, Li-Qiang; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motorized wheels is proposed based on the adhesion between the tire and the road surface. First and foremost, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the slip ratio and the derivative of the adhesion coefficient. Secondly, a vehicle speed estimation method is also presented. Thirdly, an ideal vehicle simulation model is proposed to verify the algorithm, and the simulations show that the slip ratio tracks the detected adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; at the same time, it improves the acceleration stability of an electric vehicle with a TCS. PMID:28662053

  8. Analysis of the influence of advanced materials for aerospace products R&D and manufacturing cost

    NASA Astrophysics Data System (ADS)

    Shen, A. W.; Guo, J. L.; Wang, Z. J.

    2015-12-01

    In this paper, we point out the deficiencies of traditional cost estimation models for aerospace product Research & Development (R&D) and manufacturing, based on an analysis of the widespread use of advanced materials in aviation products. We then propose estimating formulas for cost factors that represent the influence of advanced materials on the labor cost rate and the manufacturing materials cost rate. Value ranges for common advanced materials, such as composite materials and titanium alloys, are presented for both the labor and materials aspects. Finally, we estimate the R&D and manufacturing cost of the F/A-18, F/A-22, B-1B and B-2 aircraft based on the common DAPCA IV model and the modified model proposed in this paper. The calculation results show that calculation precision is greatly improved by the proposed method, which accounts for advanced materials, indicating that the proposed method is scientific and reasonable.

  9. Computational investigations of the band structure, and thermodynamic and optical features of thorium-based oxide ThGeO4 using the full-potential linearized augmented plane-wave plus local orbital approach

    NASA Astrophysics Data System (ADS)

    Chiker, F.; Khachai, H.; Mathieu, C.; Bin-Omran, S.; Kada, Belkacem; Sun, Xiao-Wei; Sandeep; Rai, D. P.; Khenata, R.

    2018-05-01

    In this study, first-principles investigations were performed using the full-potential linearized augmented plane-wave method on the structural and optoelectronic properties of thorium germanate (ThGeO4), a high-K dielectric material. Under ambient conditions, the structural properties calculated for ThGeO4 in the zircon phase were in excellent agreement with the available experimental data. Furthermore, using the modified Becke-Johnson correction method, the calculated band gaps and optical constants accurately described this compound. Finally, the thermal properties were predicted over a temperature range of 0-700 K and pressures up to 11 GPa using the quasi-harmonic Debye model, where the variations in the heat capacity, primitive cell volume, and thermal expansion coefficients were determined successfully.

  10. Testing electronic structure methods for describing intermolecular H...H interactions in supramolecular chemistry.

    PubMed

    Casadesús, Ricard; Moreno, Miquel; González-Lafont, Angels; Lluch, José M; Repasky, Matthew P

    2004-01-15

    In this article, a wide variety of computational approaches (molecular mechanics force fields, semiempirical formalisms, and hybrid methods, namely ONIOM calculations) have been used to calculate the energy and geometry of the supramolecular system 2-(2'-hydroxyphenyl)-4-methyloxazole (HPMO) encapsulated in beta-cyclodextrin (beta-CD). The main objective of the present study has been to examine the performance of these computational methods when describing the short-range H...H intermolecular interactions between guest (HPMO) and host (beta-CD) molecules. The analyzed molecular mechanics methods do not produce unphysical short H...H contacts, but it is obvious that their applicability to the study of supramolecular systems is rather limited. For the semiempirical methods, MNDO is found to generate more reliable geometries than AM1, PM3 and the two recently developed schemes PDDG/MNDO and PDDG/PM3. MNDO results give only one slightly short H...H distance, whereas the NDDO formalisms with modifications of the Core Repulsion Function (CRF) via Gaussians exhibit a large number of short to very short and unphysical H...H intermolecular distances. In contrast, the PM5 method, which is the successor to PM3, gives very promising results. Our ONIOM calculations indicate that the unphysical optimized geometries from PM3 are retained when this semiempirical method is used as the low-level layer in a QM:QM formulation. On the other hand, ab initio methods involving good enough basis sets, at least for the high-level layer in a hybrid ONIOM calculation, behave well, but they may be too expensive in practice for most supramolecular chemistry applications. Finally, the performance of the evaluated computational methods has also been tested by evaluating the energetic difference between the two most stable conformations of the host(beta-CD)-guest(HPMO) system. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 25: 99-105, 2004

  11. [Selection of a SHF-plasma device for carbon dioxide and hydrogen recycling in a physical-chemical life support system].

    PubMed

    Klimarev, S I

    2003-01-01

    A waveguide SHF plasmotron was chosen for carbon dioxide and hydrogen recycling in a low-temperature plasma in the Bosch reactor. To increase the electric field intensity within the discharge capacitor, the thickness of the thin waveguide wall was changed to 10 mm. A method was proposed for calculating the compensated exponential smooth transition that aligns two similar lines (waveguides) with sections of 72 x 34 mm and 72 x 10 mm to transfer SHF energy from the generator to the plasma. The calculation of the smooth transition has been used in the final refinement of the SHF plasmotron design as a component of a physical-chemical LSS.

  12. The Limiting Velocity in Falling from a Great Height

    NASA Technical Reports Server (NTRS)

    Wilson, Edwin Bidwell

    1919-01-01

    The purpose of this report is to give a simple treatment of the problem of calculating the final or limiting velocity of an object falling in vertical motion under gravity in a resisting medium. The equations of motion are easily set up and integrated when the density of the medium is constant and the resistance varies as the square of the velocity. The results show that the fundamental characteristic of vertical motion under gravity in a resisting medium is the approach to a terminal or limiting velocity, whether the initial downward velocity is less than or greater than the limiting velocity. The method can be used to calculate the terminal velocity of a bomb along its trajectory.
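
    The limiting velocity follows from setting the quadratic drag equal to the weight; a minimal sketch of that standard constant-density relation follows (the example numbers are illustrative, not from the report).

```python
import math

G = 9.81          # m/s^2, gravitational acceleration
RHO_AIR = 1.225   # kg/m^3, sea-level air density (assumed constant)

def terminal_velocity(mass, drag_coeff, frontal_area):
    # At the limiting velocity drag balances weight:
    #   0.5 * rho * Cd * A * v^2 = m * g
    # so v = sqrt(2 * m * g / (rho * Cd * A)).
    return math.sqrt(2.0 * mass * G /
                     (RHO_AIR * drag_coeff * frontal_area))
```

    Whatever the initial downward speed, the solution of the full equation of motion relaxes toward this value, which is the report's central observation.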

  13. Research on performance requirements of turbofan engine used on carrier-based UAV

    NASA Astrophysics Data System (ADS)

    Zhao, Shufan; Li, Benwei; Zhang, Wenlong; Wu, Heng; Feng, Tang

    2017-05-01

    According to the mission requirements of a carrier-based unmanned aerial vehicle (UAV), a level-flight model was established to calculate the thrust requirements from an altitude of 9 km to 13 km. Then, a flight-profile estimation method was used to calculate the weight of the UAV in each stage and obtain the specific fuel consumption requirement for the standby stage. The turbofan engine of a carrier-based UAV should meet both the thrust and the specific fuel consumption requirements. Finally, the GSP software was used to verify the results by simulating a small high-bypass turbofan engine. The conclusions are useful for turbofan engine selection for carrier-based UAVs.
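
    In steady level flight the thrust requirement reduces to weight divided by the lift-to-drag ratio; a minimal sketch of that standard relation follows (the example numbers are invented, not the paper's UAV data).

```python
def thrust_required(weight, lift_to_drag):
    # Steady level flight: lift = weight and thrust = drag,
    # so T = D = W / (L/D).
    return weight / lift_to_drag
```

    Repeating this at each altitude and weight along the profile gives the thrust envelope the engine must cover, alongside the specific-fuel-consumption constraint from the standby stage.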

  14. Dangling bond defects in SiC: An ab initio study

    NASA Astrophysics Data System (ADS)

    Tuttle, Blair R.

    2018-01-01

    We report first-principles microscopic calculations of the properties of defects with dangling bonds in crystalline 3C-SiC. Specifically, we focus on hydrogenated Si and C vacancies, divacancies, and multivacancies. The latter is a generic model for an isolated dangling bond within a bulk SiC matrix. Hydrogen serves to passivate electrically active defects to allow the isolation of a single dangling-bond defect. We used hybrid density-functional methods to determine energetics and electrical activity. The present results are compared to previous 3C-SiC calculations and experiments. Finally, we identify homopolar carbon dangling-bond defects as the leakage-causing defects in nanoporous SiC alloys.

  15. SAR image registration based on Susan algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Chun-bo; Fu, Shao-hua; Wei, Zhong-yi

    2011-10-01

    Synthetic Aperture Radar (SAR) is an active remote sensing system that can be installed on aircraft, satellites and other carriers, with the advantages of day-and-night and all-weather operation. How to process SAR data and extract information reasonably and efficiently is an important problem. In particular, SAR image geometric correction is a bottleneck impeding the application of SAR. This paper first introduces image registration and the Susan algorithm, then describes the process of SAR image registration based on the Susan algorithm, and finally presents experimental results of SAR image registration. The experiments show that this method is effective and applicable in terms of both computation time and calculation accuracy.
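
    The Susan (SUSAN) response driving such feature-based registration can be sketched as a USAN-area computation. This is a simplified count-based version with an illustrative brightness threshold; the full detector adds a geometric threshold and non-maximum suppression to pick corners.

```python
import numpy as np

def usan_map(img, radius=3, t=27.0):
    # For every pixel, count the pixels inside a circular mask whose
    # brightness is within t of the centre pixel (the USAN area).
    # Small areas flag corners; intermediate areas flag edges.
    h, w = img.shape
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if 0 < dy * dy + dx * dx <= radius * radius]
    padded = np.pad(img.astype(float), radius, mode="edge")
    centre = padded[radius:radius + h, radius:radius + w]
    usan = np.zeros((h, w))
    for dy, dx in offsets:
        shifted = padded[radius + dy:radius + dy + h,
                         radius + dx:radius + dx + w]
        usan += np.abs(shifted - centre) <= t
    return usan
```

    Matching the detected corners between the two SAR images then provides the tie points for estimating the registration transform.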

  16. Molecular structure, vibrational spectroscopic (FT-IR, FT-Raman), UV-vis spectra, first order hyperpolarizability, NBO analysis, HOMO and LUMO analysis, thermodynamic properties of benzophenone 2,4-dicarboxylic acid by ab initio HF and density functional method

    NASA Astrophysics Data System (ADS)

    Chaitanya, K.

    2012-02-01

    The FT-IR (4000-450 cm-1) and FT-Raman spectra (3500-100 cm-1) of benzophenone 2,4-dicarboxylic acid (2,4-BDA) have been recorded in the condensed state. Density functional theory calculations with the B3LYP/6-31G(d,p) basis set have been used to determine the ground-state molecular geometry (bond lengths and bond angles), harmonic vibrational frequencies, infrared intensities, Raman activities and bonding features of the title compound. The assignments of the vibrational spectra have been carried out with the help of normal co-ordinate analysis (NCA) following the scaled quantum mechanical force field (SQMFF) methodology. The first-order hyperpolarizability (β0) and related properties (β, α0 and Δα) of 2,4-BDA are calculated using the HF/6-31G(d,p) method with the finite-field approach. The stability of the molecule has been analyzed by NBO analysis. The calculated first hyperpolarizability shows that the molecule is attractive for future applications in non-linear optics. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. Mulliken population analysis of atomic charges is also reported. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-vis spectra and electronic absorption properties were explained and illustrated from the frontier molecular orbitals.

  17. Calculation of Physicochemical Properties for Short- and Medium-Chain Chlorinated Paraffins

    NASA Astrophysics Data System (ADS)

    Glüge, Juliane; Bogdal, Christian; Scheringer, Martin; Buser, Andreas M.; Hungerbühler, Konrad

    2013-06-01

    Short- and medium-chain chlorinated paraffins are potential PBT chemicals (persistent, bioaccumulative, toxic) and short-chain chlorinated paraffins are under review for inclusion in the UNEP Stockholm Convention on Persistent Organic Pollutants. Despite their high production volume of more than one million metric tonnes per year, only a few data on their physicochemical properties are available. We calculated the subcooled-liquid vapor pressure, subcooled-liquid solubility in water and octanol, Henry's law constant for water and octanol, as well as the octanol-water partition coefficient with the property calculation methods COSMOtherm, SPARC, and EPI Suite™, and compared the results to experimental data from the literature. For all properties, good or very good agreement between calculated and measured data was obtained for COSMOtherm; results from SPARC were in good agreement with the measured data except for subcooled-liquid water solubility, whereas EPI Suite™ showed the largest discrepancies for all properties. After critical evaluation of the three property calculation methods, a final set of recommended property data for short- and medium-chain chlorinated paraffins was derived. The calculated property data show interesting relationships with chlorine content and carbon chain length. Increasing chlorine content does not cause pronounced changes in water solubility and the octanol-water partition coefficient (KOW) as long as it is below 55%. Increasing carbon chain length leads to strong increases in KOW and corresponding decreases in subcooled-liquid water solubility. The present data set can be used in further studies to assess the environmental fate and human exposure of this relevant compound class.

  18. Accuracy of embedded fragment calculation for evaluating electron interactions in mixed valence magnetic systems: study of 2e-reduced lindqvist polyoxometalates.

    PubMed

    Suaud, Nicolas; López, Xavier; Ben Amor, Nadia; Bandeira, Nuno A G; de Graaf, Coen; Poblet, Josep M

    2015-02-10

    Accurate quantum chemical calculations on real-world magnetic systems are challenging, the inclusion of electron correlation being the bottleneck of such a task. One method proposed to overcome this difficulty is the embedded fragment approach. It tackles a chemical problem by dividing it into small fragments, which are treated in a highly accurate way, surrounded by an embedding included at an approximate level. For the vast family of medium-to-large sized polyoxometalates, two-electron-reduced systems are habitual and their magnetic properties are interesting. In this paper, we aim at assessing the quality of embedded fragment calculations by checking their ability to reproduce the electronic spectra of a complete system, here the mixed-metal series [MoxW6-xO19](4-) (x = 0-6). The microscopic parameters extracted from fragment calculations (electron hopping, intersite electrostatic repulsion, local orbital energy, etc.) have been used to reproduce the spectra through model Hamiltonian calculations. These energies are compared to the results of the highly accurate ab initio difference dedicated configuration interaction (DDCI) method on the complete system. In general, the model Hamiltonian calculations using parameters extracted from embedded fragments nearly exactly reproduce the DDCI spectra. This is quite an important result since it can be generalized to any inorganic magnetic system. Finally, the occurrence of singlet or triplet ground states in the series of molecules studied is rationalized in terms of the interplay of the parameters extracted.

  19. Measuring the volume of brain tumour and determining its location in T2-weighted MRI images using hidden Markov random field: expectation maximization algorithm

    NASA Astrophysics Data System (ADS)

    Mat Jafri, Mohd. Zubir; Abdulbaqi, Hayder Saad; Mutter, Kussay N.; Mustapha, Iskandar Shahrim; Omar, Ahmad Fairuz

    2017-06-01

    A brain tumour is an abnormal growth of tissue in the brain. Most tumour volume measurement processes are carried out manually by the radiographer and radiologist without relying on any automated program. This manual method is a time-consuming task and may give inaccurate results. Treatment, diagnosis, and signs and symptoms of brain tumours mainly depend on the tumour volume and its location. In this paper, an approach is proposed to improve volume measurement of brain tumours, together with a new method to determine the brain tumour location. The current study presents a hybrid method that includes two methods. One method is hidden Markov random field-expectation maximization (HMRFEM), which provides an initial classification of the image. The other method employs thresholding, which enables the final segmentation. In this method, the tumour volume is calculated using voxel dimension measurements. The brain tumour location was determined accurately in T2-weighted MRI images using a new algorithm. According to the results, this approach proved more useful than the manual method. Thus, it provides the possibility of calculating the volume and determining the location of a brain tumour.
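    The voxel-based volume step lends itself to a compact illustration. A minimal sketch (function names and the toy mask are ours, not the authors' code), assuming a binary segmentation mask and known voxel dimensions:

```python
import numpy as np

def tumour_volume_mm3(mask, voxel_dims_mm):
    """Volume from a binary segmentation: voxel count x single-voxel volume."""
    return float(np.count_nonzero(mask)) * float(np.prod(voxel_dims_mm))

def tumour_centroid_mm(mask, voxel_dims_mm):
    """Approximate tumour location: centroid of the mask voxels, in mm."""
    return np.argwhere(mask).mean(axis=0) * np.asarray(voxel_dims_mm, float)

# Toy example: a 4x4x4 block of 'tumour' voxels, 1.0 x 1.0 x 1.2 mm voxels.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:14, 20:24, 30:34] = True
vol = tumour_volume_mm3(mask, (1.0, 1.0, 1.2))     # 64 voxels x 1.2 mm^3
centre = tumour_centroid_mm(mask, (1.0, 1.0, 1.2))
```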

  20. Molcas 8: New capabilities for multiconfigurational quantum chemical calculations across the periodic table.

    PubMed

    Aquilante, Francesco; Autschbach, Jochen; Carlson, Rebecca K; Chibotaru, Liviu F; Delcey, Mickaël G; De Vico, Luca; Fdez Galván, Ignacio; Ferré, Nicolas; Frutos, Luis Manuel; Gagliardi, Laura; Garavelli, Marco; Giussani, Angelo; Hoyer, Chad E; Li Manni, Giovanni; Lischka, Hans; Ma, Dongxia; Malmqvist, Per Åke; Müller, Thomas; Nenov, Artur; Olivucci, Massimo; Pedersen, Thomas Bondo; Peng, Daoling; Plasser, Felix; Pritchard, Ben; Reiher, Markus; Rivalta, Ivan; Schapiro, Igor; Segarra-Martí, Javier; Stenrup, Michael; Truhlar, Donald G; Ungur, Liviu; Valentini, Alessio; Vancoillie, Steven; Veryazov, Valera; Vysotskiy, Victor P; Weingart, Oliver; Zapata, Felipe; Lindh, Roland

    2016-02-15

    In this report, we summarize and describe the recent unique updates and additions to the Molcas quantum chemistry program suite as contained in release version 8. These updates include natural and spin orbitals for studies of magnetic properties, local and linear scaling methods for the Douglas-Kroll-Hess transformation, the generalized active space concept in MCSCF methods, a combination of multiconfigurational wave functions with density functional theory in the MC-PDFT method, additional methods for computation of magnetic properties, methods for diabatization, analytical gradients of state average complete active space SCF in association with density fitting, methods for constrained fragment optimization, large-scale parallel multireference configuration interaction including analytic gradients via the interface to the Columbus package, and approximations of the CASPT2 method to be used for computations of large systems. In addition, the report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm. Further, a module to run molecular dynamics simulations is added, two surface hopping algorithms are included to enable nonadiabatic calculations, and the DQ method for diabatization is added. Finally, we report on improvements with respect to alternative file options and parallelization. © 2015 Wiley Periodicals, Inc.

  1. Automatic lumbar spine measurement in CT images

    NASA Astrophysics Data System (ADS)

    Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun

    2017-03-01

    Accurate lumbar spine measurement in CT images provides an essential way for quantitative analysis of spinal diseases such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are manually performed by radiologists and surgeons, which is time consuming and irreproducible. Therefore, an automatic and accurate lumbar spine measurement algorithm is highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages of the proposed method: First, a learning-based spine labeling method, which integrates both the image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multi-atlas image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method can achieve more than 90% success rates across all the measurements. Our method also significantly improves the measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables large-scale data analytics for scientific and clinical research.
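    The abstract does not list the five measurements. As one plausible example of a measurement derivable from per-vertebra segmentations, a Taillard-style slip percentage for spondylolisthesis can be computed from two distances (the function and its inputs are illustrative, not the authors' definitions):

```python
def slip_percentage(displacement_mm, inferior_vertebra_width_mm):
    """Taillard-style spondylolisthesis measurement: anterior displacement
    of the upper vertebra expressed as a percentage of the anteroposterior
    width of the vertebra below. Both distances would come from the
    per-vertebra segmentations (hypothetical inputs here)."""
    return 100.0 * displacement_mm / inferior_vertebra_width_mm
```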

  2. Navigation in GPS Denied Environments: Feature-Aided Inertial Systems

    DTIC Science & Technology

    2010-03-01

    [21] Brown, R. G. and Hwang, P. Y. C., Introduction to Random Signals and Applied Kalman Filtering, Third Edition, John Wiley & Sons, Inc., New York ... knowledge is provided in [17]. An online (extended Kalman filter-based) method for calculating a trajectory by tracking features at an unknown location on ... Finally, the trajectory error is estimated using these associated features in a Kalman estimator. The next couple of paragraphs will explain these

  3. Methods for calculating the lift force of a flown-around curved profile

    NASA Astrophysics Data System (ADS)

    Uher, Jan

    2017-09-01

    This article explains the fundamental origins of the lift force on a curved profile located in a flow. There is a discussion of the most popular, yet misleading, explanation of the lift force. Several approaches are applied to evaluate the lift force, such as the change in momentum, the Euler equation in the direction normal to the streamlines, and more advanced CFD computation. Finally, there is a summary of knowledge applicable to turbine blade design.

  4. Extracting survival parameters from isothermal, isobaric, and "iso-concentration" inactivation experiments by the "3 end points method".

    PubMed

    Corradini, M G; Normand, M D; Newcomer, C; Schaffner, D W; Peleg, M

    2009-01-01

    Theoretically, if an organism's resistance can be characterized by 3 survival parameters, they can be found by solving 3 simultaneous equations that relate the final survival ratio to the lethal agent's intensity. (For 2 resistance parameters, 2 equations will suffice.) In practice, the inevitable experimental scatter would distort the results of such a calculation or render the method unworkable. Averaging the results obtained with more than 3 final survival ratio triplet combinations, determined in four or more treatments, can remove this impediment. This can be confirmed by the ability of a kinetic inactivation model derived from the averaged parameters to predict survival patterns under conditions not employed in their determination, as demonstrated with published isothermal survival data of Clostridium botulinum spores, isobaric data of Escherichia coli under HPP, and Pseudomonas exposed to hydrogen peroxide. Both the method and the underlying assumption that the inactivation followed a Weibull-Log logistic (WeLL) kinetics were confirmed in this way, indicating that when an appropriate survival model is available, it is possible to predict the entire inactivation curves from several experimental final survival ratios alone. Where applicable, the method could simplify the experimental procedure and lower the cost of microbial resistance determinations. In principle, the methodology can be extended to deteriorative chemical reactions if they too can be characterized by 2 or 3 kinetic parameters.
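    The idea is easiest to see in the two-parameter case. A sketch assuming Weibullian decay, log10 S(t) = −b·t^n (our notation): two final survival ratios at two exposure times determine b and n in closed form; with three parameters the analogous three simultaneous equations would be solved numerically.

```python
import math

def weibull_from_two_endpoints(t1, log_s1, t2, log_s2):
    """Recover b and n of log10 S(t) = -b * t**n from two final survival
    ratios (log_s1 at time t1, log_s2 at time t2) measured at the same
    lethal-agent intensity. Model and names are illustrative."""
    y1, y2 = -log_s1, -log_s2          # y = -log10 S > 0
    n = math.log(y1 / y2) / math.log(t1 / t2)
    b = y1 / t1 ** n
    return b, n

# Round trip with made-up parameters b = 0.1, n = 1.5:
b, n = weibull_from_two_endpoints(10.0, -0.1 * 10.0 ** 1.5,
                                  20.0, -0.1 * 20.0 ** 1.5)
```

    Averaging such solutions over many endpoint combinations, as the paper describes, damps the effect of experimental scatter on the recovered parameters.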

  5. A practical approach to calculate the time evolutions of magnetic field effects on photochemical reactions in nano-structured materials.

    PubMed

    Yago, Tomoaki; Wakasa, Masanobu

    2015-04-21

    A practical method to calculate time evolutions of magnetic field effects (MFEs) on photochemical reactions involving radical pairs is developed on the basis of the theory of chemically induced dynamic spin polarization proposed by Pedersen and Freed. In this theory, the stochastic Liouville equation (SLE), including the spin Hamiltonian, diffusive motion of the radical pair, chemical reactions, and spin relaxations, is solved by using the Laplace and inverse Laplace transformation technique. In our practical approach, time evolutions of the MFEs are successfully calculated by applying the Miller-Guy method instead of the final value theorem to the inverse Laplace transformation process. In particular, the SLE calculations are completed in a short time when the radical pair dynamics can be described by chemical kinetics consisting of diffusion, reactions, and spin relaxations. An SLE analysis with a short calculation time enables one to examine various parameter sets for fitting the experimental data. Our study demonstrates that simultaneous fitting of the time evolution of the MFE and of the magnetic field dependence of the MFE provides valuable information on the diffusive motion of radical pairs in nano-structured materials such as micelles, where the lifetimes of radical pairs are longer than hundreds of nanoseconds and the magnetic field dependence of the spin relaxations plays a major role in the generation of the MFE.

  6. Validation d'un nouveau calcul de référence en évolution pour les réacteurs thermiques [Validation of a new reference depletion calculation for thermal reactors]

    NASA Astrophysics Data System (ADS)

    Canbakan, Axel

    Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Although their aim is to correct the cross sections, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase of computing capacities, this method starts to show its limits in precision and can be replaced by a subgroup method. Originally used for fast-reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to provide a validation as precise as possible, first without burnup and then with an isotopic depletion study, of the subgroup method. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modelling choices. Moreover, other parameters are validated to suggest a new reference scheme with fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method Of Characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energy discretization for a subgroup method. Finally, other techniques, such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation, are studied. A DRAGON5 scheme is also validated as it offers interesting elements: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first is to offer DRAGON5 users a new reference lattice scheme for Pressurized Water Reactors.
The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and using a flux calculation based on MOC in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in each geometrical region gives better results than MOC with a constant-source model, and a moderator discretization is compulsory; (3) a P3 collision law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.

  7. Total protein measurement in canine cerebrospinal fluid: agreement between a turbidimetric assay and 2 dye-binding methods and determination of reference intervals using an indirect a posteriori method.

    PubMed

    Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H

    2014-03-01

    In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (the Pyrogallol Red-Molybdate [PRM] method and the Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26-0.27]). The CBB method generally showed higher total protein values than the turbidimetric assay and the PRM assay (mean bias -0.14 g/L for both). From 90 of the 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantiles) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L [90% CI: 0.07-0.08/0.33-0.39]; CBB method: 0.17-0.55 g/L [90% CI: 0.16-0.18/0.52-0.61]). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish RIs that are laboratory-specific. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
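    The nonparametric interval itself is just the central 95% of the presumed-healthy observations. A minimal sketch (our own function, not the study's statistics package):

```python
import numpy as np

def nonparametric_reference_interval(values, low=2.5, high=97.5):
    """Nonparametric reference interval: the 2.5% and 97.5% quantiles of
    the observed values, as used in the study (default percentiles)."""
    v = np.asarray(values, dtype=float)
    return float(np.percentile(v, low)), float(np.percentile(v, high))
```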

  8. On the Hosoya index of a family of deterministic recursive trees

    NASA Astrophysics Data System (ADS)

    Chen, Xufeng; Zhang, Jingyuan; Sun, Weigang

    2017-01-01

    In this paper, we calculate the Hosoya index in a family of deterministic recursive trees with a special feature that includes new nodes which are connected to existing nodes with a certain rule. We then obtain a recursive solution of the Hosoya index based on the operations of a determinant. The computational complexity of our proposed algorithm is O(log2 n) with n being the network size, which is lower than that of the existing numerical methods. Finally, we give a weighted tree shrinking method as a graphical interpretation of the recurrence formula for the Hosoya index.
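    For context, the Hosoya index of a tree (the number of matchings, including the empty one) can be computed by a standard matching-count dynamic program; the sketch below is that textbook DP, not the authors' determinant-based recursion:

```python
import sys
sys.setrecursionlimit(100000)

def hosoya_index(tree, root=0):
    """Hosoya index (number of matchings, including the empty matching)
    of a tree given as an adjacency dict {node: [neighbours]}."""
    def dp(v, parent):
        # total: matchings of the subtree rooted at v
        # free:  those matchings that leave v unmatched
        totals, frees = [], []
        for c in tree[v]:
            if c != parent:
                t, f = dp(c, v)
                totals.append(t)
                frees.append(f)
        free = 1
        for t in totals:
            free *= t
        total = free
        for i, f in enumerate(frees):
            # use edge (v, child i): that child must itself be unmatched
            contrib = f
            for j, t in enumerate(totals):
                if j != i:
                    contrib *= t
            total += contrib
        return total, free
    return dp(root, None)[0]
```

    On a path with n vertices this reproduces the Fibonacci numbers, a classical sanity check.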

  9. Landslide risk assessment

    USGS Publications Warehouse

    Lessing, P.; Messina, C.P.; Fonner, R.F.

    1983-01-01

    Landslide risk can be assessed by evaluating geological conditions associated with past events. A sample of 2,416 slides from urban areas in West Virginia, each with 12 associated geological factors, has been analyzed using SAS computer methods. In addition, selected data have been normalized to account for areal distribution of rock formations, soil series, and slope percents. Final calculations yield landslide risk assessments of 1.50 = high risk. The simplicity of the method provides for a rapid, initial assessment prior to financial investment. However, it does not replace on-site investigations, nor excuse poor construction. © 1983 Springer-Verlag New York Inc.

  10. Model mismatch analysis and compensation for modal phase measuring deflectometry

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2017-01-11

    The correspondence residuals due to the discrepancy between the reality and the shape model in use are analyzed for the modal phase measuring deflectometry. Slope residuals are calculated from these discrepancies between the modal estimation and practical acquisition. Since the shape mismatch mainly occurs locally, zonal integration methods which are good at dealing with local variations are used to reconstruct the height residual for compensation. Finally, results of both simulation and experiment indicate the proposed height compensation method is effective, which can be used as a post-complement for the modal phase measuring deflectometry.
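    In one dimension, the zonal reconstruction step reduces to cumulative integration of the slope residuals into height residuals; a minimal sketch (our own simplification — the paper works on 2-D slope fields):

```python
def integrate_slope_residuals(x, slope_res):
    """Zonal reconstruction in 1-D: cumulative trapezoidal integration of
    slope residuals into height residuals (height pinned to 0 at x[0]).
    Inputs are sample positions and the slope residual at each position."""
    h = [0.0]
    for i in range(len(x) - 1):
        h.append(h[-1] + (slope_res[i] + slope_res[i + 1])
                 * (x[i + 1] - x[i]) / 2.0)
    return h
```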

  11. Comparison of Analysis, Simulation, and Measurement of Wire-to-Wire Crosstalk. Part 2

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur T.; Yavoich, Brian James; Hodson, Shane M.; Godley, Franklin

    2010-01-01

    In this investigation, we compare crosstalk analysis, simulation, and measurement results for electrically short configurations. Methods include hand calculations, PSPICE simulations, Microstripes transient field solver, and empirical measurement. In total, four representative physical configurations are examined, including a single wire over a ground plane, a twisted pair over a ground plane, generator plus receptor wires inside a cylindrical conduit, and a single receptor wire inside a cylindrical conduit. Part 1 addresses the first two cases, and Part 2 addresses the final two. Agreement between the analysis methods and test data is shown to be very good.

  12. Charge-transfer excited states: Seeking a balanced and efficient wave function ansatz in variational Monte Carlo

    DOE PAGES

    Blunt, Nick S.; Neuscamman, Eric

    2017-11-16

    We present a simple and efficient wave function ansatz for the treatment of excited charge-transfer states in real-space quantum Monte Carlo methods. Using the recently-introduced variation-after-response method, this ansatz allows a crucial orbital optimization step to be performed beyond a configuration interaction singles expansion, while only requiring calculation of two Slater determinant objects. As a result, we demonstrate this ansatz for the illustrative example of the stretched LiF molecule, for a range of excited states of formaldehyde, and finally for the more challenging ethylene-tetrafluoroethylene molecule.

  13. Final report on APMP.RF-S21.F

    NASA Astrophysics Data System (ADS)

    Ishii, Masanori; Kim, Jeong Hwan; Ji, Yu; Cho, Chi Hyun; Zhang, Tim

    2018-01-01

    The supplementary comparison report APMP.RF-S21.F describes the comparison of loop antennas, which was conducted between April 2013 and January 2014. The two comparison artefacts were well-characterised active loop antennas of diameter 30 cm and 60 cm respectively, which typically operate in a frequency range from 9 kHz to 30 MHz. These antennas represent the main groups of antennas which are used around the world for EMC measurements in the frequency range below 30 MHz. There are several well-known methods for calibrating the antenna factor of these devices. The calibration systems used in this comparison for the loop antennas employed the standard magnetic field method or the three-antenna method. Despite the limitations of the algorithm, which we used to derive the reference value for each case (particularly for small samples), the actual calculated reference values seem to be reasonable. As a result, the agreement between each participant was very good in all cases. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  14. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faddegon, B.A.; Villarreal-Barajas, J.E.; Mt. Diablo Regional Cancer Center, 2450 East Street, Concord, California

    2005-11-15

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10×10, 2.5×2.5, and 2×8 cm² inserts. Dose was calculated to 0.5% precision in 0.4×0.4×0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts.
Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum of 5.6% at 21 MeV. Contributions from the collimator effect were largest for the large field size, high beam energy, and shallow depths, reaching a maximum of 4.7% at 21 MeV. Both the shielding contributions and the collimator effect need to be taken into account to achieve an accuracy of 2%. FAST takes explicit account of the shielding contributions. With the collimator effect set to that of the largest field in the FAST calculation, the difference in dose on the central axis (product of ROF and PDD) between FAST and full simulation was generally under 2%. The maximum difference of 2.5% exceeded the statistical precision of the calculation by four standard deviations. This occurred at 18 MeV for the 2.5×2.5 cm² field. The differences are due to the method used to account for the collimator effect.
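    The superposition step itself is simple once the two extreme data sets are precomputed. A schematic sketch (our own array layout, not the authors' code), assuming dose contributions tabulated per aperture-plane pixel:

```python
import numpy as np

def fast_superposition(dose_open, dose_shielded, aperture_open):
    """FAST-style superposition sketch: dose tables are differential in
    aperture-plane pixel (rows) and depth (columns). Pick the fully-open
    table where the aperture is open and the fully-shielded table
    elsewhere, then sum over aperture pixels to get dose vs. depth."""
    pick = np.where(aperture_open[:, None], dose_open, dose_shielded)
    return pick.sum(axis=0)
```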

  15. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous energy nuclear data has been investigated.

  16. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
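    The Gibbs-minimization idea can be illustrated on a deliberately tiny problem. The sketch below (our own toy, not TEA's Lagrangian scheme, and deliberately dependency-free) minimizes G/RT for two ideal-gas isomers A ⇌ B subject to atom conservation, for which the answer is known analytically (n_B/n_A = exp(g_A − g_B)):

```python
import math

def gibbs_minimize_isomers(g_A, g_B, tol=1e-12):
    """Minimize G/RT = nA*(g_A + ln nA) + nB*(g_B + ln nB) subject to
    nA + nB = 1 (one mole of the shared element), via ternary search on
    the convex one-dimensional objective. g_A, g_B are dimensionless
    standard chemical potentials (hypothetical inputs)."""
    def G(x):  # x = nA, so nB = 1 - x
        return x * (g_A + math.log(x)) + (1 - x) * (g_B + math.log(1 - x))
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if G(m1) < G(m2):
            hi = m2
        else:
            lo = m1
    x = (lo + hi) / 2
    return x, 1 - x
```

    TEA's actual scheme handles many species and elements simultaneously via Lagrange multipliers; this one-dimensional toy only shows why the free-energy minimum reproduces the equilibrium ratio.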

  17. TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  18. Calculation of the final energy demand for the Federal Republic of Germany with the simulation model MEDEE-2

    NASA Astrophysics Data System (ADS)

    Loeffler, U.; Weible, H.

    1981-08-01

    The final energy demand for the Federal Republic of Germany was calculated. The MEDEE-2 model describes the final energy consumption of the domestic, service-industry, and transportation sectors of a given region as a function of an assumed distribution of production across individual industrial sectors, of specific energy values, and of population development. The input data, consisting of constants and variables, and the procedure by which the projections for the input data of the individual sectors are performed are discussed. The results of the calculations are presented and compared. The sensitivity of individual results to variations of the input values is analyzed.
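    The accounting structure described above can be sketched in a few lines. The sectors, activity levels, and intensities below are illustrative placeholders, not MEDEE-2's actual equations or data:

```python
def final_energy_demand(sectors):
    """Bottom-up final-energy accounting in the spirit of MEDEE-2
    (hypothetical structure): each sector contributes an activity level
    multiplied by a specific energy-intensity value."""
    return sum(s["activity"] * s["intensity"] for s in sectors)

sectors = [
    {"name": "domestic",       "activity": 30e6,  "intensity": 25.0},    # dwellings x GJ/dwelling
    {"name": "services",       "activity": 1.2e9, "intensity": 0.6},     # m^2 x GJ/m^2
    {"name": "transportation", "activity": 5e11,  "intensity": 1.5e-3},  # pkm x GJ/pkm
]
total_GJ = final_energy_demand(sectors)
```

    A sensitivity analysis of the kind reported amounts to rerunning this sum with perturbed activity or intensity inputs.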

  19. The mold integration method for the calculation of the crystal-fluid interfacial free energy from simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinosa, J. R.; Vega, C.; Sanz, E.

    2014-10-07

    The interfacial free energy between a crystal and a fluid, γ_cf, is a highly relevant parameter in phenomena such as wetting or crystal nucleation and growth. Due to the difficulty of measuring γ_cf experimentally, computer simulations are often used to study the crystal-fluid interface. Here, we present a novel simulation methodology for the calculation of γ_cf. The methodology consists of using a mold composed of potential energy wells to induce the formation of a crystal slab in the fluid at coexistence conditions. This induction is done along a reversible pathway, along which the free energy difference between the initial and the final states is obtained by means of thermodynamic integration. The structure of the mold is given by that of the crystal lattice planes, which makes it easy to obtain the free energy for different crystal orientations. The method is validated by calculating γ_cf for previously studied systems, namely, the hard-sphere and the Lennard-Jones systems. Our results for the latter show that the method is accurate enough to deal with the anisotropy of γ_cf with respect to the crystal orientation. We also calculate γ_cf for a recently proposed continuous version of the hard-sphere potential and obtain the same γ_cf as for the pure hard-sphere system. The method can be implemented both in Monte Carlo and Molecular Dynamics. In fact, we show that it can be easily used in combination with the popular Molecular Dynamics package GROMACS.
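    The thermodynamic-integration step along the mold pathway is, numerically, a one-dimensional quadrature. A minimal sketch (our own function; the λ grid and ensemble averages are hypothetical placeholders for simulation output):

```python
def free_energy_difference(lambdas, mean_dU_dlambda):
    """Thermodynamic integration along a coupling parameter λ:
    ΔF ≈ ∫ <∂U/∂λ> dλ, evaluated here by the trapezoidal rule over the
    sampled λ points (ensemble averages measured at each λ)."""
    return sum((mean_dU_dlambda[i] + mean_dU_dlambda[i + 1])
               * (lambdas[i + 1] - lambdas[i]) / 2.0
               for i in range(len(lambdas) - 1))
```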

  20. Can the accuracy of multifocal intraocular lens power calculation be improved to make patients spectacle free?

    PubMed

    Ramji, Hasnain; Moore, Johnny; Moore, C B Tara; Shah, Sunil

    2016-04-01

    To optimise intraocular lens (IOL) power calculation techniques for a segmental multifocal IOL, the LENTIS™ MPlus(®) (Oculentis GmbH, Berlin, Germany), and assess outcomes. A retrospective consecutive non-randomised case series of patients receiving the MPlus(®) IOL following cataract surgery or clear lens extraction was performed at a privately owned ophthalmic hospital, Midland Eye, Solihull, UK. Analysis was undertaken of 116 eyes with uncomplicated lens replacement surgery using LENTIS™ MPlus(®) lenses. Pre-operative biometry data were stratified into short (<22.00 mm) and long axial lengths (ALs) (≥22.00 mm). IOL power predictions were calculated with the SRK/T, Holladay I, Hoffer Q, Holladay II and Haigis formulae and compared to the final manifest refraction. These were also compared with the OKULIX ray tracing method and the stratification technique suggested by the Royal College of Ophthalmologists (RCOphth). Using SRK/T for long eyes and Hoffer Q for short eyes, 64% achieved postoperative subjective refractions of ≤±0.25 D, 83% ≤±0.50 D and 93% ≤±0.75 D, with a maximum predictive error of 1.25 D. No specific calculation method performed best across all ALs; however, for ALs under 22 mm the Hoffer Q and Holladay I methods performed best. Excellent but equivalent overall refractive results were found between all biometry methods used in this multifocal IOL study. For eyes with ALs under 22 mm, Hoffer Q and Holladay I performed best. Current techniques mean that patients are still likely to need top-up glasses for certain situations. Copyright © 2015 Elsevier Ltd. All rights reserved.
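    The stratification rule used in the study is a one-line decision. The sketch below pairs it with the original SRK regression formula as a simple stand-in (the SRK/T, Hoffer Q, and Holladay formulas actually compared are considerably more involved; constants here are illustrative):

```python
def choose_formula(axial_length_mm):
    """Stratification used in the study: Hoffer Q for short eyes
    (< 22.00 mm), SRK/T otherwise."""
    return "Hoffer Q" if axial_length_mm < 22.0 else "SRK/T"

def srk_power(a_constant, axial_length_mm, mean_k_diopters):
    """Original SRK regression, P = A - 2.5*AL - 0.9*K, shown only as a
    simple illustration of how biometry feeds an IOL power prediction."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters
```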

  1. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regnier, D.; Dubray, N.; Verriere, M.

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX, which solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).

  2. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE PAGES

    Regnier, D.; Dubray, N.; Verriere, M.; ...

    2017-12-20

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX, which solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
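
    Feature (iv), the explicit Krylov approximation of the time propagator, can be illustrated with a small Lanczos-based sketch for a Hermitian Hamiltonian. This is the generic textbook construction, not FELIX's actual implementation; names, dimensions, and tolerances are illustrative.

```python
import numpy as np

def krylov_propagate(H, psi, dt, m=20):
    """Approximate exp(-i*H*dt) @ psi in an m-dimensional Krylov space
    built by the Lanczos recursion (H must be Hermitian)."""
    norm0 = np.linalg.norm(psi)
    V = [psi / norm0]                 # orthonormal Krylov basis vectors
    alphas, betas = [], []            # tridiagonal projection of H
    for j in range(m):
        w = H @ V[j]
        a = np.real(np.vdot(V[j], w))
        alphas.append(a)
        w = w - a * V[j]
        if j > 0:
            w = w - betas[-1] * V[j - 1]
        b = np.linalg.norm(w)
        if b < 1e-12 or j == m - 1:   # breakdown = exact invariant subspace
            break
        betas.append(b)
        V.append(w / b)
    k = len(alphas)
    T = np.diag(alphas)
    if betas:
        T = T + np.diag(betas, 1) + np.diag(betas, -1)
    evals, U = np.linalg.eigh(T)
    # exp(-i*T*dt) applied to the first unit vector, mapped back to full space
    coef = U @ (np.exp(-1j * evals * dt) * np.conj(U[0, :]))
    return norm0 * (np.column_stack(V[:k]) @ coef)
```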

  3. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks.

    PubMed

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei; Song, Houbing

    2017-07-12

    Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, and this paper centers on this topic by putting forward three relevant methods and models. Firstly, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Then, most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. In addition, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs is presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in the networking and performance calculation of UWSNs.

  4. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in generating a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, demonstrating the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50 000 and even larger scales in flat and mountainous areas.

  5. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks

    PubMed Central

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Ma, Xuefei

    2017-01-01

    Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to the work dynamics and harsh ocean environment, how to obtain a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, and this paper centers on this topic by putting forward three relevant methods and models. Firstly, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Then, most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quar-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. In addition, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely, constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs is presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in the networking and performance calculation of UWSNs. PMID:28704959

  6. HTL resummation in the light cone gauge

    NASA Astrophysics Data System (ADS)

    Chen, Qi; Hou, De-fu

    2018-04-01

    The light cone gauge with light cone variables is often used in pQCD calculations in relativistic heavy-ion collision physics. The Hard Thermal Loops (HTL) resummation is an indispensable technique for hot QCD calculations. It was developed in covariant gauges with conventional Minkowski variables; we extend this method to the light cone gauge. In the real time formalism, using the Mandelstam-Leibbrandt prescription for (n·K)⁻¹, we calculate the transverse and longitudinal components of the gluon HTL self-energy, and prove that there are no infrared divergences. With this HTL self-energy, we derive the HTL resummed gluon propagator in the light cone gauge. We also calculate the quark HTL self-energy and the resummed quark propagator in the light cone gauge and find them gauge independent. As application examples, we analytically calculate the damping rates of hard quarks and gluons with the HTL resummed gluon propagator in the light cone gauge and show that they are gauge independent. The final physical results are identical to those computed in covariant gauge, as they should be. Supported by National Natural Science Foundation of China (11375070, 11735007, 11521064)
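
    The Mandelstam-Leibbrandt prescription referred to in the abstract regularizes the spurious poles of the light cone gauge propagator. In a common convention (with $\bar{n}$ the dual light-like vector, $n^2 = \bar{n}^2 = 0$; normalizations of $n\cdot\bar{n}$ vary between references):

```latex
\frac{1}{n\cdot K}\;\longrightarrow\;
\lim_{\varepsilon\to 0^{+}}
\frac{\bar{n}\cdot K}{(n\cdot K)(\bar{n}\cdot K)+i\varepsilon}
```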

  7. Thermodynamic analysis of onset characteristics in a miniature thermoacoustic Stirling engine

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Zhou, Gang; Li, Qing

    2013-06-01

    This paper analyzes the onset characteristics of a miniature thermoacoustic Stirling heat engine using a thermodynamic analysis method. The governing equations of the components are derived from the basic thermodynamic relations and the linear thermoacoustic theory. By solving the governing equation group numerically, the oscillation frequencies and onset temperatures are obtained. The dependence of the oscillation frequency on the kind of working gas and on the length and diameter of the resonator tube is calculated. Meanwhile, the influences of hydraulic radius and mean pressure on the onset temperature for different working gases are also presented. The calculation results indicate that there exists an optimal dimensionless hydraulic radius that yields the lowest onset temperature, whose value lies in the range of 0.30-0.35 for the different working gases. Furthermore, the amplitude and phase relationships of the pressures and volume flows are analyzed in the time domain. Some experiments have been performed to validate the calculations, and the calculation results agree well with the experimental values. Finally, an error analysis is made, identifying the sources of error in the theoretical calculations.

  8. Full-field stress determination in photoelasticity with phase shifting technique

    NASA Astrophysics Data System (ADS)

    Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng

    2018-04-01

    Photoelasticity is an effective method for evaluating stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference method is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined in the full field through phase shifting. Then, through bicubic interpolation and a derivation of the modified shear-difference method, the internal stress is calculated starting from a point on a free boundary, along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to existing methods; the integration error is reduced when using theoretical photoelastic parameters to calculate the stress component with the same points. Results show that when the value of Δx/Δy approaches one, the error is minimized, and although the interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and a ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.

  9. Dynamic Obstacle Avoidance for Unmanned Underwater Vehicles Based on an Improved Velocity Obstacle Method

    PubMed Central

    Zhang, Wei; Wei, Shilin; Teng, Yanbin; Zhang, Jianku; Wang, Xiufang; Yan, Zheping

    2017-01-01

    In view of a dynamic obstacle environment with motion uncertainty, we present a dynamic collision avoidance method based on the collision risk assessment and improved velocity obstacle method. First, through the fusion optimization of forward-looking sonar data, the redundancy of the data is reduced and the position, size and velocity information of the obstacles are obtained, which can provide an accurate decision-making basis for next-step collision avoidance. Second, according to minimum meeting time and the minimum distance between the obstacle and unmanned underwater vehicle (UUV), this paper establishes the collision risk assessment model, and screens key obstacles to avoid collision. Finally, the optimization objective function is established based on the improved velocity obstacle method, and a UUV motion characteristic is used to calculate the reachable velocity sets. The optimal collision speed of UUV is searched in velocity space. The corresponding heading and speed commands are calculated, and outputted to the motion control module. The above is the complete dynamic obstacle avoidance process. The simulation results show that the proposed method can obtain a better collision avoidance effect in the dynamic environment, and has good adaptability to the unknown dynamic environment. PMID:29186878
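
    The core geometric test of the classic velocity obstacle method, checking whether the relative velocity points into the collision cone subtended by an obstacle, can be sketched as follows. This generic 2-D version is illustrative only; it omits the paper's collision risk assessment, reachable velocity sets, and UUV motion constraints.

```python
import math

def in_velocity_obstacle(p_uuv, v_uuv, p_obs, v_obs, r_combined):
    """Return True if the relative velocity (v_uuv - v_obs) lies inside the
    collision cone toward an obstacle disc of radius r_combined."""
    rx, ry = p_obs[0] - p_uuv[0], p_obs[1] - p_uuv[1]
    vx, vy = v_uuv[0] - v_obs[0], v_uuv[1] - v_obs[1]
    dist = math.hypot(rx, ry)
    if dist <= r_combined:
        return True  # already overlapping
    half_angle = math.asin(r_combined / dist)   # half-angle of collision cone
    angle_to_obs = math.atan2(ry, rx)
    angle_v = math.atan2(vy, vx)
    # smallest signed angular difference, folded into [0, pi]
    diff = abs((angle_v - angle_to_obs + math.pi) % (2.0 * math.pi) - math.pi)
    return diff < half_angle
```

A planner built on this test samples candidate velocities and keeps those for which the function returns False for every key obstacle.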

  10. A new method for detecting small and dim targets in starry background

    NASA Astrophysics Data System (ADS)

    Yao, Rui; Zhang, Yanning; Jiang, Lei

    2011-08-01

    Detection of small visible optical space targets is one of the key issues in the research of long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of noise inherent to the imaging device. Random noise and background movement also increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. Firstly, we establish a reasonable statistical model of the visible optical space image. Secondly, we extract SIFT (Scale-Invariant Feature Transform) features of the image frames and calculate the transform relationship between them, then use this relationship to compensate for the movement of the whole visual field. Thirdly, the influence of stars is removed using an interframe difference method, and a segmentation threshold differentiating candidate targets from noise is found using the OTSU method. Finally, we calculate a statistical quantity at every pixel position to judge whether a target is present. Theoretical analysis shows the relationship between false alarm probability and detection probability at different SNRs. The experimental results show that this method can detect targets efficiently, even when a target passes close to stars.
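
    The OTSU thresholding step mentioned above selects the graylevel that maximizes the between-class variance of the foreground/background split. A compact NumPy sketch of the standard formulation (not the authors' code):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the graylevel maximizing between-class variance (OTSU)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # probability of class 0 at each split
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_total = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)       # between-class variance per split
    sigma_b[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```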

  11. Laser interference patterning methods: Possibilities for high-throughput fabrication of periodic surface patterns

    NASA Astrophysics Data System (ADS)

    Lasagni, Andrés Fabián

    2017-06-01

    Fabrication of two- and three-dimensional (2D and 3D) structures in the micro- and nano-range adds a new degree of freedom to the design of materials by tailoring desired material properties and thus obtaining superior functionality. Such complex designs are only possible using novel fabrication techniques with high resolution, even in the nanoscale range. Starting from a simple concept, transferring the shape of an interference pattern directly to the surface of a material, laser interferometric processing methods have been continuously developed. These methods enable the fabrication of repetitive periodic arrays and microstructures by irradiation of the sample surface with coherent beams of light. This article describes the capabilities of laser interference lithographic methods for the treatment of both photoresists and solid materials. Theoretical calculations are used to obtain the intensity distributions of patterns that can be realized by changing the number of interfering laser beams, their polarization, intensity and phase. Finally, different processing systems and configurations are described, demonstrating the possibility of fast and precise tailoring of material surface microstructures and topographies at industrially relevant scales, as well as several application cases for both methods.
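
    For the simplest case of two coherent plane waves intersecting at an angle, the fringe period and intensity distribution follow directly from superposition. A hedged sketch (symbols and defaults are illustrative; multi-beam patterns generalize this by summing more complex amplitudes):

```python
import numpy as np

def two_beam_intensity(x, wavelength, half_angle, I0=1.0):
    """Fringe pattern of two coherent plane waves of intensity I0 meeting at
    an angle 2*half_angle: I(x) = 2*I0*(1 + cos(2*pi*x/period)), with
    period = wavelength / (2*sin(half_angle))."""
    period = wavelength / (2.0 * np.sin(half_angle))
    intensity = 2.0 * I0 * (1.0 + np.cos(2.0 * np.pi * x / period))
    return intensity, period
```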

  12. Optimization of Interior Permanent Magnet Motor by Quality Engineering and Multivariate Analysis

    NASA Astrophysics Data System (ADS)

    Okada, Yukihiro; Kawase, Yoshihiro

    This paper describes an optimization method based on the finite element method (FEM), using quality engineering and multivariate analysis as the optimization techniques. The method consists of two steps: in Step 1, the influence of the parameters on the output is obtained quantitatively; in Step 2, the number of FEM calculations is reduced. That is, the optimal combination of the design parameters, which satisfies the required characteristic, can be searched for efficiently. In addition, this method is applied to the design of an IPM motor to reduce the torque ripple. The final shape maintains the average torque while reducing the torque ripple by 65%. Furthermore, the amount of permanent magnet material can be reduced.

  13. Modern Approaches to the Computation of the Probability of Target Detection in Cluttered Environments

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.

    The field of computer vision interacts with fields such as psychology, vision research, machine vision, psychophysics, mathematics, physics, and computer science. The focus of this thesis is new algorithms and methods for the computation of the probability of detection (Pd) of a target in a cluttered scene. The scene can be either a natural visual scene such as one sees with the naked eye (visual), or a scene displayed on a monitor with the help of infrared sensors. The relative clutter and the temperature difference between the target and background (ΔT) are defined and then used to calculate a relative signal-to-clutter ratio (SCR) from which the Pd is calculated for a target in a cluttered scene. It is shown how this definition can include many previous definitions of clutter and ΔT. Next, fuzzy and neural-fuzzy techniques are used to calculate the Pd, and it is shown how these methods can give results that correlate well with experiment. The experimental design for measuring the Pd of a target by observers is described. Finally, wavelets are applied to the calculation of clutter, and it is shown how this new wavelet-based definition of clutter can be used to compute the Pd of a target.

  14. Use of the Priestley-Taylor evaporation equation for soil water limited conditions in a small forest clearcut

    USGS Publications Warehouse

    Flint, A.L.; Childs, S.W.

    1991-01-01

    The Priestley-Taylor equation, a simplification of the Penman equation, was used to allow calculations of evapotranspiration under conditions where soil water supply limits evapotranspiration. The Priestley-Taylor coefficient, α, was calculated to incorporate an exponential decrease in evapotranspiration as soil water content decreases. The method is appropriate for use when detailed meteorological measurements are not available. The data required to determine the parameter for the α coefficient are net radiation, soil heat flux, average air temperature, and soil water content. These values can be obtained from measurements or models. The dataset used in this report pertains to a partially vegetated clearcut forest site in southwest Oregon with soil depths ranging from 0.48 to 0.70 m and weathered bedrock below that. Evapotranspiration was estimated using the Bowen ratio method, and the calculated Priestley-Taylor coefficient was fitted to these estimates by nonlinear regression. The calculated Priestley-Taylor coefficient (α) was found to be approximately 0.9 when the soil was near field capacity (0.225 cm3 cm-3). It was not until soil water content was less than 0.14 cm3 cm-3 that soil water supply limited evapotranspiration. The soil reached a final residual water content near 0.05 cm3 cm-3 at the end of the growing season. © 1991.
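
    A Priestley-Taylor estimate with a soil-water-limited α can be sketched as follows. The exponential reduction below a critical water content mirrors the behavior described above, but the exact functional form and the coefficients (theta_crit, alpha_max, k) are illustrative assumptions; the study fitted its own coefficient by nonlinear regression.

```python
import math

def priestley_taylor_et(rn, g, delta, gamma, theta,
                        theta_crit=0.14, alpha_max=0.9, k=8.0):
    """Priestley-Taylor latent heat flux with a soil-water-limited alpha.

    rn, g        -- net radiation and soil heat flux (W m-2)
    delta, gamma -- saturation vapour curve slope and psychrometric
                    constant (kPa K-1)
    theta        -- volumetric soil water content (cm3 cm-3)
    Alpha is held at alpha_max above theta_crit and decays exponentially
    below it (assumed form).
    """
    if theta >= theta_crit:
        alpha = alpha_max
    else:
        alpha = alpha_max * math.exp(-k * (theta_crit - theta))
    return alpha * delta / (delta + gamma) * (rn - g)
```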

  15. Establishment and verification of three-dimensional dynamic model for heavy-haul train-track coupled system

    NASA Astrophysics Data System (ADS)

    Liu, Pengfei; Zhai, Wanming; Wang, Kaiyun

    2016-11-01

    For the long heavy-haul train, the basic principles of the inter-vehicle interaction and train-track dynamic interaction are analysed firstly. Based on the theories of train longitudinal dynamics and vehicle-track coupled dynamics, a three-dimensional (3-D) dynamic model of the heavy-haul train-track coupled system is established through a modularised method. Specifically, this model includes the subsystems such as the train control, the vehicle, the wheel-rail relation and the line geometries. And for the calculation of the wheel-rail interaction force under the driving or braking conditions, the large creep phenomenon that may occur within the wheel-rail contact patch is considered. For the coupler and draft gear system, the coupler forces in three directions and the coupler lateral tilt angles in curves are calculated. Then, according to the characteristics of the long heavy-haul train, an efficient solving method is developed to improve the computational efficiency for such a large system. Some basic principles which should be followed in order to meet the requirement of calculation accuracy are determined. Finally, the 3-D train-track coupled model is verified by comparing the calculated results with the running test results. It is indicated that the proposed dynamic model could simulate the dynamic performance of the heavy-haul train well.

  16. An angular biasing method using arbitrary convex polyhedra for Monte Carlo radiation transport calculations

    DOE PAGES

    Kulesza, Joel A.; Solomon, Clell J.; Kiedrowski, Brian C.

    2018-01-02

    This paper presents a new method for performing angular biasing in Monte Carlo radiation transport codes using arbitrary convex polyhedra to define regions of interest toward which to project particles (DXTRAN regions). The method is derived and is implemented using axis-aligned right parallelepipeds (AARPPs) and arbitrary convex polyhedra. Attention is also paid to possible numerical complications and areas for future refinement. A series of test problems are executed with void, purely absorbing, purely scattering, and 50% absorbing/50% scattering materials. For all test problems, tally results using AARPP and polyhedral DXTRAN regions agree with analog and/or spherical DXTRAN results within statistical uncertainties. In cases with significant scattering the figure of merit (FOM) using AARPP or polyhedral DXTRAN regions is lower than with spherical regions despite the ability to closely fit the tally region. This is because spherical DXTRAN processing is computationally less expensive than AARPP or polyhedral DXTRAN processing. Thus, it is recommended that the speed of spherical regions be considered versus the ability to closely fit the tally region with an AARPP or arbitrary polyhedral region. It is also recommended that short calculations be made prior to final calculations to compare the FOM for the various DXTRAN geometries because of the influence of the scattering behavior.

  17. Analysis by the Residual Method for Estimate Market Value of Land on the Areas with Mining Exploitation in Subsoil under Future New Building

    NASA Astrophysics Data System (ADS)

    Gwozdz-Lason, Monika

    2017-12-01

    This paper attempts to answer some of the following questions: what is the main selling advantage of a plot of land in areas with mining exploitation? Which attributes influence the market value the most? And how should the influence of mining in the subsoil under a future new building be reflected in the market value of a plot with commercial use? This focus is not accidental, as the paper sets out to prove that the subsoil load-bearing capacity, as directly inferred from the local geotechnical properties with mining exploitation, considerably influences the market value of this type of real estate. The analysis and calculations presented here are part of ongoing development work aimed at proposing a new technology and procedures for estimating the value of land belonging to the third geotechnical category. The question was examined both theoretically and empirically. Results and final conclusions were defined on the basis of residual-method calculations together with numerical, statistical and econometric analyses. A market analysis yielded a group of subsoil stabilization costs which depend on the mining operations' interaction, the subsoil parameters, the type of the contemplated structure, its foundations, the selected stabilization method, and its overall area and shape.
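
    The residual method itself reduces to a subtraction: the land value is what remains of the gross development value after deducting all development costs (here including a mining-related subsoil stabilization cost) and the developer's profit. A sketch with an illustrative profit rate; the paper's actual cost structure is richer:

```python
def residual_land_value(gross_development_value, build_costs,
                        stabilization_costs, profit_rate=0.20):
    """Residual method sketch: value left for the land after all costs and
    developer's profit are deducted from the completed scheme's value.
    stabilization_costs carries the mining-influence adjustment discussed
    in the paper; profit_rate is an illustrative assumption."""
    profit = profit_rate * gross_development_value
    return gross_development_value - build_costs - stabilization_costs - profit
```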

  18. A novel Gaussian-Sinc mixed basis set for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.

    2015-08-14

    A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the “localized” and “delocalized” regions. A basis set for each region is combined to make a new basis methodology—a lattice of orthonormal sinc functions is used to represent the “delocalized” regions and the atom-centered Gaussian functions are used to represent the “localized” regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensional separated methodology. Additionally, the Sinc basis is translationally invariant, which allows for the Coulomb singularity to be placed anywhere including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground state Hartree–Fock energies for atoms up to neon, the diatomic systems H{sub 2}, O{sub 2}, and N{sub 2}, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.

  19. Development, validation, and implementation of a patient-specific Monte Carlo 3D internal dosimetry platform

    NASA Astrophysics Data System (ADS)

    Besemer, Abigail E.

    Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site and the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA-approved software for internal dosimetry calculates doses based on the MIRD methodology, which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods, these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference dose values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. Retrospective dosimetry was also calculated for various pre-clinical and clinical patients, and large dosimetric differences were found between conventional organ-level methods and the patient-specific voxelized methods described in this work. The dosimetric impact of various steps in the 3D voxelized dosimetry process was evaluated, including quantitative imaging acquisition, image coregistration, voxel resampling, ROI contouring, CT-based material segmentation, and pharmacokinetic fitting. Finally, a multi-objective treatment planning optimization framework was developed for multi-radiopharmaceutical combination therapies.

  20. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul

    2014-09-01

    This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
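The weighted least-squares regression step that determines the SNAP coefficients can be sketched in a few lines. The descriptor matrix, targets, and weights below are toy stand-ins for real bispectrum data, and the function name is ours, not from FitSnap.py:

```python
import numpy as np

def fit_snap_coeffs(descriptors, targets, weights):
    """Weighted least-squares fit of linear coefficients, in the spirit of the
    SNAP fitting step: each row of `descriptors` stands in for the summed
    bispectrum components of one training configuration (toy data only)."""
    w = np.sqrt(np.asarray(weights, dtype=float))
    coeffs, *_ = np.linalg.lstsq(w[:, None] * descriptors, w * targets, rcond=None)
    return coeffs
```

`np.linalg.lstsq` solves the weighted problem via SVD, which keeps the fit stable even when many bispectrum components make the design matrix ill-conditioned.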

  1. Determination of krypton diffusion coefficients in uranium dioxide using atomic scale calculations

    DOE PAGES

    Vathonne, Emerson; Andersson, David Ragnar Anders; Freyss, Michel; ...

    2016-12-16

    We present a study of the diffusion of krypton in UO2 using atomic scale calculations combined with diffusion models adapted to the system studied. The migration barriers of the elementary mechanisms for interstitial or vacancy-assisted migration are calculated in the DFT + U framework using the nudged elastic band method. The attempt frequencies are obtained from the phonon modes of the defect at the initial and saddle points using empirical potential methods. The diffusion coefficients of Kr in UO2 are then calculated by combining these data with diffusion models accounting for the concentration of vacancies and the interaction of vacancies with Kr atoms. We determined the preferred mechanism for Kr migration and the corresponding diffusion coefficient as a function of the oxygen chemical potential μO or nonstoichiometry. For very hypostoichiometric (or U-rich) conditions, the most favorable mechanism is interstitial migration. For hypostoichiometric UO2, migration is assisted by the bound Schottky defect and the charged uranium vacancy V_U^4−. Around stoichiometry, migration assisted by the charged uranium-oxygen divacancy (V_UO^2−) and V_U^4− is the favored mechanism. Finally, for hyperstoichiometric or O-rich conditions, migration assisted by two V_U^4− dominates. Kr migration is enhanced at higher μO, and in this regime the activation energy lies between 4.09 and 0.73 eV depending on nonstoichiometry. The available experimental values fall in the latter interval. Since it is very probable that these values were obtained for at least slightly hyperstoichiometric samples, our activation energies are consistent with the experimental data, even if further experiments with precisely controlled stoichiometry are needed to confirm these results. Finally, the mechanisms and trends with nonstoichiometry established for Kr are similar to those found in previous studies of Xe.
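The final step described above, combining an attempt frequency and a migration barrier into a diffusion coefficient, follows standard harmonic transition-state theory. The sketch below uses an illustrative jump distance and correlation factor, not values from the paper:

```python
import math

K_B = 8.617333262e-5     # Boltzmann constant in eV/K

def hop_rate(attempt_freq, barrier_ev, temp_k):
    """Harmonic transition-state-theory jump rate (s^-1): nu * exp(-Em / kT)."""
    return attempt_freq * math.exp(-barrier_ev / (K_B * temp_k))

def diffusion_coefficient(attempt_freq, barrier_ev, temp_k,
                          jump_dist=3.87e-10, corr_factor=1.0):
    """D = (1/6) f a^2 Gamma for an isotropic 3D lattice (m^2/s); the jump
    distance and correlation factor here are illustrative placeholders."""
    return corr_factor * jump_dist**2 * hop_rate(attempt_freq, barrier_ev, temp_k) / 6.0
```

With a DFT-computed barrier and a phonon-derived attempt frequency, this Arrhenius form directly yields the strong temperature and stoichiometry dependence discussed in the abstract.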

  2. Stability conditions for exact-exchange Kohn-Sham methods and their relation to correlation energies from the adiabatic-connection fluctuation-dissipation theorem.

    PubMed

    Bleiziffer, Patrick; Schmidtel, Daniel; Görling, Andreas

    2014-11-28

    The occurrence of instabilities, in particular singlet-triplet and singlet-singlet instabilities, in the exact-exchange (EXX) Kohn-Sham method is investigated. Hessian matrices of the EXX electronic energy with respect to the expansion coefficients of the EXX effective Kohn-Sham potential in an auxiliary basis set are derived. The eigenvalues of these Hessian matrices determine whether or not instabilities are present. As in the corresponding Hartree-Fock case, instabilities in the EXX method are related to symmetry breaking of the Hamiltonian operator for the EXX orbitals. In the EXX method, symmetry breaking can easily be visualized by displaying the local multiplicative exchange potential. Examples (N2, O2, and the polyyne C10H2) of instabilities and symmetry breaking are discussed. The relation of the stability conditions for EXX methods to approaches calculating the Kohn-Sham correlation energy via the adiabatic-connection fluctuation-dissipation (ACFD) theorem is discussed. The existence or nonexistence of singlet-singlet instabilities in an EXX calculation is shown to indicate whether or not the frequency integration in the evaluation of the correlation energy is singular in the EXX-ACFD method. This method calculates the Kohn-Sham correlation energy through the ACFD theorem employing, besides the Coulomb kernel, also the full frequency-dependent exchange kernel, and yields highly accurate electronic energies. For the case of singular frequency integrands in the EXX-ACFD method, a regularization is suggested. Finally, we present examples of molecular systems for which the self-consistent field procedure of the EXX as well as the Hartree-Fock method can converge to more than one local minimum depending on the initial conditions.

  3. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by treating the set of right-hand-side vectors as a single matrix. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence behavior of the linear systems, we introduced a control method that eliminates further calculation on already converged vectors.
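The idea of solving several right-hand sides that share one coefficient matrix can be illustrated with a conjugate-gradient solver that operates on all vectors at once and freezes columns as they converge. This is a minimal dense-matrix sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def cg_multi(A, B, tol=1e-10, maxit=500):
    """Unpreconditioned conjugate gradients applied to every right-hand side
    (column of B) at once; columns whose residual drops below tol are frozen
    so no further work is spent on them. A must be symmetric positive definite."""
    X = np.zeros_like(B, dtype=float)
    R = B - A @ X
    P = R.copy()
    rs = np.sum(R * R, axis=0)            # squared residual norm per column
    active = rs > tol * tol
    for _ in range(maxit):
        if not active.any():
            break
        AP = A @ P
        denom = np.sum(P * AP, axis=0)
        alpha = np.where(active, rs / np.where(denom == 0.0, 1.0, denom), 0.0)
        X = X + alpha * P                 # frozen columns get alpha = 0
        R = R - alpha * AP
        rs_new = np.sum(R * R, axis=0)
        beta = np.where(active, rs_new / np.where(rs == 0.0, 1.0, rs), 0.0)
        P = R + beta * P
        rs = rs_new
        active = rs > tol * tol
    return X
```

Batching the columns turns many matrix-vector products into one matrix-matrix product, which is the main source of the speedup the abstract reports.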

  4. Computational predictions of stereochemistry in asymmetric thiazolium- and triazolium-catalyzed benzoin condensations

    PubMed Central

    Dudding, Travis; Houk, Kendall N.

    2004-01-01

    The catalytic asymmetric thiazolium- and triazolium-catalyzed benzoin condensations of aldehydes and ketones were studied with computational methods. Transition-state geometries were optimized by using Morokuma's IMOMO [integrated MO (molecular orbital) + MO method] variation of ONIOM (n-layered integrated molecular orbital method) with a combination of B3LYP/6-31G(d) and AM1 levels of theory, and final transition-state energies were computed with single-point B3LYP/6-31G(d) calculations. Correlations between experiment and theory were found, and the origins of stereoselection were identified. Thiazolium catalysts were predicted to be less selective than triazolium catalysts, a trend also found experimentally. PMID:15079058

  5. Spacecraft angular velocity estimation algorithm for star tracker based on optical flow techniques

    NASA Astrophysics Data System (ADS)

    Tang, Yujie; Li, Jian; Wang, Gangyi

    2018-02-01

    An integrated navigation system often uses a traditional gyro and a star tracker for high-precision navigation, at the cost of large volume, heavy weight, and high expense. With the development of autonomous navigation for deep space and small spacecraft, the star tracker has gradually been used for attitude calculation and direct angular velocity measurement. At the same time, given the dynamic imaging requirements of remote sensing and other imaging satellites, how to measure the angular velocity under dynamic conditions to improve the accuracy of the star tracker is a hotspot of future research. We propose an approach to measure the angular rate without a gyro and to improve the dynamic performance of the star tracker. First, a star extraction algorithm based on morphology is used to extract the star regions, and the stars in the two images are matched by the method of angular distance voting. The displacement of the star image is then measured by an improved optical flow method. Finally, the triaxial angular velocity of the star tracker is calculated from the star vectors using the least-squares method. The method has the advantages of fast matching, strong noise robustness, and good dynamic performance. The triaxial angular velocity of the star tracker can be obtained accurately with these methods, so the star tracker can achieve better tracking performance and dynamic attitude accuracy, laying a good foundation for the wide application of various satellites and complex space missions.
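The final least-squares step, recovering the triaxial angular velocity from star vectors, can be sketched as follows. It assumes the standard rigid-body kinematics s_dot = s × ω for star vectors expressed in the rotating body frame; the function names are ours, not the paper's:

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_velocity(stars, star_rates):
    """Least-squares body angular velocity from unit star vectors and their
    measured time derivatives, using s_dot = s x omega. At least two
    non-parallel stars are needed for a unique solution."""
    A = np.vstack([skew(s) for s in stars])
    b = np.hstack([np.asarray(r) for r in star_rates])
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

Each star contributes three stacked equations, so using many matched stars averages down the optical-flow measurement noise.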

  6. Sequential Change of Wound Calculated by Image Analysis Using a Color Patch Method during a Secondary Intention Healing.

    PubMed

    Yang, Sejung; Park, Junhee; Lee, Hanuel; Kim, Soohyun; Lee, Byung-Uk; Chung, Kee-Yang; Oh, Byungho

    2016-01-01

    Photographs of skin wounds carry the most important information during secondary intention healing (SIH). However, there is no standard method for handling those images and analyzing them efficiently and conveniently. Our aim was to investigate the sequential changes of SIH depending on the body site using a color patch method. We performed a retrospective review of 30 patients (11 facial and 19 non-facial areas) who underwent SIH for the restoration of skin defects and captured sequential photographs with a color patch specially designed for automatically calculating defect and scar sizes. Using this novel image analysis method, skin defect sizes were calculated more accurately (error range: −3.39% to +3.05%). All patients had a smaller scar than the original defect after SIH treatment (decrease: 18.8% to 86.1%), and the facial area showed a significantly higher decrease rate than non-facial areas such as the scalp and extremities (67.05 ± 12.48 vs. 53.29 ± 18.11, P < 0.05). Estimating the date corresponding to half of the final decrement, all facial areas reached it within two weeks (8.45 ± 3.91 days), whereas non-facial areas needed 14.33 ± 9.78 days. From these results on the sequential changes of skin defects, SIH can be recommended as an alternative treatment method for restoration, with more careful dressing during the initial two weeks.

  7. Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth

    NASA Astrophysics Data System (ADS)

    Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.

    2017-12-01

    We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.

  8. Research on Coordinate Transformation Method of GB-SAR Image Supported by 3D Laser Scanning Technology

    NASA Astrophysics Data System (ADS)

    Wang, P.; Xing, C.

    2018-04-01

    In the image plane of GB-SAR, identification of the deformation distribution is usually carried out by manual interpretation. This method requires analysts to have adequate experience of radar imaging and target recognition; otherwise it can easily cause false recognition of the deformation target or region. Therefore, it is very meaningful to connect the two-dimensional (2D) plane coordinate system with the common three-dimensional (3D) terrain coordinate system. To improve the global accuracy and reliability of the transformation from 2D GB-SAR image coordinates to local 3D coordinates, and to overcome the limitations of the traditional similarity-transformation parameter estimation method, 3D laser scanning data is used to assist the transformation of GB-SAR image coordinates. This paper proposes a straight-line fitting method for calculating the horizontal rotation angle: after projection into a consistent imaging plane, the horizontal rotation angle is calculated from the linear characteristics of structures in the radar image and in the 3D coordinate system. Aided by external elevation information from 3D laser scanning technology, we completed the matching of point clouds and pixels on the projection plane according to the geometric projection principle of GB-SAR imaging, realizing the transformation of GB-SAR image coordinates to local 3D coordinates. Finally, the effectiveness of the method is verified by a GB-SAR deformation monitoring experiment on the high slope of the Geheyan dam.

  9. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, along with an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microseconds. The syndrome calculation is performed in hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated using the Chien search. Finally, the error values are computed using Forney's method.
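The syndrome step described above is just polynomial evaluation over a Galois field. A minimal sketch over GF(2^4) with a (15,11)-style code follows; it uses a single-error shortcut in place of the full Berlekamp/Chien machinery, and works at toy scale rather than in the GF(2^5) field of the (31,15) code:

```python
# GF(2^4) arithmetic with primitive polynomial x^4 + x + 1, plus a
# non-systematic Reed-Solomon demo: syndromes via polynomial evaluation,
# then a single-error location/magnitude shortcut.
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011          # reduce modulo the primitive polynomial
for i in range(15, 30):
    EXP[i] = EXP[i - 15]      # wrap-around so gf_mul needs no modulo

def gf_mul(a, b):
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def gf_div(a, b):
    return EXP[(LOG[a] - LOG[b]) % 15] if a else 0

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def poly_eval(p, x):          # p[0] is the highest-degree coefficient
    y = 0
    for c in p:
        y = gf_mul(y, x) ^ c
    return y

# generator g(x) = (x + a)(x + a^2)(x + a^3)(x + a^4); '+' equals '-' in GF(2^m)
g = [1]
for i in range(1, 5):
    g = poly_mul(g, [1, EXP[i]])

msg = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
code = poly_mul(msg, g)       # codeword c(x) = m(x) g(x), length 15

recv = code[:]
recv[6] ^= 9                  # inject one symbol error
S = [poly_eval(recv, EXP[i]) for i in range(1, 5)]   # syndromes S_i = r(a^i)
d = LOG[gf_div(S[1], S[0])]   # one error at degree d gives S2/S1 = a^d
recv[14 - d] ^= gf_div(S[0], EXP[d])   # magnitude e = S1 / a^d
```

For a valid codeword all syndromes vanish, since g(α^i) = 0 for i = 1..4; a single corrupted symbol makes them nonzero and reveals both its position and its value.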

  10. The NNLO QCD soft function for 1-jettiness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, John M.; Ellis, R. Keith; Mondini, Roberto

    We calculate the soft function for the global event variable 1-jettiness at next-to-next-to-leading order (NNLO) in QCD. We focus specifically on the non-Abelian contribution, which, unlike the Abelian part, is not determined by the next-to-leading order result. The calculation uses the known general forms for the emission of one and two soft partons and is performed using a sector-decomposition method that is spelled out in detail. Results are presented in the form of numerical fits to the 1-jettiness soft function for LHC kinematics (as a function of the angle between the incoming beams and the final-state jet) and for generic kinematics (as a function of three independent angles). These fits represent one of the needed ingredients for NNLO calculations that use the N-jettiness event variable to handle infrared singularities.

  11. Λ N → NN EFT potentials and hypertriton non-mesonic weak decay

    NASA Astrophysics Data System (ADS)

    Pérez-Obiol, Axel; Entem, David R.; Nogga, Andreas

    2018-05-01

    The potential for the Λ N → NN weak transition, the main mechanism responsible for the non-mesonic weak decay of hypernuclei, has been developed within the framework of effective field theory (EFT) up to next-to-leading order (NLO). The leading order (LO) and NLO contributions have been calculated in both momentum and coordinate space, and have been organised into the different operators which mediate the Λ N → NN transition. We compare the ranges of the one-meson and two-pion exchanges for each operator. The non-mesonic weak decay of the hypertriton has been computed within the plane-wave approximation using the LO weak potential and modern strong EFT NN potentials. Formally, two methods to calculate the final state interactions among the decay products are presented. We briefly comment on the calculation of the ³ΛH → ³He + π⁻ mesonic weak decay.

  12. The NNLO QCD soft function for 1-jettiness

    DOE PAGES

    Campbell, John M.; Ellis, R. Keith; Mondini, Roberto; ...

    2018-03-19

    We calculate the soft function for the global event variable 1-jettiness at next-to-next-to-leading order (NNLO) in QCD. We focus specifically on the non-Abelian contribution, which, unlike the Abelian part, is not determined by the next-to-leading order result. The calculation uses the known general forms for the emission of one and two soft partons and is performed using a sector-decomposition method that is spelled out in detail. Results are presented in the form of numerical fits to the 1-jettiness soft function for LHC kinematics (as a function of the angle between the incoming beams and the final-state jet) and for generic kinematics (as a function of three independent angles). These fits represent one of the needed ingredients for NNLO calculations that use the N-jettiness event variable to handle infrared singularities.

  13. Model for the Operation of a Monolayer MoS2 Thin-Film Transistor with Charges Trapped near the Channel Interface

    NASA Astrophysics Data System (ADS)

    Hur, Ji-Hyun; Park, Junghak; Kim, Deok-kee; Jeon, Sanghun

    2017-04-01

    We propose a model that describes the operating characteristics of a two-dimensional electron gas (2DEG) in a monolayer transition-metal dichalcogenide thin-film transistor (TFT) having trapped charges near the channel interface. We calculate the drift mobility of carriers scattered by charged defects located in the channel or near the channel interfaces. The calculated drift mobility is a function of the 2DEG density and the areal density of interface traps. Finally, we calculate the model transfer (I_D-V_GS) and output (I_D-V_SD) characteristics and verify them by comparison with experiments performed on monolayer MoS2 TFTs. We find the modeled results to be in excellent agreement with the experiments. The proposed model can be used to measure the interface-trapped charge and trap site densities directly from measured transfer curves, avoiding more complicated and expensive measurement methods.

  14. Evaluation of ultraviolet radiation, ozone and aerosol interactions in the troposphere using automatic differentiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, G.R.; Potra, F.

    1998-10-06

    A major goal of this research was to quantify the interactions between UVR, ozone, and aerosols. One method of quantification was to calculate sensitivity coefficients. A novel aspect of this work was the use of Automatic Differentiation software to calculate the sensitivities. The authors demonstrated the use of ADIFOR for the first time in a dimensional framework. Automatic Differentiation was used to calculate such quantities as: sensitivities of UV-B fluxes to changes in ozone and aerosols in the stratosphere and the troposphere; changes in ozone production/destruction rates to changes in UV-B flux; and aerosol properties, including loading, scattering properties (including relative humidity effects), and composition (mineral dust, soot, sulfate aerosol, etc.). The combined radiation/chemistry model offers an important test of the utility of Automatic Differentiation as a tool in atmospheric modeling.
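The core idea of Automatic Differentiation, propagating derivatives alongside values so that sensitivity coefficients come out exactly rather than by finite differencing, can be illustrated with a minimal dual-number class. The attenuation function is a toy stand-in, not the report's radiation/chemistry model:

```python
import math

class Dual:
    """Minimal forward-mode AD value: (value, derivative) pairs propagated
    together, the same idea ADIFOR applies source-to-source in Fortran."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__
    def exp(self):
        v = math.exp(self.val)
        return Dual(v, v * self.der)   # chain rule: d(e^u) = e^u du

def attenuated_uv_flux(tau):
    """Toy Beer-Lambert attenuation F = F0 * exp(-tau) with F0 = 1
    (an illustrative stand-in for a UV-B flux calculation)."""
    return (Dual(-1.0) * tau).exp()

# seed d(tau)/d(tau) = 1 to obtain dF/dtau alongside F in one evaluation
flux = attenuated_uv_flux(Dual(0.5, 1.0))
```

One pass through the model yields both the flux and its exact sensitivity to the optical depth, which is precisely the kind of coefficient the study computes with ADIFOR.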

  15. The converse approach to NMR chemical shifts from first-principles: application to finite and infinite aromatic compounds

    NASA Astrophysics Data System (ADS)

    Thonhauser, T.; Ceresoli, D.; Marzari, N.

    2009-03-01

    We present first-principles, density-functional theory calculations of the NMR chemical shifts for polycyclic aromatic hydrocarbons, starting with benzene and increasing sizes up to the one- and two-dimensional infinite limits of graphene ribbons and sheets. Our calculations are performed using a combination of the recently developed theory of orbital magnetization in solids, and a novel approach to NMR calculations where chemical shifts are obtained from the derivative of the orbital magnetization with respect to a microscopic, localized magnetic dipole. Using these methods we study on an equal footing the ^1H and ^13C shifts in benzene, pyrene, and coronene; in naphthalene, anthracene, naphthacene, and pentacene; and finally in graphene, graphite, and an infinite graphene ribbon. Our results show very good agreement with experiments and allow us to characterize the trends for the chemical shifts as a function of system size.

  16. Solution of the Skyrme-Hartree–Fock–Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VIII) HFODD (v2.73y): A new version of the program

    DOE PAGES

    Schunck, N.; Dobaczewski, J.; Satuła, W.; ...

    2017-03-27

    Here, we describe the new version (v2.73y) of the code hfodd, which solves the nuclear Skyrme Hartree–Fock or Skyrme Hartree–Fock–Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following new features: (i) full proton–neutron mixing in the particle–hole channel for Skyrme functionals, (ii) the Gogny force in both particle–hole and particle–particle channels, (iii) linear multi-constraint method at finite temperature, (iv) fission toolkit including the constraint on the number of particles in the neck between two fragments, calculation of the interaction energy between fragments, and calculation of the nuclear and Coulomb energy of each fragment, (v) the new version 200d of the code hfbtho, together with an enhanced interface between HFBTHO and HFODD, (vi) parallel capabilities, significantly extended by adding several restart options for large-scale jobs, (vii) the Lipkin translational energy correction method with pairing, (viii) higher-order Lipkin particle-number corrections, (ix) an interface to a program plotting single-particle energies or Routhians, (x) strong-force isospin-symmetry-breaking terms, and (xi) the Augmented Lagrangian Method for calculations with 3D constraints on angular momentum and isospin. Finally, an important bug related to the calculation of the entropy at finite temperature and several other less significant errors of the previously published version were corrected.

  17. Solution of the Skyrme-Hartree–Fock–Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VIII) HFODD (v2.73y): A new version of the program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunck, N.; Dobaczewski, J.; Satuła, W.

    Here, we describe the new version (v2.73y) of the code hfodd, which solves the nuclear Skyrme Hartree–Fock or Skyrme Hartree–Fock–Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following new features: (i) full proton–neutron mixing in the particle–hole channel for Skyrme functionals, (ii) the Gogny force in both particle–hole and particle–particle channels, (iii) linear multi-constraint method at finite temperature, (iv) fission toolkit including the constraint on the number of particles in the neck between two fragments, calculation of the interaction energy between fragments, and calculation of the nuclear and Coulomb energy of each fragment, (v) the new version 200d of the code hfbtho, together with an enhanced interface between HFBTHO and HFODD, (vi) parallel capabilities, significantly extended by adding several restart options for large-scale jobs, (vii) the Lipkin translational energy correction method with pairing, (viii) higher-order Lipkin particle-number corrections, (ix) an interface to a program plotting single-particle energies or Routhians, (x) strong-force isospin-symmetry-breaking terms, and (xi) the Augmented Lagrangian Method for calculations with 3D constraints on angular momentum and isospin. Finally, an important bug related to the calculation of the entropy at finite temperature and several other less significant errors of the previously published version were corrected.

  18. Algorithm for calculations of asymptotic nuclear coefficients using phase-shift data for charged-particle scattering

    NASA Astrophysics Data System (ADS)

    Orlov, Yu. V.; Irgaziev, B. F.; Nabi, Jameel-Un

    2017-08-01

    A new algorithm for the calculation of asymptotic nuclear coefficients, which we call the Δ method, is proved and developed. This method was proposed by Ramírez Suárez and Sparenberg (arXiv:1602.04082) but no proof was given. We apply it to a bound state situated near the channel threshold when the Sommerfeld parameter is quite large within the experimental energy region. In that case, the value of the conventional effective-range function K_l(k²) is actually dominated by the Coulomb term. One of the resulting effects is a wrong description of the energy behavior of the elastic scattering phase shift δ_l reproduced from the fitted total effective-range function K_l(k²), which leads to an improper value of the asymptotic normalization coefficient (ANC). No such problem arises if we fit only the nuclear term. The difference between the total effective-range function and its Coulomb part at real energies equals the nuclear term, so we can proceed with just this Δ method to calculate the pole positions and the ANC. We apply it to the vertices 4He + 12C ↔ 16O and 3He + 4He ↔ 7Be. The calculated ANCs can be used to find the radiative capture reaction cross sections for transfers to the bound final states of 16O as well as those of 7Be.

  19. Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials

    NASA Astrophysics Data System (ADS)

    Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.

    2018-01-01

    The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need for a consistent procedure to choose an appropriate one. We revisit this issue from constrained density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides MnO, FeO, CoO, and NiO are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 a_B. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.

  20. Study on the initial value for the exterior orientation of the mobile vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    A single-camera mobile vision coordinate measurement system uses one camera body and a notebook computer at the measurement site to obtain three-dimensional coordinates. Obtaining accurate approximate values of the exterior orientation is very important for the subsequent calculation. This is a typical space resection problem, and it has been widely studied. Single-photo space resection methods fall mainly into two groups: methods based on a co-angular constraint, represented by the co-angular-constraint pose estimation algorithm and the cone angle law, and the direct linear transformation (DLT). One common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively high demands are placed on the distribution and abundance of control points: the control points must not all lie in the same plane, and at least six non-coplanar control points are needed, which limits its usefulness. The initial value directly influences the convergence and the convergence speed of the subsequent calculation. This paper linearizes the nonlinear collinearity equations, including the distortion terms, by Taylor series expansion, and uses them to calculate the initial value of the camera exterior orientation. Finally, experiments show that the initial value obtained in this way is better.

  1. DFT computational analysis of piracetam.

    PubMed

    Rajesh, P; Gunasekaran, S; Seshadri, S; Gnanasambandan, T

    2014-11-11

    Density functional theory calculations with B3LYP using the 6-31G(d,p) and 6-31++G(d,p) basis sets have been used to determine ground state molecular geometries. The first-order hyperpolarizability (β0) and related properties (β, α0 and Δα) of piracetam are calculated using the B3LYP/6-31G(d,p) method within the finite-field approach. The stability of the molecule has been analyzed by using NBO/NLMO analysis. The calculated first hyperpolarizability shows that the molecule is an attractive candidate for future applications in non-linear optics. The molecular electrostatic potential (MEP) at a point in the space around a molecule gives an indication of the net electrostatic effect produced at that point by the total charge distribution of the molecule. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. A Mulliken population analysis of the atomic charges is also presented. On the basis of the vibrational analysis, the thermodynamic properties of the title compound at different temperatures have been calculated. Finally, the UV-Vis spectra and electronic absorption properties are explained and illustrated from the frontier molecular orbitals.
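The finite-field approach mentioned above extracts response properties from numerical derivatives of the total energy with respect to an applied field. A minimal central-difference sketch, with a toy quadratic energy function standing in for a real B3LYP calculation:

```python
def finite_field_second_derivative(energy, f=1e-3):
    """Central-difference estimate of -d2E/dF2, i.e. a diagonal polarizability
    component in a finite-field calculation (toy numerics, not Gaussian output)."""
    return -(energy(f) - 2.0 * energy(0.0) + energy(-f)) / f**2

# toy quadratic energy E(F) = E0 - 0.5 * alpha * F^2 with alpha = 10
alpha_toy = finite_field_second_derivative(lambda F: 1.0 - 5.0 * F * F)
```

For a purely quadratic energy the central difference is exact; in practice the field strength f balances truncation error against the numerical noise of the underlying electronic-structure calculation.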

  2. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    NASA Astrophysics Data System (ADS)

    Adams, Mike; Smalian, Silva

    2017-09-01

    For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the "Monte-Carlo N-Particle Transport Code System" (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and calculations are time consuming. The problem is therefore to choose an appropriate program for a specific geometry. We compared the results of deterministic programs like MicroShield® and stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen types of containers. We found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculate the dose rate. For cylindrical containers with inner shielding, however, deterministic programs hit their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing; results will be presented in the final abstract.
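The deterministic-versus-stochastic contrast can be illustrated on the simplest possible shielding quantity, uncollided transmission through a slab, where both approaches must agree. This toy sketch is neither MicroShield® nor MCNP®:

```python
import math
import random

def transmission_deterministic(mu, thickness):
    """Uncollided transmission through a slab: exp(-mu * t)."""
    return math.exp(-mu * thickness)

def transmission_monte_carlo(mu, thickness, n=200_000, seed=1):
    """Monte Carlo estimate: sample exponential free paths and count the
    fraction of particles that cross the slab without interacting."""
    rng = random.Random(seed)
    crossed = sum(1 for _ in range(n) if rng.expovariate(mu) > thickness)
    return crossed / n
```

The deterministic value is instant and exact for this trivial geometry, while the stochastic estimate carries statistical noise that shrinks as 1/sqrt(n); the trade-off reverses once the geometry becomes too complex for a closed-form treatment, which is the situation the abstract describes for containers with inner shielding.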

  3. Transport properties in mixtures involving carbon dioxide at low and moderate density: test of several intermolecular potential energies and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Moghadasi, Jalil; Yousefi, Fakhri; Papari, Mohammad Mehdi; Faghihi, Mohammad Ali; Mohsenipour, Ali Asghar

    2009-09-01

    It is the purpose of this paper to extract the unlike intermolecular potential energies of five carbon dioxide-based binary gas mixtures, namely CO2-He, CO2-Ne, CO2-Ar, CO2-Kr, and CO2-Xe, from viscosity data and to compare the calculated potentials with other model potential energies reported in the literature. Dilute transport properties, consisting of the viscosity, diffusion coefficient, thermal diffusion factor, and thermal conductivity of the aforementioned mixtures, are then calculated from these potential energies and compared with literature data. Rather accurate correlations for the viscosity coefficients of the mixtures, covering the temperature range 200 K < T < 3273.15 K, are reproduced from the present unlike intermolecular potential energies. Our estimated accuracies for the viscosity are within ±2%. In addition, the calculated potential energies are used to present smooth correlations for the other transport properties; the accuracies of the binary diffusion coefficients are of the order of ±3%. Finally, the unlike interaction energies and the calculated low-density viscosities have been employed to calculate high-density viscosities using the Vesovic-Wakeham method.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontes, Christopher J.; Zhang, Hong Lin

    We calculated relativistic distorted-wave collision strengths for all possible Δn=0 transitions, where n denotes the valence shell of the ground level, in the 67 Li-like, F-like and Na-like ions with Z in the range 26 ≤ Z ≤ 92. This choice produces 3 transitions with n=2 in the Li-like and F-like ions, and 10 transitions with n=3 in the Na-like ions. For the Li-like and F-like ions, the calculations were made for the six final, or scattered, electron energies E'=0.008, 0.04, 0.10, 0.21, 0.41, and 0.75, where E' is in units of Zeff² Ry, with Zeff = Z - 1.66 for Li-like ions and Zeff = Z - 6.667 for F-like ions. For the Na-like ions, the calculations were made for the six final electron energies E'=0.0025, 0.015, 0.04, 0.10, 0.21, and 0.40, with Zeff = Z - 8.34. In the present calculations, an improved "top-up" method, which employs relativistic plane waves, was used to obtain the high partial-wave contribution for each transition, in contrast to the partial-relativistic Coulomb–Bethe approximation used in previous works by Zhang, Sampson and Fontes [H.L. Zhang, D.H. Sampson, C.J. Fontes, At. Data Nucl. Data Tables 44 (1990) 31; H.L. Zhang, D.H. Sampson, C.J. Fontes, At. Data Nucl. Data Tables 48 (1991) 25; D.H. Sampson, H.L. Zhang, C.J. Fontes, At. Data Nucl. Data Tables 44 (1990) 209]. Those previous works also provided collision strengths for Li-, F- and Na-like ions, but for a more comprehensive set of transitions. Finally, the collision strengths covered in the present work should be more accurate than the corresponding data given in those previous works and are presented here to replace those earlier results.

  5. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method and of the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p. 10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
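    The phase-accumulation picture underlying such simulations can be sketched with a plain Monte Carlo random walk (this toy is not the authors' time-step-independent algorithm; the lumped constant and units are arbitrary): each spin diffuses freely, accumulates phase in a constant gradient, and the ensemble average approaches the analytic attenuation exp(-γ²g²DT³/3).

```python
import math
import random

# Toy MC: free 1D diffusion of n_spins spins in a constant gradient.
# Each spin's phase is phi = gamma*g * integral of x(t) dt, approximated
# by a Riemann sum; |<exp(i*phi)>| is the simulated signal attenuation.

random.seed(0)
gamma_g = 1.0   # gamma*g lumped into one constant (hypothetical units)
D = 1.0         # diffusion coefficient
T = 1.0         # total evolution time
steps = 100
dt = T / steps
n_spins = 10000

re_sum = im_sum = 0.0
for _ in range(n_spins):
    x = phi = 0.0
    for _ in range(steps):
        x += random.gauss(0.0, math.sqrt(2 * D * dt))  # free diffusion step
        phi += gamma_g * x * dt                        # phase accrual in gradient
    re_sum += math.cos(phi)
    im_sum += math.sin(phi)

E_sim = math.hypot(re_sum, im_sum) / n_spins
E_theory = math.exp(-gamma_g**2 * D * T**3 / 3)
print(E_sim, E_theory)  # the two agree up to statistical/discretization error
```

    Unlike the authors' method, this sketch's accuracy does depend on `steps` and `n_spins`, which is exactly the cost their conditional-random-walk approach avoids.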

  6. Design and Performance Calculations of a Propeller for Very High Altitude Flight. Degree awarded by Case Western Univ.

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    1998-01-01

    Reported here is a design study of a propeller for a vehicle capable of subsonic flight in Earth's stratosphere. All propellers presented were required to absorb 63.4 kW (85 hp) at 25.9 km (85,000 ft) while the aircraft cruise velocity was maintained at Mach 0.40. To produce the final design, classic momentum and blade-element theories were combined with two- and three-dimensional results from the Advanced Ducted Propfan Analysis Code (ADPAC), a numerical Navier-Stokes analysis code. The Eppler 387 airfoil was used for each of the constant-section propeller designs compared. Experimental data from the Langley Low-Turbulence Pressure Tunnel were used in the strip-theory design and analysis programs that were written. The experimental data were also used to validate ADPAC at a Reynolds number of 60,000 and a Mach number of 0.20. Experimental and calculated surface pressure coefficients are compared for a range of angles of attack. Since low Reynolds number transonic experimental data were unavailable, ADPAC was used to generate two-dimensional section performance predictions for Reynolds numbers of 60,000 and 100,000 and Mach numbers ranging from 0.45 to 0.75. Surface pressure coefficients are presented for selected angles of attack, in addition to the variation of lift and drag coefficients at each flow condition. A three-dimensional model of the final design was made, from which ADPAC calculated propeller performance. ADPAC performance predictions were compared with strip-theory calculations at the design point. The propeller efficiency predicted by ADPAC was within 1.5% of that calculated by strip-theory methods, although ADPAC predictions of thrust, power, and torque coefficients were approximately 5% lower than the strip-theory results. Simplifying assumptions made in the strip theory account for the differences seen.

  7. Thin-film optical shutter. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matlow, S.L.

    1981-02-01

    A specific embodiment of macroconjugated macromolecules, the poly (p-phenylene)'s, has been chosen as the one most likely to meet all of the requirements of the Thin Film Optical Shutter project (TFOS). The reason for this choice is included. In order to be able to make meaningful calculations of the thermodynamic and optical properties of the poly (p-phenylene)'s a new quantum mechanical method was developed - Equilibrium Bond Length (EBL) Theory. Some results of EBL Theory are included.

  8. Chemical calculations on Cray computers

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.

    1989-01-01

    The influence of recent developments in supercomputing on computational chemistry is discussed with particular reference to Cray computers and their pipelined vector/limited parallel architectures. After reviewing Cray hardware and software, the performance of different elementary program structures is examined, and effective methods for improving program performance are outlined. The computational strategies appropriate for obtaining optimum performance in applications to quantum chemistry and dynamics are discussed. Finally, some discussion is given of new developments and future hardware and software improvements.

  9. Gravity field of Jupiter’s moon Amalthea and the implication on a spacecraft trajectory

    NASA Astrophysics Data System (ADS)

    Weinwurm, Gudrun

    2006-01-01

    Before its final plunge into Jupiter in September 2003, GALILEO made a last 'visit' to one of Jupiter's moons - Amalthea. This final flyby of the spacecraft's successful mission occurred on November 5, 2002. In order to analyse the spacecraft data with respect to Amalthea's gravity field, interior models of the moon had to be provided. The method used for this purpose is based on the numerical integration of infinitesimal volume elements of a three-axial ellipsoid in elliptic coordinates. To derive the gravity field coefficients of the body, the second method of Neumann was applied. Based on the spacecraft trajectory data provided by the Jet Propulsion Laboratory, GALILEO's velocity perturbations at closest approach could be calculated. The harmonic coefficients of Amalthea's gravity field have been derived up to degree and order six, for both homogeneous and reasonable heterogeneous cases. Based on these numbers, the impact on the trajectory of GALILEO was calculated and compared to existing Doppler data, and predictions for future spacecraft flybys were derived. No two-way Doppler data were available during the flyby, and the harmonic coefficients of the gravity field are buried in the one-way Doppler noise. Nevertheless, the generated gravity field models reflect the most likely interior structure of the moon and can serve as a basis for further exploration of the Jovian system.

  10. Using the auxiliary camera for system calibration of 3D measurement by digital speckle

    NASA Astrophysics Data System (ADS)

    Xue, Junpeng; Su, Xianyu; Zhang, Qican

    2014-06-01

    The study of 3D shape measurement by digital speckle temporal sequence correlation has drawn considerable attention because of its advantages; however, the measurement is mainly of the depth z-coordinate, while the horizontal physical coordinates (x, y) are usually expressed only as image pixel coordinates. In this paper, a new approach for system calibration is proposed. With an auxiliary camera, we set up a temporary binocular vision system, which is used for the calibration of the horizontal coordinates (in mm) while the temporal sequence reference speckle sets are calibrated. First, the binocular vision system is calibrated using the traditional method. Then, digital speckles are projected onto the reference plane, which is moved by equal distances in the depth direction, and temporal sequence speckle images are acquired with the camera as reference sets. When the reference plane is in the first and final positions, a crossed fringe pattern is projected onto the plane. The pixel coordinates of the control points are extracted from the images by Fourier analysis, and the corresponding physical coordinates are calculated by the binocular vision system. The physical coordinates corresponding to each pixel of the images are then calculated by an interpolation algorithm. Finally, the x and y corresponding to an arbitrary depth value z are obtained from a geometric formula. Experiments prove that our method can quickly and flexibly measure the 3D shape of an object as a point cloud.

  11. Simulating carbon sequestration using cellular automata and land use assessment for Karaj, Iran

    NASA Astrophysics Data System (ADS)

    Khatibi, Ali; Pourebrahim, Sharareh; Mokhtar, Mazlin Bin

    2018-06-01

    Carbon sequestration has been proposed as a means of slowing the atmospheric and marine accumulation of greenhouse gases. This study used observed and simulated land use/cover changes to investigate and predict carbon sequestration rates in the city of Karaj. Karaj, a metropolis of Iran, has undergone rapid population expansion and associated changes in recent years, and these changes make it suitable as a case study for rapidly expanding urban areas. In particular, high quality agricultural space, green space and gardens have rapidly been transformed into industrial, residential and urban service areas. Five classes of land use/cover (residential, agricultural, rangeland, forest and barren areas) were considered in the study; vegetation and soil samples were taken from 20 randomly selected locations. The level of carbon sequestration was determined for the vegetation samples by calculating the amount of organic carbon present using the dry plant weight method, and for the soil samples by using the method of Walkley and Black. For each class, average values of carbon sequestration in the vegetation and soil samples were calculated to give a carbon sequestration index. A cellular automata approach was used to simulate changes in the classes, and the carbon sequestration indices were then combined with the simulation results to calculate changes in carbon sequestration for each class. It is predicted that, in the 15-year period from 2014 to 2029, much agricultural land will be transformed into residential land, resulting in a severe reduction in the level of carbon sequestration. Results from this study indicate that expansion of forest areas in urban counties would be an effective means of increasing the levels of carbon sequestration. Finally, future opportunities to include carbon sequestration in the simulation of land use/cover changes are outlined.
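    The combination of a cellular automaton with per-class carbon indices can be sketched as follows (the class labels, index values and conversion rule below are invented for illustration, not the study's calibrated model):

```python
# Toy cellular automaton: agricultural cells bordered by enough residential
# cells convert to residential, and the landscape's total carbon-sequestration
# index is re-totalled after the step. All numbers are hypothetical.

CARBON_INDEX = {"res": 0.2, "agr": 1.5, "range": 1.0, "forest": 3.0, "barren": 0.1}

def step(grid):
    """One synchronous CA update on a grid of class labels."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "agr":
                continue
            res_neighbors = sum(
                1
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
                and grid[r + dr][c + dc] == "res"
            )
            if res_neighbors >= 3:  # conversion rule (assumed threshold)
                new[r][c] = "res"
    return new

def total_carbon(grid):
    """Sum the per-cell carbon sequestration indices."""
    return sum(CARBON_INDEX[cell] for row in grid for cell in row)

grid = [
    ["res", "res", "agr"],
    ["res", "agr", "agr"],
    ["forest", "agr", "range"],
]
before = total_carbon(grid)
after = total_carbon(step(grid))
print(before, after)  # the index falls as agricultural cells urbanize
```

    A calibrated version would replace the fixed threshold with transition probabilities fitted to observed land-use maps.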

  12. High efficiency vapor-fed AMTEC system for direct conversion. Appendices for final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, W.G.; Bland, J.J.

    1997-05-23

    This report consists of four appendices for the final report. They are: Appendix A: 700 C Vapor-Fed AMTEC Cell Calculations; Appendix B: 700 C Vapor-Fed AMTEC Cell Parts Drawings; Appendix C: 800 C Vapor-Fed AMTEC Cell Calculations; and Appendix D: 800 C Wick-Pumped AMTEC Cell System Design.

  13. AC Power at Power Frequencies: Bilateral comparison between SASO NMCC and TÜBİTAK UME

    NASA Astrophysics Data System (ADS)

    Çaycı, Hüseyin; Yılmaz, Özlem; AlRobaish, Abdullah M.; AlAnazi, Shafi S.; AlAyali, Ahmed R.; AlRumie, Rashed A.

    2018-01-01

    A supplementary bilateral comparison of AC power measurements at 50/60 Hz between SASO NMCC (GULFMET) and TÜBİTAK UME (EURAMET) was performed with the primary power standards of each partner. The measurement methods and setups of the participants, which are very similar, the measurement results, the calculation of the differences in the results, and the evaluation of the uncertainties are given in this report. Main text: to reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  14. The evolution of human mobility based on the public goods game

    NASA Astrophysics Data System (ADS)

    Yan, Shiqing

    2017-07-01

    We explore the evolution of human mobility behavior based on the public goods game. Using a mean field method, the population distribution across different regions is calculated theoretically. Numerical simulation results show that the correlation between a region's degree and its final population is not significant under a larger human migration rate. Human mobility can effectively promote cooperative behavior and the population balance of different regions; therefore, encouraging individuals to migrate may increase the total benefits of the whole society. Moreover, increasing the cooperation cost reduces the number of cooperators, and the same holds for the correlation between a region's degree and its final population. The results indicate that the total population does not rise dramatically with the region's degree in an unfair society.

  15. Standardization of Tc-99 by two methods and participation in the CCRI(II)-K2.Tc-99 comparison.

    PubMed

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished through participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time at IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracing. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than those obtained with the TDCR06b program; the higher value, calculated with the TDCR07c software, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracer method was a standard (60)Co solution. The sources were prepared from a mixed (60)Co + (99)Tc solution, and a general extrapolation curve of the type N_β(Tc-99)/M(Tc-99) = f[1 - ε(Co-60)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  16. Structural, spectral analysis and DNA studies of heterocyclic thiosemicarbazone ligand and its Cr(III), Fe(III), Co(II), Hg(II), and U(VI) complexes

    NASA Astrophysics Data System (ADS)

    Yousef, T. A.; Abu El-Reash, G. M.; El Morshedy, R. M.

    2013-08-01

    The paper presents a combined experimental and computational study of novel Cr(III), Fe(III), Co(II), Hg(II) and U(VI) complexes of (E)-2-((3-hydroxynaphthalen-2-yl)methylene)-N-(pyridin-2-yl)hydrazinecarbothioamide (H2L). The ligand and its complexes have been characterized by elemental analyses, spectral (IR, UV-vis, 1H NMR and 13C NMR), magnetic and thermal studies. The IR spectra show that H2L coordinates to the metal ions in a mononegative bi- or tridentate manner. The structures are suggested to be octahedral for all complexes except the Hg(II) complex, which is tetrahedral. Theoretical calculations have been performed to obtain the IR spectra of the ligand and its complexes using the AM1, MM, Zindo/1, MM+ and PM3 methods. Satisfactory theoretical-experimental agreement was achieved by the MM method for the ligand and by PM3 for its complexes. DOS calculations, carried out by the MM (ADF) method for the ligand and its Hg complex, led us to conclude that the thiol form of the ligand is more active than the thione form, which explains why most complexation takes place in that form. The IR vibrations of the metal complexes calculated using the PM3 method were the closest to the experimental data, and this method could be used for all complexes. Valuable information is also obtained from the calculation of molecular parameters for all compounds by the above methods (electronegativity of the coordination sites, net dipole moment of the metal complexes, and values of the heat of formation and binding energy), which confirmed that the complexes are more stable than the ligand. The low value of ΔE indicates that the H2L molecule has a high inclination to bind to the metal ions. Furthermore, the kinetic and thermodynamic parameters for the different decomposition steps were calculated using the Coats-Redfern and Horowitz-Metzger methods. Finally, the biochemical studies showed that complexes 2 and 4 have a powerful and complete degradation effect on DNA. For the vast majority of cases, the activity of the ligand is greatly enhanced by the presence of a metal ion; thus, the presented results may be useful in designing new, more active or specific structures.

  17. Analysis of a new composite material for watercraft manufacturing

    NASA Astrophysics Data System (ADS)

    Wahrhaftig, Alexandre; Ribeiro, Henrique; Nascimento, Ademar; Filho, Milton

    2016-09-01

    In this paper, we investigate the properties of an alternative material for use in marine engineering, namely a rigid and light sandwich-structured composite made of expanded polystyrene and fiberglass. Not only does this material have an improved section modulus, but it is also inexpensive, light, easy to manipulate, and commercially available in various sizes. Using a computer program based on the finite element method, we calculated the hogging and sagging stresses and strains acting on a prismatic boat model composed of this material, and determined the minimum sizes and maximum permissible stresses to avoid deformation. Finally, we calculated the structural weight of the resulting vessel for comparison with another structure of comparable dimensions constructed from the commonly used core material Divinycell.

  18. An all-silicone zoom lens in an optical imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Cun-Hua

    2013-09-01

    An all-silicone zoom lens is fabricated. A tunable metal ring is fastened around the side edge of the lens, and a nylon rope linked to a motor is tied around a notch in the metal ring. While the motor is operating, the rope can shrink or release to change the focal length of the lens. A calculation method is developed to obtain the focal length and the zoom ratio. Testing is then carried out, and the measured values agree well with the calculated ones. Finally, the imaging performance of the all-silicone lens is demonstrated. The all-silicone lens has potential uses in cellphone cameras, notebook cameras, micro monitor lenses, etc.

  19. Spin-isospin excitation of 3He with three-proton final state

    NASA Astrophysics Data System (ADS)

    Ishikawa, Souichi

    2018-01-01

    Spin-isospin excitation of the ³He nucleus by a proton-induced charge exchange reaction, ³He(p,n)ppp, at forward neutron scattering angles is studied in a plane wave impulse approximation (PWIA). In PWIA, cross sections of the reaction are written in terms of proton-neutron scattering amplitudes and response functions of the transition from ³He to the three-proton state induced by spin-isospin transition operators. The response functions are calculated with realistic nucleon-nucleon potential models using a Faddeev three-body method. The calculated cross sections agree in substance with the available experimental data. Possible effects arising from the uncertainty of the proton-neutron amplitudes and from three-nucleon interactions in the three-proton system are examined.

  20. General airplane performance

    NASA Technical Reports Server (NTRS)

    Rockfeller, W C

    1939-01-01

    Equations have been developed for the analysis of the performance of the ideal airplane, leading to an approximate physical interpretation of the performance problem. The basic sea-level airplane parameters have been generalized to altitude parameters, and a new parameter has been introduced and physically interpreted. The performance analysis for actual airplanes has been obtained in terms of the equivalent ideal airplane so that the charts developed for use in practical calculations will, for the most part, apply to any type of engine-propeller combination and system of control, the only additional material required being the actual engine and propeller curves for the propulsion unit. Finally, a more exact method for the calculation of the climb characteristics for the constant-speed controllable propeller is presented in the appendix.

  1. Nusselt number and bulk temperature in turbulent Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Bodenschatz, Eberhard; Weiss, Stephan; Shishkina, Olga; International Collaboration for Turbulence Research

    2017-11-01

    We present an algorithm to calculate the Nusselt number (Nu) in measurements of the heat transport in turbulent Rayleigh-Bénard convection under general non-Oberbeck-Boussinesq (NOB) conditions. We further critically analyze the different ways to evaluate the dependence of Nu on the Rayleigh number (Ra) and show the sensitivity of this dependence to the reference temperatures in the bulk and in the top and bottom boundary layers (BLs). Finally, we propose a method to predict the bulk temperature and a way to calculate the reference temperatures of the top and bottom BLs, and we validate them against the Göttingen measurements. The work is supported by the Max Planck Society and the Deutsche Forschungsgemeinschaft (DFG) under the Grant Sh 405/4 - Heisenberg fellowship.
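    The Nusselt number itself is the measured heat current normalized by the purely conductive one, Nu = QH/(λAΔT). A one-line sketch with hypothetical cell parameters (the paper's algorithm additionally corrects this for NOB conditions):

```python
# Nusselt number from measured quantities: heat input Q, cell height H,
# plate area A, temperature difference dT, fluid thermal conductivity lambda.
# The values below are made up purely for illustration.

def nusselt(Q, height, area, delta_T, conductivity):
    """Nu = convective heat current / conductive heat current = Q*H / (lambda*A*dT)."""
    return Q * height / (conductivity * area * delta_T)

Nu = nusselt(Q=500.0, height=1.0, area=0.8, delta_T=4.0, conductivity=0.6)
print(round(Nu, 2))
```

    The NOB subtlety discussed in the abstract enters through the choice of the temperature at which λ (and ΔT across each boundary layer) is evaluated.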

  2. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Jaffe, Richard; Liang, Shoudan; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2002-01-01

    We present results from several projects in the new field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. We have developed a procedure for calculating long-range effects in molecular dynamics using a plane wave expansion of the electrostatic potential; this method is expected to be highly efficient for simulating biological systems on massively parallel supercomputers. We have performed a genomics analysis of a family of actin binding proteins. We have performed quantum mechanical calculations on carbon nanotubes and nucleic acids, simulations that will allow us to investigate possible sources of organic material on the early Earth. Finally, we have developed a model of protobiological chemistry using neural networks.

  3. Internal variation of electron temperature in HII regions

    NASA Astrophysics Data System (ADS)

    Oliveira, V. A.

    2017-11-01

    It is usual to think that if you calculate the same physical property by different methods you should find the same result, or at least results that agree within the margin of error. However, this is not the case when you calculate the abundance of heavy elements in photoionized nebulae: it is possible to find a value at least two times bigger, depending on whether you estimate it from recombination lines or from collisionally excited emission lines. This is called the AD problem; astronomers have been thinking about it since 1967, and there is still no final conclusion. This work aims to shed a small light on the path to a solution of the AD problem, specifically for HII regions and, perhaps, for all types of photoionized nebulae.

  4. A consistent decomposition of the redistributive, vertical, and horizontal effects of health care finance by factor components.

    PubMed

    Hierro, Luis A; Gómez-Álvarez, Rosario; Atienza, Pedro

    2014-01-01

    In studies on the redistributive, vertical, and horizontal effects of health care financing, the sum of the contributions calculated for each financing instrument does not equal the total effect; as a consequence, the final calculations tend to be overestimated or underestimated. The solution proposed here involves an adaptation of the Shapley value to achieve additive results for all the effects and reveals the relative contributions of different instruments to the change in whole-system equity. An understanding of this change would help policy makers attain equitable health care financing. We test the method with the public finance and private payments of the health care systems of Denmark and the Netherlands. Copyright © 2013 John Wiley & Sons, Ltd.
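    The additivity the Shapley value provides can be illustrated with a toy value function over three hypothetical financing instruments (the function below is invented; the paper's actual value function is a measured redistributive effect). Because of an interaction term, naive per-instrument effects would not sum to the whole-system effect, but the Shapley contributions do by construction:

```python
from itertools import combinations
from math import factorial

# Generic Shapley value: phi[i] is the weighted average of i's marginal
# contribution v(S + {i}) - v(S) over all subsets S of the other players.

def shapley(players, v):
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Toy "equity effect" of applying a subset of financing instruments;
# the tax/premium interaction term is what breaks naive additivity.
def v(S):
    effect = 0.0
    if "tax" in S:
        effect += 3.0
    if "premium" in S:
        effect -= 1.0
    if "tax" in S and "premium" in S:
        effect += 0.5  # interaction term, split evenly by Shapley
    if "copay" in S:
        effect -= 0.5
    return effect

phi = shapley(["tax", "premium", "copay"], v)
print(phi, sum(phi.values()))  # contributions sum exactly to v(all) = 2.0
```

    The 0.5 interaction is shared equally between "tax" and "premium", so the three contributions add up to the whole-system value, which is the additivity property the paper exploits.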

  5. Classification of trabeculae into three-dimensional rodlike and platelike structures via local inertial anisotropy.

    PubMed

    Vasilić, Branimir; Rajapakse, Chamith S; Wehrli, Felix W

    2009-07-01

    Trabecular bone microarchitecture is a significant determinant of the bone's mechanical properties and is thus of major clinical relevance in predicting fracture risk. The three-dimensional nature of trabecular bone is characterized by parameters describing the scale, topology, and orientation of its structural elements. However, none of the current methods calculates all three types of parameters simultaneously and in three dimensions. Here the authors present a method that produces a continuous classification of voxels as belonging to platelike or rodlike structures, determines their orientation, and estimates their thickness. The method, dubbed local inertial anisotropy (LIA), treats the image as a distribution of mass density; the orientation of the trabeculae is determined from a tensor of inertia calculated locally at each voxel. The orientation entropies of rods and plates are introduced, which can provide new information about microarchitecture not captured by existing parameters. The robustness of the method to noise corruption, resolution reduction, and image rotation is demonstrated. Further, the method is compared with established three-dimensional parameters including the structure-model index and the topological surface-to-curve ratio. Finally, the method is applied to data acquired in a previous translational pilot study, showing that the trabecular bone of untreated hypogonadal men is less platelike than that of their eugonadal peers.
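    The rod/plate distinction via a local inertia-type tensor can be sketched on synthetic, axis-aligned structures, where the second-central-moment tensor is diagonal so its diagonal entries are the principal values directly (a deliberate simplification: the real LIA method diagonalizes a locally windowed tensor at every voxel and produces a continuous classification):

```python
# Treat occupied voxels as unit point masses and inspect the principal
# values of their second-central-moment tensor: a rod has one direction of
# significant extent, a plate has two. Axis-aligned toys keep the tensor
# diagonal, so no general 3x3 eigensolver is needed here.

def moment_eigenvalues_axis_aligned(voxels):
    """Ascending diagonal entries of the second-central-moment tensor."""
    n = len(voxels)
    mean = [sum(v[d] for v in voxels) / n for d in range(3)]
    return sorted(
        sum((v[d] - mean[d]) ** 2 for v in voxels) / n for d in range(3)
    )

def classify(voxels, eps=1e-9):
    lam1, lam2, lam3 = moment_eigenvalues_axis_aligned(voxels)  # ascending
    if lam2 < eps:   # only one significant direction of extent
        return "rod"
    if lam1 < eps:   # two significant directions of extent
        return "plate"
    return "blob"

rod = [(0, 0, z) for z in range(9)]                      # line along z
plate = [(x, y, 0) for x in range(9) for y in range(9)]  # sheet in the x-y plane
print(classify(rod), classify(plate))  # rod plate
```

    Real data would use a soft anisotropy measure of the eigenvalues rather than a hard `eps` threshold, which is what makes the published classification continuous.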

  6. Application Profile Matching Method for Employees Online Recruitment

    NASA Astrophysics Data System (ADS)

    Sunarti; Rangga, Rahmadian Y.; Marlim, Yulvia Nora

    2017-12-01

    Employees are one of the determining factors of a company's success; thus, reliable human resources are needed to support the survival of the company. This research takes as its case study PT. Asuransi Bina Dana Arta, Tbk Pekanbaru Branch. The employee recruitment system at PT. Asuransi Bina Dana Arta, Tbk Pekanbaru Branch still relies on manual processing of paper application files, so it takes a long time to determine which applications are accepted and which are rejected. A system or application therefore needs to be built that allows the company to easily determine which applicants are accepted or rejected. The Profile Matching Method is a competency assessment process that compares the written, psychological and interview test scores of one applicant with those of the others. PT. Asuransi Bina Dana Arta, Tbk Pekanbaru Branch sets the percentages for calculating the NCF (Core Factor Value) at 60% and the NSF (Secondary Factor Value) at 40%, and sets the weights of the total scores at 40% for the written test, 30% for the psychological test, and 30% for the interview. The final result of this study is a rank for each applicant based on these scores: the higher the final score an applicant obtains, the greater the applicant's chance of filling a position or vacancy. The online recruitment application using the profile matching method can help speed up the employee selection process and employee acceptance decisions. The system can be viewed by directors or owners anywhere because it is online, and it can also be used for other branches of the company.

  7. Quantum Monte Carlo Calculations Applied to Magnetic Molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelhardt, Larry

    2006-01-01

    We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and-upon comparing the results of these calculations with experimental data-have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoidingmore » any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method, nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. 
    With these ideas in hand, we then provide a detailed explanation of the current QMC method in Chapter 4. The remainder of the thesis is devoted to presenting specific results: Chapters 5 and 6 contain articles in which this method has been used to answer general questions that are relevant to broad classes of systems. Then, in Chapter 7, we provide an analysis of four different species of magnetic molecules that have recently been synthesized and studied. In all cases, comparisons between QMC calculations and experimental data allow us to distinguish a viable microscopic model and make predictions for future experiments. In Chapter 8, the infamous "negative sign problem" is described in detail, and we clearly indicate the limitations on QMC that are imposed by this obstacle. Finally, Chapter 9 contains a summary of the present work and the expected directions for future research.
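
The importance-sampling idea described above can be illustrated with a minimal classical analogue: a Metropolis Monte Carlo simulation of a 1D Ising chain whose statistical error is estimated from the sampled data, just as the abstract describes for the quantum case. This is a sketch, not the QMC method of the thesis; the parameters (16 spins, J = 1, βJ = 1) are illustrative.

```python
import numpy as np

def metropolis_ising_chain(n_spins=16, beta=1.0, n_sweeps=2000, seed=0):
    """Metropolis sampling of a 1D Ising chain with periodic boundaries.

    Returns the estimated mean energy per spin and its statistical error,
    taken from the sample standard deviation of per-sweep estimates.
    """
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=n_spins)

    def energy(s):
        return -np.sum(s * np.roll(s, 1))   # J = 1 nearest-neighbour coupling

    samples = []
    for sweep in range(n_sweeps):
        for i in range(n_spins):
            # Energy change from flipping spin i (importance sampling step)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i] = -spins[i]
        if sweep >= n_sweeps // 4:           # discard equilibration sweeps
            samples.append(energy(spins) / n_spins)

    samples = np.array(samples)
    return samples.mean(), samples.std(ddof=1) / np.sqrt(len(samples))

e_mean, e_err = metropolis_ising_chain()
```

For a long chain the exact per-spin energy is -tanh(βJ) ≈ -0.76, so the estimate and its quoted statistical error can be checked directly against an analytic result.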

  8. Thermal-hydraulic analysis capabilities and methods development at NYPA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1987-01-01

    The operation of a nuclear power plant must be regularly supported by various thermal-hydraulic (T/H) analyses that may include final safety analysis report (FSAR) design basis calculations, licensing evaluations, and conservative and best-estimate analyses. The development of in-house T/H capabilities provides the following advantages: (a) it leads to a better understanding of the plant design basis and operating characteristics; (b) methods developed can be used to optimize plant operations and enhance plant safety; (c) such a capability can be used for design reviews, checking vendor calculations, and evaluating proposed plant modifications; and (d) in-house capability reduces the cost of analysis. This paper gives an overview of the T/H capabilities and current methods development activity within the engineering department of the New York Power Authority (NYPA) and focuses specifically on reactor coolant system (RCS) transients and plant dynamic response for non-loss-of-coolant accident events. This paper describes NYPA experience in performing T/H analyses in support of pressurized water reactor plant operation.

  9. The frozen nucleon approximation in two-particle two-hole response functions

    DOE PAGES

    Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...

    2017-07-10

    Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the “frozen” meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting “frozen nucleon approximation”, the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
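
The collapse to a one-dimensional angular integral can be sketched as follows. The integrand below is a toy pion-propagator-like stand-in for the squared MEC matrix element, with the two initial nucleons frozen at rest; the mass parameter and units are illustrative, not the paper's model.

```python
import numpy as np

def frozen_response(q, m_pi=0.14, n_theta=400):
    """Toy 2p-2h response in the frozen nucleon approximation.

    With both initial nucleons at rest, the multi-dimensional phase-space
    integral collapses to a single integral over the emission angle of one
    final nucleon.  The integrand is a propagator-like stand-in, not the
    real electroweak matrix element.
    """
    cos_t = np.linspace(-1.0, 1.0, n_theta)
    # squared momentum flowing through the toy propagator for each angle
    k2 = q**2 * (1.0 - cos_t) / 2.0
    integrand = 1.0 / (k2 + m_pi**2) ** 2
    # trapezoidal rule over the single remaining angular variable
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(cos_t)))

r_q_low = frozen_response(0.5)    # moderate momentum transfer (GeV, illustrative)
r_q_high = frozen_response(1.0)   # higher momentum transfer
```

A single 400-point quadrature replaces what would otherwise be a nested integral over two Fermi-sea momenta, which is the speed-up the abstract highlights.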

  11. A Z-number-based decision making procedure with ranking fuzzy numbers method

    NASA Astrophysics Data System (ADS)

    Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah

    2014-12-01

    The theory of fuzzy sets has been in the limelight of various applications in decision making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision making process is represented in the form of linguistic terms, and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes the restriction and the reliability of the evaluation. The element of reliability in the evaluation is essential, as it will affect the final result. Since this concept is still relatively new, methods that incorporate reliability for solving decision making problems are still scarce. In this paper, a decision making procedure based on Z-numbers is proposed. Due to the limitation of its basic properties, Z-numbers are first transformed into fuzzy numbers for simpler calculations. A method of ranking fuzzy numbers is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
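
One common way to realize the transformation step, following the widely used conversion of Kang et al. (take the centroid of the reliability part as a weight and scale the restriction by its square root), can be sketched with triangular fuzzy numbers. The centroid ranking rule and all numeric values below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def centroid_tri(a, b, c):
    """Centroid (x coordinate) of a triangular fuzzy number (a, b, c)."""
    return (a + b + c) / 3.0

def z_to_fuzzy(A, B):
    """Convert a Z-number Z = (A, B) of triangular fuzzy numbers into a
    regular fuzzy number: take the centroid alpha of the reliability B,
    then scale the restriction A by sqrt(alpha)."""
    alpha = centroid_tri(*B)
    w = np.sqrt(alpha)
    return tuple(w * x for x in A)

def rank_alternatives(z_numbers):
    """Rank alternatives by the centroid of their converted fuzzy numbers
    (larger centroid preferred); returns (ordering, scores)."""
    scores = [centroid_tri(*z_to_fuzzy(A, B)) for A, B in z_numbers]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores

# Two alternatives with identical restriction A but different reliability B
z_hi = ((0.4, 0.5, 0.6), (0.7, 0.8, 0.9))   # more reliable judgement
z_lo = ((0.4, 0.5, 0.6), (0.2, 0.3, 0.4))   # less reliable judgement
order, scores = rank_alternatives([z_hi, z_lo])
```

As the abstract notes, reliability changes the final result: the same evaluation with higher reliability ranks first here.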

  12. A Liquid Level Measurement Technique Outside a Sealed Metal Container Based on Ultrasonic Impedance and Echo Energy

    PubMed Central

    Zhang, Bin; Wei, Yue-Juan; Liu, Wen-Yi; Zhang, Yan-Jun; Yao, Zong; Zhao, Li-Hui; Xiong, Ji-Jun

    2017-01-01

    The proposed method for measuring the liquid level focuses on the ultrasonic impedance and echo energy inside a metal wall, to which the sensor is attached directly, not on ultrasonic waves that penetrate the gas–liquid medium of a container. Firstly, by analyzing the sound field distribution characteristics of the sensor in a metal wall, this paper proposes the concept of an "energy circle" and discusses in detail how to calculate the echo energy under three different states. Meanwhile, an ultrasonic transmitting and receiving circuit is designed to convert the echo energy inside the energy circle into its equivalent electric power. Secondly, in order to find the two critical states of the energy circle in the process of liquid level detection, a program is designed to help calculate the two critical positions automatically. Finally, the proposed method is evaluated through a series of experiments, and the experimental results indicate that it is effective and accurate in calibrating the liquid level outside a sealed metal container. PMID:28106857

  13. Propagation of acoustic waves in a one-dimensional macroscopically inhomogeneous poroelastic material.

    PubMed

    Gautier, G; Kelders, L; Groby, J P; Dazel, O; De Ryck, L; Leclaire, P

    2011-09-01

    Wave propagation in macroscopically inhomogeneous porous materials has received much attention in recent years. The wave equation, derived from the alternative formulation of Biot's theory of 1962, was recently reduced and solved in the case of rigid-frame inhomogeneous porous materials. This paper focuses on the solution of the full wave equation in which the acoustic and the elastic properties of the poroelastic material vary in one dimension. The reflection coefficient of a one-dimensional macroscopically inhomogeneous porous material on a rigid backing is obtained numerically using the state vector (or so-called Stroh) formalism and Peano series. This coefficient can then be used to calculate the scattered field straightforwardly. To validate the method of resolution, results obtained by the present method are compared to those calculated by the classical transfer matrix method at both normal and oblique incidence, and to experimental measurements at normal incidence for a known two-layer porous material considered as a single inhomogeneous layer. Finally, a discussion of the absorption coefficient for various inhomogeneity profiles offers further perspectives. © 2011 Acoustical Society of America
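
The classical transfer matrix comparison at normal incidence can be sketched for fluid-equivalent layers on a rigid backing: translate the surface impedance layer by layer, then form the reflection coefficient. The impedance translation formula is standard; the layer parameters and the simple imaginary-wavenumber loss model below are illustrative stand-ins for real porous material properties.

```python
import numpy as np

def surface_impedance(layers, freq, z_back=None):
    """Translate the surface impedance through a stack of fluid-equivalent
    layers at normal incidence (e^{+j w t} convention).

    layers: list of (rho, c, thickness, loss) ordered from the backing
    outward; `loss` adds a small imaginary part to the wavenumber to mimic
    porous dissipation.  z_back=None means a rigid backing (Z -> infinity).
    """
    omega = 2.0 * np.pi * freq
    z = z_back
    for rho, c, d, loss in layers:
        zc = rho * c                          # characteristic impedance
        k = (omega / c) * (1.0 - 1j * loss)   # complex wavenumber
        t = np.tan(k * d)
        if z is None:
            z = -1j * zc / t                  # layer on a rigid wall
        else:
            z = zc * (z + 1j * zc * t) / (zc + 1j * z * t)
    return z

def reflection_coefficient(layers, freq, rho0=1.213, c0=342.2):
    """Plane-wave reflection coefficient seen from air (rho0, c0)."""
    z_s = surface_impedance(layers, freq)
    z0 = rho0 * c0
    return (z_s - z0) / (z_s + z0)

r_lossless = reflection_coefficient([(1.2, 340.0, 0.05, 0.0)], 500.0)
r_lossy = reflection_coefficient([(1.2, 340.0, 0.05, 0.1)], 500.0)
```

A lossless layer on a rigid wall reflects everything (|R| = 1), while adding dissipation drops |R| below one, which is the behaviour the absorption-coefficient discussion in the abstract builds on.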

  14. A novel heterogeneous training sample selection method on space-time adaptive processing

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground-target detection performance of space-time adaptive processing (STAP) degrades when training samples contaminated by target-like signals render the clutter power non-homogeneous. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the largest distances are preferentially selected to realize dimension reduction. Finally, simulation results with the Mountain-Top data verify the effectiveness of the proposed method.
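
The similarity step can be sketched as follows, treating each training sample's entries as a point set and averaging nearest-neighbour distances (a common "mean Hausdorff" variant). The outlier-rejection rule and the synthetic data are illustrative assumptions, not the paper's full GIP/projection pipeline.

```python
import numpy as np

def mean_hausdorff(x, y):
    """Mean (averaged) Hausdorff distance between two sample vectors,
    treating each vector's entries as a point set on the real line."""
    d = np.abs(x[:, None] - y[None, :])
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def select_training_samples(samples, n_keep):
    """Score each candidate by its average mean-Hausdorff distance to all
    other candidates and keep the n_keep most typical (smallest average),
    discarding likely contaminated samples."""
    n = len(samples)
    avg = np.zeros(n)
    for i in range(n):
        avg[i] = np.mean([mean_hausdorff(samples[i], samples[j])
                          for j in range(n) if j != i])
    keep = np.argsort(avg)[:n_keep]
    return sorted(keep.tolist())

rng = np.random.default_rng(1)
samples = [rng.normal(0.0, 1.0, 32) for _ in range(9)]
contaminated = rng.normal(0.0, 1.0, 32)
contaminated[5] += 50.0            # a strong target-like component
samples.append(contaminated)
keep = select_training_samples(samples, 8)
```

The contaminated sample (index 9) ends up far from the bulk of the candidates and is excluded from the selected training set.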

  15. Invalid-point removal based on epipolar constraint in the structured-light method

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, many invalid points unavoidably arise from shadows, image noise and ambient light. Because the retrieved phase of an invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which is then used for calculating the epipolar lines. Then, according to the retrieved phase map of the captured fringes, the PIC of each pixel is retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the degree to which the epipolar constraint is satisfied. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
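
The invalidation criterion can be sketched directly: the epipolar line of a camera pixel x is l = F x, and the point-to-line distance of the retrieved projector coordinate decides validity. The fundamental matrix below corresponds to a hypothetical rectified camera/projector pair (epipolar lines horizontal), chosen only so the example is easy to check.

```python
import numpy as np

def epipolar_distance(F, x_cam, x_proj):
    """Distance from the retrieved projector image coordinate to the
    epipolar line of the camera pixel (points are homogeneous 3-vectors)."""
    l = F @ x_cam                       # epipolar line in the projector image
    return abs(l @ x_proj) / np.hypot(l[0], l[1])

def remove_invalid_points(F, cam_pts, proj_pts, threshold=1.0):
    """Keep only correspondences whose projector coordinate lies within
    `threshold` pixels of its epipolar line."""
    keep = []
    for xc, xp in zip(cam_pts, proj_pts):
        d = epipolar_distance(F, np.append(xc, 1.0), np.append(xp, 1.0))
        if d <= threshold:
            keep.append((tuple(xc), tuple(xp)))
    return keep

# Rectified pair: x'^T F x = y' - y, so valid matches share the same row
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
valid = remove_invalid_points(F,
                              [(10.0, 20.0), (40.0, 30.0)],
                              [(50.0, 20.0), (90.0, 36.0)],
                              threshold=1.0)
```

The second correspondence sits 6 pixels off its epipolar line (an inaccurate phase retrieval) and is removed; the first is kept.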

  16. Implementation and Development of the Incremental Hole Drilling Method for the Measurement of Residual Stress in Thermal Spray Coatings

    NASA Astrophysics Data System (ADS)

    Valente, T.; Bartuli, C.; Sebastiani, M.; Loreto, A.

    2005-12-01

    The experimental measurement of residual stresses originating within thick coatings deposited on solid substrates by thermal spray plays a fundamentally important role in the preliminary stages of coating design and process parameter optimization. The hole-drilling method is a versatile and widely used technique for the experimental determination of residual stress in the most superficial layers of a solid body. The consolidated procedure, however, can only be applied to metallic bulk materials or to homogeneous, linear elastic, and isotropic materials. The main objective of the present investigation was to adapt the experimental method to the measurement of stress fields built up in ceramic coating/metallic bonding layer structures manufactured by plasma spray deposition. A finite element calculation procedure was implemented to identify the calibration coefficients needed to account for the elastic modulus discontinuities that characterize the layered structure through its thickness. Experimental adjustments were then proposed to overcome problems related to the low thermal conductivity of the coatings. Finally, the number of calculation steps and experimental drilling steps was optimized.
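
The role of the FE-derived calibration coefficients can be sketched with the integral method: the relaxed strains measured after each drilling increment relate linearly to the (assumed step-wise constant) in-depth stresses through a lower-triangular calibration matrix. The matrix entries and stress profile below are hypothetical; in practice the coefficients come from the FE model of the layered coating system described above.

```python
import numpy as np

def stresses_from_relaxed_strains(A, strains):
    """Integral-method evaluation of incremental hole drilling: with
    strains = A @ stresses and A lower triangular (each increment only
    feels stresses in material already drilled), the in-depth stress
    profile follows by solving the linear system."""
    A = np.asarray(A, dtype=float)
    return np.linalg.solve(A, np.asarray(strains, dtype=float))

# Hypothetical 3-increment calibration matrix (microstrain per MPa);
# real coefficients must account for the modulus discontinuities.
A = np.array([[-2.0e-6,  0.0,     0.0],
              [-2.5e-6, -1.8e-6,  0.0],
              [-2.8e-6, -2.2e-6, -1.5e-6]])
true_sigma = np.array([120.0, 80.0, 40.0])   # MPa, assumed stress profile
measured = A @ true_sigma                    # simulated strain readings
sigma = stresses_from_relaxed_strains(A, measured)
```

Optimizing the number of drilling increments, as in the abstract, trades depth resolution against the conditioning of this linear system.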

  17. R Peak Detection Method Using Wavelet Transform and Modified Shannon Energy Envelope

    PubMed Central

    2017-01-01

    Rapid automatic detection of the fiducial points—namely, the P wave, QRS complex, and T wave—is necessary for early detection of cardiovascular diseases (CVDs). In this paper, we present an R peak detection method using the wavelet transform (WT) and a modified Shannon energy envelope (SEE) for rapid ECG analysis. The proposed WTSEE algorithm performs a wavelet transform to reduce the size and noise of ECG signals and creates SEE after first-order differentiation and amplitude normalization. Subsequently, the peak energy envelope (PEE) is extracted from the SEE. Then, R peaks are estimated from the PEE, and the estimated peaks are adjusted from the input ECG. Finally, the algorithm generates the final R features by validating R-R intervals and updating the extracted R peaks. The proposed R peak detection method was validated using 48 first-channel ECG records of the MIT-BIH arrhythmia database with a sensitivity of 99.93%, positive predictability of 99.91%, detection error rate of 0.16%, and accuracy of 99.84%. Considering the high detection accuracy and fast processing speed due to the wavelet transform applied before calculating SEE, the proposed method is highly effective for real-time applications in early detection of CVDs. PMID:29065613
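
The SEE stage can be sketched as follows; the window length, threshold and refractory period are illustrative choices, and the synthetic ECG (Gaussian R-wave bumps) only stands in for real recordings.

```python
import numpy as np

def shannon_energy_envelope(ecg, win=15):
    """Modified Shannon energy envelope: first-order difference,
    amplitude normalisation, -x^2 log(x^2), then a moving average."""
    d = np.diff(ecg)
    d = d / (np.max(np.abs(d)) + 1e-12)
    se = -d**2 * np.log(d**2 + 1e-12)
    kernel = np.ones(win) / win
    return np.convolve(se, kernel, mode="same")

def detect_r_peaks(ecg, fs, envelope_win=15, min_rr=0.3):
    """Estimate R-peak locations as local maxima of the envelope above
    half its maximum, enforcing a refractory period of min_rr seconds."""
    env = shannon_energy_envelope(ecg, envelope_win)
    thr = 0.5 * env.max()
    peaks, last = [], -int(min_rr * fs)
    for i in range(1, len(env) - 1):
        if env[i] > thr and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            if i - last >= int(min_rr * fs):
                peaks.append(i)
                last = i
    return np.array(peaks)

fs = 250                                   # Hz, illustrative sampling rate
t = np.arange(1000)
r_locs = [125, 375, 625, 875]              # one beat per second
ecg = sum(np.exp(-0.5 * ((t - p) / 3.0) ** 2) for p in r_locs)
peaks = detect_r_peaks(ecg, fs)
```

All four synthetic beats are recovered close to their true locations; the refractory period plays the role of the R-R interval validation step mentioned in the abstract.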

  19. An examination of the concept of driving point receptance

    NASA Astrophysics Data System (ADS)

    Sheng, X.; He, Y.; Zhong, T.

    2018-04-01

    In the field of vibration, driving point receptance is a well-established and widely applied concept. However, as demonstrated in this paper, when a driving point receptance is calculated using the finite element (FE) method with solid elements, it does not converge as the FE mesh becomes finer, suggesting that there is a singularity. Hence, the concept of driving point receptance deserves a rigorous examination. In this paper, it is firstly shown that, for a point harmonic force applied on the surface of an elastic half-space, the Boussinesq formula can be applied to calculate the displacement amplitude of the surface if the response point is sufficiently close to the load. Secondly, by applying the Betti reciprocal theorem, it is shown that the displacement of an elastic body near a point harmonic force can be decomposed into two parts, the first being the displacement of an elastic half-space. This decomposition is useful, since it provides a solid basis for the introduction of a contact spring between a wheel and a rail in interaction. However, according to the Boussinesq formula, this decomposition also leads to the conclusion that a driving point receptance is infinite (singular), and therefore undefinable. Nevertheless, driving point receptances have been calculated using different methods. Since the singularity identified in this paper was not appreciated, it was not taken into account in these calculations. Thus, the validity of these calculation methods must be examined; this constitutes the third part of the paper. As the final development of the paper, the above decomposition is utilised to define and determine the driving point receptances required for dealing with wheel/rail interactions.
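
The singularity discussed above follows from the classical static Boussinesq surface deflection, u_z(r) = P(1 - ν²)/(π E r), which grows without bound as the response point approaches the load. A few lines of code make the 1/r divergence explicit; the material constants are illustrative (roughly steel).

```python
import numpy as np

def boussinesq_uz(P, E, nu, r):
    """Static surface deflection of an elastic half-space at distance r
    from a normal point load P (classical Boussinesq solution)."""
    return P * (1.0 - nu**2) / (np.pi * E * r)

# Deflection under a 1 N load as the response point approaches the load
r = np.array([1e-3, 1e-6, 1e-9])           # metres
u = boussinesq_uz(1.0, 2.1e11, 0.3, r)     # E ~ 210 GPa, nu ~ 0.3
```

Each thousand-fold reduction in r multiplies the deflection by a thousand, so the receptance evaluated exactly at the driving point does not converge, which is why ever-finer FE meshes fail to converge as the abstract reports.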

  20. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, C; Huet, C; Barraux, V

    Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison between experimental dose distributions and dose distributions calculated with the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with the 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning …), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
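
The gamma index comparison used here can be sketched in 1D: each reference point is scored by the minimum combined dose-difference/distance-to-agreement metric over the evaluated distribution, and a point passes if gamma ≤ 1. The 3%/3 mm criteria match the abstract; the Gaussian test profiles are illustrative.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
    """1D global gamma index: for each reference point, minimise over the
    evaluated distribution the combined dose-difference (dd, relative to
    the reference maximum) and distance-to-agreement (dta, mm) metric."""
    d_norm = dd * d_ref.max()
    gam = np.empty(len(d_ref))
    for i in range(len(x_ref)):
        dist2 = ((x_eval - x_ref[i]) / dta) ** 2
        dose2 = ((d_eval - d_ref[i]) / d_norm) ** 2
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

def pass_rate(gam):
    """Fraction of reference points with gamma <= 1."""
    return np.mean(gam <= 1.0)

x = np.linspace(0.0, 100.0, 101)                 # positions in mm
d_ref = np.exp(-((x - 50.0) / 20.0) ** 2)        # reference dose profile
d_shift1 = np.exp(-((x - 51.0) / 20.0) ** 2)     # 1 mm shift: within 3 mm DTA
d_shift10 = np.exp(-((x - 60.0) / 20.0) ** 2)    # 10 mm shift: fails in gradient

g_ok = gamma_index_1d(x, d_ref, x, d_shift1)
g_bad = gamma_index_1d(x, d_ref, x, d_shift10)
```

A small spatial shift passes everywhere under 3%/3 mm, while a 10 mm shift fails in the high-gradient region, which is exactly what makes the gamma criterion sensitive to the heterogeneity-induced dose errors the study investigates.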
