Vogl, Matthias
2014-04-01
The paper analyzes the German inpatient capital costing scheme by assessing its cost module calculation. The costing scheme represents the first separate national calculation of performance-oriented capital cost lump sums per DRG. The three steps in the costing scheme are reviewed and assessed: (1) accrual of capital costs; (2) cost-center and cost-category accounting; (3) data processing for capital cost modules. The assessment of each step is based on its level of transparency and efficiency. A comparative view of operating costing and the English costing scheme is given. Advantages of the scheme are low participation hurdles, low calculation effort for G-DRG calculation participants, highly differentiated cost-center/cost-category separation, and advanced patient-based resource allocation. The exclusion of relevant capital costs, nontransparent resource allocation, and unclear capital cost modules limit the managerial relevance and transparency of the capital costing scheme. The scheme generates the technical premises for a change from dual financing by insurances (operating costs) and state (capital costs) to a single financing source. The new capital costing scheme will intensify the discussion on how to solve the current investment backlog in Germany and can assist regulators in other countries with the introduction of accurate capital costing.
NASA Astrophysics Data System (ADS)
De Meij, A.; Vinuesa, J.-F.; Maupas, V.
2018-05-01
The sensitivity of calculated global horizontal irradiation (GHI) values to different microphysics and dynamics schemes in the Weather Research and Forecasting (WRF) model is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes, and land surface models were changed. First, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for Reunion Island for 2014. In general, the model shows the largest bias during the austral summer, indicating that it is less accurate in timing the formation and dissipation of clouds during the summer, when more water vapor is present in the atmosphere than during the austral winter. Second, the sensitivity of calculated GHI values to changes in the microphysics, cumulus parameterization, and land surface models is evaluated. The sensitivity simulations showed that changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme improves the relative bias from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts both the mass and number concentrations of five hydrometeors, which helps improve the calculation of the densities, sizes, and lifetimes of cloud droplets, whereas the single-moment schemes predict only the mass of fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software involves three elements: (1) model equations, (2) boundary conditions, and (3) calculation schemes. A model description file is useful for the first point and partly for the second; the third, however, is difficult to handle, because a variety of calculation schemes is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. With this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is presented.
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After fission or fusion reactor shutdown, the activated structure emits decay photons, and for maintenance operations the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely developed for fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous two-step scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; the results are in good agreement with those of the other participants.
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
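As a concrete illustration of the iteration the abstract describes, the following minimal sketch pairs a growing number of histories with a shrinking relaxation factor. Assumptions are mine: both solvers are toy stubs, and the relaxation factor is taken as the ratio of the current batch size to the cumulative number of histories, a common stochastic-approximation choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_thermal_hydraulics(power):
    # toy stand-in: fuel/coolant temperatures respond to the power shape
    return 550.0 + 50.0 * power

def run_monte_carlo(temperatures, histories):
    # toy stand-in: a "true" cosine power shape plus noise ~ 1/sqrt(histories)
    z = np.linspace(0.0, 1.0, temperatures.size)
    shape = np.sin(np.pi * z) + rng.normal(0.0, 1.0 / np.sqrt(histories), z.size)
    return shape / shape.mean()

def coupled_iteration(n_iters=10, n0=10_000):
    power = np.ones(100)           # relaxed axial power distribution
    total = 0
    for i in range(1, n_iters + 1):
        n_i = n0 * i               # number of histories grows each iteration
        total += n_i
        alpha = n_i / total        # relaxation factor shrinks accordingly
        temps = run_thermal_hydraulics(power)
        # stochastic-approximation update: relax the new, noisy estimate
        power = power + alpha * (run_monte_carlo(temps, n_i) - power)
    return power

print(coupled_iteration()[:5])
```

With this choice of relaxation factor, every simulated history contributes with equal weight to the relaxed power distribution, which is why the factor decreases automatically as the batches grow.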
Proxy-SU(3) symmetry in heavy deformed nuclei
NASA Astrophysics Data System (ADS)
Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.
2017-06-01
Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic treatment of the Nilsson model that allows the above vetting and yet is also transparent in exhibiting the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description opens up the possibility of predicting many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions are shown in a subsequent paper.
NASA Astrophysics Data System (ADS)
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-11-01
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol^-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol^-1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.
NASA Astrophysics Data System (ADS)
Kamata, S.
2017-12-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Dipayan, E-mail: datta.dipayan@gmail.com; Gauss, Jürgen, E-mail: gauss@uni-mainz.de
We report analytical calculations of isotropic hyperfine-coupling constants in radicals using a spin-adapted open-shell coupled-cluster theory, namely, the unitary group based combinatoric open-shell coupled-cluster (COSCC) approach within the singles and doubles approximation. A scheme for the evaluation of the one-particle spin-density matrix required in these calculations is outlined within the spin-free formulation of the COSCC approach. In this scheme, the one-particle spin-density matrix for an open-shell state with spin S and M_S = +S is expressed in terms of the one- and two-particle spin-free (charge) density matrices obtained from the Lagrangian formulation that is used for calculating the analytic first derivatives of the energy. Benchmark calculations are presented for NO, NCO, CH2CN, and two conjugated π-radicals, viz., allyl and 1-pyrrolyl, in order to demonstrate the performance of the proposed scheme.
Calculations of Hubbard U from first-principles
NASA Astrophysics Data System (ADS)
Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.
2006-09-01
The Hubbard U of the 3d transition-metal series, as well as of SrVO3, YTiO3, Ce, and Gd, has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but in some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made a comparison with the constrained local density approximation (LDA) method and found discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results; the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
Qualification of APOLLO2 BWR calculation scheme on the BASALA mock-up
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaglio-Gaudard, C.; Santamarina, A.; Sargeni, A.
2006-07-01
A new neutronic APOLLO2/MOC/SHEM/CEA2005 calculation scheme for BWR applications has been developed by the French Commissariat à l'Énergie Atomique. This scheme is based on the latest calculation methodology (accurate mutual- and self-shielding formalism, MOC treatment of the transport equation) and the recent JEFF3.1 nuclear data library. This paper presents the experimental validation of this new calculation scheme on the BASALA BWR mock-up. The BASALA programme is devoted to measurements of the physical parameters of high-moderation 100% MOX BWR cores, in hot and cold conditions. The experimental validation of the calculation scheme deals with core reactivity, fission rate maps, reactivity worth of void and absorbers (cruciform control blades and Gd pins), as well as the temperature coefficient. Results of the analysis using APOLLO2/MOC/SHEM/CEA2005 show an overestimation of the core reactivity by 600 pcm for BASALA-Hot and 750 pcm for BASALA-Cold. Reactivity worths of gadolinium poison pins and hafnium or B4C control blades are predicted by the APOLLO2 calculation within 2% accuracy. Furthermore, the radial power map is well predicted for every core configuration, including the void configuration and the Hf/B4C configurations: fission rates in the central assembly are calculated within the ±2% experimental uncertainty for the reference cores. The C/E bias on the isothermal moderator temperature coefficient, using the CEA2005 library based on the JEFF3.1 file, amounts to -1.7±0.3 pcm/°C over the range 10-80°C.
Vogl, Matthias
2012-08-30
The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the "one hospital" approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the "one hospital" model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital's cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level.
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite-difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of various methods are also described when the speed of computation is an important consideration.
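To make the order-of-accuracy comparison concrete, here is a self-contained sketch (illustrative only, not code from the paper) contrasting the standard second- and fourth-order central-difference stencils for a second derivative on a smooth test function:

```python
import numpy as np

def d2_second_order(f, i, h):
    # f''(x_i) with O(h^2) truncation error
    return (f[i - 1] - 2.0 * f[i] + f[i + 1]) / h**2

def d2_fourth_order(f, i, h):
    # f''(x_i) with O(h^4) truncation error (5-point stencil)
    return (-f[i - 2] + 16.0 * f[i - 1] - 30.0 * f[i]
            + 16.0 * f[i + 1] - f[i + 2]) / (12.0 * h**2)

for n in (20, 40, 80):
    h = 1.0 / n
    x = np.arange(-2, n + 3) * h      # pad the grid for the 5-point stencil
    f = np.sin(x)
    i = 2 + n // 2                    # interior point at x = 0.5
    exact = -np.sin(x[i])
    print(n, abs(d2_second_order(f, i, h) - exact),
             abs(d2_fourth_order(f, i, h) - exact))
```

Halving h cuts the second-order error by about 4 and the fourth-order error by about 16, which is the "dramatic advantage" at high accuracy the abstract refers to.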
Icing Branch Current Research Activities in Icing Physics
NASA Technical Reports Server (NTRS)
Vargas, Mario
2009-01-01
Current development: A grid-block transformation scheme, which allows the input of grids in arbitrary reference frames, the use of mirror planes, and grids with relative velocities, has been developed. A simple ice-crystal and sand-particle bouncing scheme has been included. An SLD splashing model, based on that developed by William Wright for the LEWICE 3.2.2 software, has been added. A new area-based collection efficiency algorithm will be incorporated which calculates trajectories from inflow block boundaries to outflow block boundaries. This method will be used for calculating and passing collection efficiency data between blade rows for turbomachinery calculations.
NASA Astrophysics Data System (ADS)
Lee, Sheng-Jui; Chen, Hung-Cheng; You, Zhi-Qiang; Liu, Kuan-Lin; Chow, Tahsin J.; Chen, I.-Chia; Hsu, Chao-Ping
2010-10-01
We calculate the electron transfer (ET) rates for a series of heptacyclo[6.6.0.02,6.03,13.014,11.05,9.010,14]-tetradecane (HCTD) linked donor-acceptor molecules. The electronic coupling factor was calculated by the fragment charge difference (FCD) [19] and the generalized Mulliken-Hush (GMH) [20] schemes. We found that the FCD is less prone to problems commonly seen in the GMH scheme, especially when the coupling values are small. For a 3-state case where the charge transfer (CT) state is coupled with two different locally excited (LE) states, we tested the 3-state approach for the GMH scheme [30] and found that it works well with the FCD scheme. A simplified direct diagonalization based on Rust's 3-state scheme was also proposed and tested. This simplified scheme does not require a manual assignment of the states, and it yields coupling values that are largely similar to those from the full Rust approach. The overall ET rates were also calculated.
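For reference, the two-state GMH coupling is an explicit formula in the adiabatic energy gap, the dipole-moment difference along the CT direction, and the transition dipole. A minimal sketch with illustrative numbers (the inputs would come from an excited-state calculation on the dyad):

```python
import math

def gmh_coupling(delta_e, delta_mu, mu_tr):
    """Two-state GMH: H_ab = |mu_tr| * dE / sqrt(dmu^2 + 4 mu_tr^2).
    delta_e: adiabatic energy gap; delta_mu: dipole-moment difference;
    mu_tr: transition dipole (delta_mu and mu_tr in the same units)."""
    return abs(mu_tr) * delta_e / math.sqrt(delta_mu**2 + 4.0 * mu_tr**2)

# e.g. gap 1.0 eV, dipole difference 10 D, transition dipole 0.5 D
print(gmh_coupling(1.0, 10.0, 0.5))   # ~0.05 eV
```

The formula becomes numerically delicate when delta_mu and mu_tr are both small, which is consistent with the abstract's observation that the GMH scheme is problematic for small couplings.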
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data, along with an example case, for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external-flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth-order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams-type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D-based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below it, and solved for each region assuming a flat plate with the computed pressure distribution. Water is assumed to follow the surface streamlines; hence, starting at the stagnation zone, any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated, the geometry is modified by adding the ice at each control volume in the surface-normal direction.
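A minimal sketch of the fourth-order Runge-Kutta streamline integration mentioned above; the flow field here is a toy stand-in for the panel-code flow evaluation:

```python
import numpy as np

def velocity(x):
    # placeholder flow field: uniform stream plus a weak vortex at the origin
    u = np.array([1.0, 0.0, 0.0])
    r2 = x[0]**2 + x[1]**2 + 1e-12
    return u + 0.1 * np.array([-x[1], x[0], 0.0]) / r2

def rk4_step(x, dt):
    # classical fourth-order Runge-Kutta step along the local velocity
    k1 = velocity(x)
    k2 = velocity(x + 0.5 * dt * k1)
    k3 = velocity(x + 0.5 * dt * k2)
    k4 = velocity(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x = np.array([-5.0, 0.5, 0.0])        # seed point upstream of the body
path = [x]
for _ in range(200):
    x = rk4_step(x, 0.05)
    path.append(x)
print(path[-1])
```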
NASA Technical Reports Server (NTRS)
Lee, H.-W.; Lam, K. S.; Devries, P. L.; George, T. F.
1980-01-01
A new semiclassical decoupling scheme (the trajectory-based decoupling scheme) is introduced in a computational study of vibrational-to-electronic energy transfer for a simple model system that simulates collinear atom-diatom collisions. The probability of energy transfer (P) is calculated quasiclassically using the new scheme as well as quantum mechanically as a function of the atomic electronic-energy separation (lambda), with overall good agreement between the two sets of results. Classical mechanics with the new decoupling scheme is found to be capable of predicting resonance behavior whereas an earlier decoupling scheme (the coordinate-based decoupling scheme) failed. Interference effects are not exhibited in P vs lambda results.
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
A New Quantum Watermarking Based on Quantum Wavelet Transforms
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Naseri, Mosayeb; Gheibi, Reza; Baghfalaki, Masoud; Rasoul Pourarian, Mohammad; Farouk, Ahmed
2017-06-01
Quantum watermarking is a technique to embed specific information, usually the owner's identification, into quantum cover data for purposes such as copyright protection. In this paper, a new scheme for quantum watermarking based on quantum wavelet transforms is proposed which includes scrambling, embedding, and extracting procedures. The invisibility and robustness of the proposed watermarking method are confirmed by simulation. The invisibility of the scheme is examined by the peak signal-to-noise ratio (PSNR) and the histogram calculation, and the robustness of the scheme is analyzed by the bit error rate (BER) and the two-dimensional correlation (Corr2-D) calculation. The simulation results indicate that the proposed watermarking scheme offers not only acceptable visual quality but also good resistance against different types of attack. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, Iran
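The two fidelity metrics named above are standard; here is a short classical sketch of how PSNR and BER would be computed on pixel and bit arrays (the quantum embedding itself is not modeled, and the arrays are illustrative):

```python
import numpy as np

def psnr(cover, watermarked, max_val=255.0):
    # peak signal-to-noise ratio between cover and watermarked images
    mse = np.mean((cover.astype(float) - watermarked.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def bit_error_rate(sent_bits, recovered_bits):
    # fraction of watermark bits flipped by an attack
    return np.mean(np.asarray(sent_bits) != np.asarray(recovered_bits))

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64))
marked = np.clip(cover + rng.integers(-2, 3, cover.shape), 0, 255)
print(psnr(cover, marked))                          # higher => less visible
print(bit_error_rate([1, 0, 1, 1], [1, 0, 0, 1]))   # 0.25
```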
Plume trajectory formation under stack tip self-enveloping
NASA Astrophysics Data System (ADS)
Gribkov, A. M.; Zroichikov, N. A.; Prokhorov, V. B.
2017-10-01
The phenomenon of stack tip self-enveloping and its influence on the conditions of plume formation and on the trajectory of plume motion are considered. Processes occurring in the initial part of the plume are described for the interaction between vertically directed flue gases leaving the stack and a horizontally moving air flow at high wind velocities, which leads to the formation of a flag-like plume. Conditions responsible for the origin and evolution of the interaction between these flows are demonstrated. For the first time, a plume formed under these conditions without bifurcation is registered, and a photo image of it is presented. A scheme for calculating the plume trajectory is proposed, whose quantitative characteristics are obtained from field observations. The wind velocity and direction, air temperature, and atmospheric turbulence at the level of the initial part of the trajectory were obtained from an automatic meteorological system (mounted on the outer parts of the 250-m-high stack no. 1 at the Naberezhnye Chelny TEPP plant), as well as from photographs and theodolite sighting of smoke puffs' trajectories, taking into account their velocity within the initial part. The calculation scheme is supplemented with a new acting force, the force of self-enveloping. A comparison of the new calculation scheme with the previous one reveals a significant contribution of this force to the development of the trajectory. A comparison of the full-scale field data with the results of the calculation according to the proposed new scheme is made. The proposed calculation scheme has allowed us to extend the application of the existing technique to the range of high wind velocities. This approach makes it possible to simulate and investigate the plume trajectory and the full rise height above the stack mouth, depending on various regime and meteorological parameters and on the interrelation between the dynamic and thermal components of the rise, as well as to obtain a universal expression for determining the plume rise height for different classes of atmospheric stability.
A fast numerical scheme for causal relativistic hydrodynamics with dissipation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro
2011-08-01
Highlights: (i) We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. (ii) Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. (iii) Since we use a Riemann solver for the advection steps, our method can capture shocks very accurately. Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process with the timescale for collision of constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and use piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculations into an inviscid step and a dissipative step, a Riemann solver can be used to obtain the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high-energy phenomena of astrophysics and nuclear physics.
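A schematic of the splitting structure described above, with toy operators (assumptions mine: first-order upwind advection stands in for the Riemann-solver flux, and a single linear relaxation equation, advanced by its exact solution, stands in for the Israel-Stewart dissipative variables):

```python
import numpy as np

def advect(u, dt, c=1.0, dx=1.0):
    # toy inviscid step: first-order upwind advection on a periodic grid
    return u - c * dt / dx * (u - np.roll(u, 1))

def relax_exact(q, u_eq, dt, tau=1e-3):
    # piecewise exact solution of dq/dt = -(q - u_eq)/tau, stable for any dt
    return u_eq + (q - u_eq) * np.exp(-dt / tau)

def strang_step(u, q, dt):
    # Strang splitting: half advection, full stiff relaxation, half advection
    u = advect(u, 0.5 * dt)
    q = relax_exact(q, u, dt)
    u = advect(u, 0.5 * dt)
    return u, q

u = np.exp(-np.linspace(-3, 3, 64) ** 2)   # smooth initial profile
q = np.zeros_like(u)                        # dissipative flux variable
for _ in range(100):
    u, q = strang_step(u, q, dt=0.5)
print(u.sum())
```

The point of the exact relaxation update is that dt can exceed tau by orders of magnitude without instability, which is exactly the stiffness problem the abstract describes.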
Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth
2015-10-13
We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for perturbations other than an electric field. We apply this scheme to the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London-orbital and velocity-gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.
Rotational stellar structures based on the Lagrangian variational principle
NASA Astrophysics Data System (ADS)
Yasutake, Nobutoshi; Fujisawa, Kotaro; Yamada, Shoichi
2017-06-01
A new method for computing multi-dimensional stellar structures is proposed in this study. For stellar evolution calculations, the Henyey method is now the de facto standard, but it basically assumes spherical symmetry. One of the difficulties for deformed stellar-evolution calculations is tracing the potentially complex movements of each fluid element. Our new method, by contrast, is well suited to following such movements, since it is based on Lagrangian coordinates. The scheme is also based on the variational principle, as adopted in studies of the pasta structures inside neutron stars. Our scheme could be a major breakthrough for evolution calculations of any type of deformed star: proto-planets, proto-stars, proto-neutron stars, etc.
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite-difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite-difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model (Misra & Pullin, 1997).
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods, and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern.
NASA Astrophysics Data System (ADS)
Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru
2014-12-01
Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also preserve the current difference for each SU to achieve better real-time performance. Extensive simulation results have validated that the proposed scheme outperforms existing ones under the impact of different attack patterns and different numbers of malicious SUs.
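The combination step at the core of such schemes is Dempster's rule; a minimal sketch over the two-hypothesis frame of spectrum sensing (channel idle H0 / occupied H1, plus the ignorance set Theta; the BPA values are illustrative):

```python
def dempster_combine(m1, m2):
    """Fuse two BPAs given as dicts over focal sets 'H0', 'H1', 'Theta'."""
    def meet(a, b):
        # intersection of focal sets on the frame {H0, H1}
        if a == 'Theta':
            return b
        if b == 'Theta' or a == b:
            return a
        return None                    # empty intersection => conflict
    combined = {'H0': 0.0, 'H1': 0.0, 'Theta': 0.0}
    conflict = 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += va * vb
            else:
                combined[c] += va * vb
    # normalize by the non-conflicting mass (Dempster's rule)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two SUs' BPAs; a trust-weighted scheme would discount these before fusing
m_su1 = {'H0': 0.6, 'H1': 0.2, 'Theta': 0.2}
m_su2 = {'H0': 0.5, 'H1': 0.3, 'Theta': 0.2}
print(dempster_combine(m_su1, m_su2))
```

In a trust-aware scheme, each SU's BPA would first be discounted by its trustworthiness degree (moving mass toward Theta) before the combination above, so that reports from suspected malicious SUs carry less weight.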
An Exact Integration Scheme for Radiative Cooling in Hydrodynamical Simulations
NASA Astrophysics Data System (ADS)
Townsend, R. H. D.
2009-04-01
A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semidiscrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
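To see why exact integration removes the stability restriction of explicit cooling updates, consider the case where the cooling function is locally a power law, for which the semidiscrete cooling equation has a closed-form solution. A hedged sketch under that assumption (parameter values illustrative; a full implementation would stitch together piecewise power-law fits of the cooling function):

```python
import math

def exact_cooling_step(T0, t_cool, alpha, dt, T_floor=1e2):
    """Exact update of dT/dt = -(T0/t_cool) * (T/T0)**alpha over one step,
    assuming Lambda(T) ~ T**alpha locally; t_cool is the cooling time at T0."""
    if abs(alpha - 1.0) < 1e-12:
        T_new = T0 * math.exp(-dt / t_cool)
    else:
        base = 1.0 - (1.0 - alpha) * dt / t_cool
        if base <= 0.0:                # the gas reaches the floor within dt
            return T_floor
        T_new = T0 * base ** (1.0 / (1.0 - alpha))
    return max(T_new, T_floor)

# gas at 1e6 K, cooling time 1e3 yr, Lambda ~ T^-0.7, hydro step of 500 yr
print(exact_cooling_step(1.0e6, 1.0e3, -0.7, 5.0e2))
```

Because the update is exact for the assumed power law, the hydro step is never limited by the cooling time, unlike an explicit scheme that must resolve t_cool.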
Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.
Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk
2015-08-01
The purpose of this study was to evaluate grouping schemes for exposure to total dust among cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken in nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job-factory combinations according to average dust exposure showed the best prediction performance and the highest elasticity among the grouping schemes examined. Our findings suggest that a grouping method based on ranking of job-factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures.
Using time-dependent density functional theory in real time for calculating electronic transport
NASA Astrophysics Data System (ADS)
Schaffhauser, Philipp; Kümmel, Stephan
2016-01-01
We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.
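A minimal sketch of the absorbing-boundary idea (not the authors' implementation; only the absorbing half is shown, and its antiabsorbing counterpart would inject amplitude at the source instead of removing it):

```python
import numpy as np

def absorbing_mask(n, width=20):
    # smooth mask equal to 1 in the interior, falling to 0 at the grid edges
    mask = np.ones(n)
    ramp = np.sin(0.5 * np.pi * np.arange(width) / width) ** 2
    mask[:width] = ramp
    mask[-width:] = ramp[::-1]
    return mask

n = 256
x = np.linspace(-20.0, 20.0, n)
psi = np.exp(-x**2 + 2.0j * x)          # Gaussian wave packet moving right
mask = absorbing_mask(n)
for _ in range(100):
    # ... one propagation step of the time-dependent Kohn-Sham orbital ...
    psi *= mask                          # damp outgoing amplitude at the edges
print(np.trapz(np.abs(psi)**2, x))       # norm decreases as charge is drained
```

The drained norm plays the role of the current into the drain; minimizing reflections from the mask region is what the careful boundary design in the paper is about.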
Calculating lattice thermal conductivity: a synopsis
NASA Astrophysics Data System (ADS)
Fugallo, Giorgia; Colombo, Luciano
2018-04-01
We provide a tutorial introduction to the modern theoretical and computational schemes available to calculate the lattice thermal conductivity in a crystalline dielectric material. While some important topics in thermal transport will not be covered (including thermal boundary resistance, electronic thermal conduction, and thermal rectification), we aim at: (i) framing the calculation of thermal conductivity within the general non-equilibrium thermodynamics theory of transport coefficients, (ii) presenting the microscopic theory of thermal conduction based on the phonon picture and the Boltzmann transport equation, and (iii) outlining the molecular dynamics schemes to calculate heat transport. A comparative and critical addressing of the merits and drawbacks of each approach will be discussed as well.
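As a worked example of the phonon-picture route in point (ii), the single-mode relaxation-time expression for the lattice conductivity can be evaluated directly once mode data are available. A hedged sketch with made-up mode data standing in for a real phonon calculation:

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def mode_heat_capacity(omega, T):
    # Bose-Einstein heat capacity per phonon mode
    x = hbar * omega / (kB * T)
    return kB * x**2 * np.exp(x) / (np.exp(x) - 1.0) ** 2

# toy mode data: angular frequencies (rad/s), group velocities (m/s), lifetimes (s)
omega = np.array([2e13, 4e13, 6e13])
v_g   = np.array([5e3, 3e3, 1e3])
tau   = np.array([2e-11, 1e-11, 5e-12])
V     = 1e-28            # normalization volume, m^3

# isotropic single-mode relaxation time: kappa = (1/3V) sum_l C_l v_l^2 tau_l
kappa = np.sum(mode_heat_capacity(omega, 300.0) * v_g**2 * tau) / (3.0 * V)
print(kappa, "W/(m K)")  # order-of-magnitude toy value
```

In a real calculation the mode sum runs over the Brillouin zone and the lifetimes come from anharmonic scattering rates, which is where the Boltzmann-transport machinery of the tutorial enters.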
Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de
2016-02-07
We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme, which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (-6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (-15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while the variant with neutral fragments is numerically more efficient at reasonable accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Z.; Ching, W.Y.
Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy- and k-dependent GW correction scheme for the orthogonalized linear-combination-of-atomic-orbitals based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g …
NASA Astrophysics Data System (ADS)
Shinya, A.; Ishihara, T.; Inoue, K.; Nozaki, K.; Kita, S.; Notomi, M.
2018-02-01
We propose an optical parallel adder based on a binary decision diagram that can calculate simply by propagating light through electrically controlled optical pass gates. The CARRY and inverted-CARRY operations are multiplexed in one circuit by a wavelength-division multiplexing scheme to reduce the number of optical elements, and only a single gate constitutes the critical path for a one-digit calculation. The processing time reaches picoseconds per digit when we use 100-μm-long optical pass gates, which is ten times faster than a CMOS circuit.
NASA Astrophysics Data System (ADS)
Johnson, M. T.
2010-10-01
The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers on either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which allows the estimation of the transfer velocity of any gas as a function of wind speed, temperature, and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment, which is available in the supplementary online material accompanying this paper, along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
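The paper's scheme is implemented in R; purely as an illustration of the underlying two-layer resistance calculation, here is a hedged Python sketch. The waterside quadratic wind-speed form (a Wanninkhof-type k_660 = 0.31 u10^2 in cm/h with Schmidt-number scaling) and the crude airside expression are common parameterizations standing in for the scheme's actual choices:

```python
import math

def k_water(u10, Sc, k660_coeff=0.31):
    # waterside transfer velocity: cm/h converted to m/s, Schmidt-number scaled
    k660 = k660_coeff * u10**2 * math.sqrt(660.0 / Sc)
    return k660 / 3.6e5

def k_air(u10):
    # placeholder airside transfer velocity (m/s); real schemes use drag-based forms
    return 1e-3 * (0.2 * u10 + 0.3)

def total_transfer_velocity(u10, Sc, H_cc):
    """Two-layer model: 1/K = 1/k_w + 1/(H_cc * k_a),
    H_cc being the dimensionless gas-over-liquid Henry solubility."""
    kw, ka = k_water(u10, Sc), k_air(u10)
    return 1.0 / (1.0 / kw + 1.0 / (H_cc * ka))

# e.g. a CO2-like gas: Sc ~ 660 at 20 C, H_cc ~ 1.2, wind speed 7 m/s
print(total_transfer_velocity(7.0, 660.0, 1.2), "m/s")
```

For sparingly soluble gases the waterside term dominates, while for very soluble gases the airside term controls the flux; the scheme's gas-specific inputs (solubility and molar volume) decide which regime applies.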
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014), and is based on applying the complex-step derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains, the performance of the proposed approach is analyzed.
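The complex-step derivative approximation at the heart of the scheme is easy to demonstrate on a scalar function; a minimal sketch using the classic Squire-Trapp test function (not an example from the paper, where the same idea is applied to perturbed weak-form residuals):

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    # f'(x) ~ Im[f(x + i*h)]/h: no subtraction, so h can be arbitrarily small
    return (f(x + 1j * h)).imag / h

def forward_difference(f, x, h):
    # classical one-sided difference, limited by subtractive cancellation
    return ((f(x + h) - f(x)) / h).real

f = lambda x: cmath.exp(x) / (cmath.cos(x) ** 3 + cmath.sin(x) ** 3)
x0 = 1.5
print(complex_step_derivative(f, x0))   # accurate to machine precision
print(forward_difference(f, x0, 1e-8))  # accuracy limited by round-off
print(forward_difference(f, x0, 1e-20)) # destroyed by cancellation (gives 0.0)
```

This is exactly why the paper's tangent matrices are exact to computer precision: the imaginary perturbation never suffers the round-off that forces a compromise choice of step size in real-valued finite differences.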
Self-match based on polling scheme for passive optical network monitoring
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua
2018-06-01
We propose a self-match based on polling scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. This simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and relax the sensitivity requirement on the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.
NASA Astrophysics Data System (ADS)
Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng
2018-03-01
The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves the SCF convergence. The effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on the a posteriori indicator, we demonstrate two schemes of self-adaptive configuration for the SCF iteration.
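A minimal sketch of standard Kerker-preconditioned linear mixing (assumptions mine: one-dimensional density and plain linear mixing; the paper's modified q-dependence for inhomogeneous systems is not reproduced here):

```python
import numpy as np

def kerker_mix(rho_in, rho_out, dx=0.2, alpha=0.5, q0=1.0):
    # damp the density residual in reciprocal space by q^2/(q^2 + q0^2),
    # which suppresses long-wavelength (small-q) charge sloshing
    resid_q = np.fft.fft(rho_out - rho_in)
    q = 2.0 * np.pi * np.fft.fftfreq(rho_in.size, d=dx)
    G = q**2 / (q**2 + q0**2)          # Kerker factor; G = 0 at q = 0
    return rho_in + alpha * np.fft.ifft(G * resid_q).real

# toy SCF-like residual dominated by a long-wavelength component
x = np.linspace(0.0, 20.0, 128, endpoint=False)
rho_in = np.ones_like(x)
rho_out = rho_in + 0.1 * np.sin(2.0 * np.pi * x / 20.0)
rho_new = kerker_mix(rho_in, rho_out)
print(np.max(np.abs(rho_new - rho_in)))  # long-wavelength update is damped
```

Note that G vanishes at q = 0, so the total charge is preserved by the mixing step; the screening length 1/q0 is the knob the modified scheme effectively adapts to the system.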
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
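The two-point A + B/L^α extrapolation itself reduces to a closed-form weighted average of the two correlation energies; the sketch below (Python; the exponent and the energies are illustrative placeholders, not values from the paper) shows the arithmetic:

def cbs_two_point(e_lo, e_hi, l_lo=2, l_hi=3, alpha=2.4):
    """Complete-basis-set limit of E(L) = E_CBS + B/L**alpha from two
    energies with cardinal numbers l_lo and l_hi (e.g. DZ and TZ).
    """
    w_lo, w_hi = l_lo**alpha, l_hi**alpha
    return (e_hi * w_hi - e_lo * w_lo) / (w_hi - w_lo)

# Hypothetical CCSD correlation energies (hartree) in DZ and TZ basis sets.
print(cbs_two_point(-0.2201, -0.2564))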
An extrapolation scheme for solid-state NMR chemical shift calculations
NASA Astrophysics Data System (ADS)
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physical approaches involve several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. With the extrapolation scheme, the estimated values depend only weakly on the low-level density functional theory calculation; the approach is therefore efficient, since the low-level calculation can be performed roughly.
Numerical experiments with a symmetric high-resolution shock-capturing scheme
NASA Technical Reports Server (NTRS)
Yee, H. C.
1986-01-01
Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
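As a concrete illustration of the TVD mechanism discussed above (though not of Yee's symmetric scheme specifically), the following Python sketch advances the linear advection equation with a minmod-limited second-order flux; the limiter suppresses the oscillations an unlimited second-order scheme would produce at discontinuities:

import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at extrema, otherwise the smaller slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advect(u, c):
    """One step of a minmod-limited second-order scheme for u_t + a u_x = 0
    on a periodic grid, with Courant number c = a*dt/dx in (0, 1].

    The limited anti-diffusive correction keeps second-order accuracy in
    smooth regions while preventing new extrema at discontinuities.
    """
    du = np.roll(u, -1) - u                    # u[i+1] - u[i]
    slope = minmod(du, np.roll(du, 1))         # limited slope in cell i
    face = u + 0.5 * (1.0 - c) * slope         # value at face i+1/2
    return u - c * (face - np.roll(face, 1))   # conservative update

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square pulse
for _ in range(100):
    u = tvd_advect(u, c=0.5)                   # advects without oscillations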
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
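Of the synthesis schemes compared above, the arithmetic mean process is the simplest to state; a minimal sketch (assuming the atlas CTs have already been deformed into the patient's MR space and resampled to one voxel grid) is:

import numpy as np

def pseudo_ct_arithmetic_mean(deformed_atlas_cts):
    """Voxel-wise arithmetic mean of atlas CT volumes (HU) that have
    already been deformed into the patient's MR space; registration and
    histogram matching are assumed to have been done upstream.
    """
    return np.stack(deformed_atlas_cts, axis=0).mean(axis=0)

# Six hypothetical deformed atlas volumes on a shared voxel grid.
atlases = [np.zeros((96, 96, 64)) for _ in range(6)]
pseudo_ct = pseudo_ct_arithmetic_mean(atlases)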
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
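The adaptive particle sampling idea, allocating MC histories to spots in proportion to their current optimized intensities, can be sketched as follows (Python; the minimum-histories floor is an assumed detail, not taken from the paper):

import numpy as np

def histories_per_spot(spot_intensities, n_total, floor=10):
    """Allocate Monte Carlo histories to pencil-beam spots in proportion
    to their current optimized intensities.

    The 'floor' keeps a minimum number of histories for every active
    spot and is an assumed detail.
    """
    w = np.clip(np.asarray(spot_intensities, dtype=float), 0.0, None)
    counts = np.floor(n_total * w / w.sum()).astype(int)
    counts[w > 0.0] = np.maximum(counts[w > 0.0], floor)
    return counts  # may differ slightly from n_total due to flooring

print(histories_per_spot([5.0, 0.0, 1.0, 14.0], n_total=10000))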
NASA Astrophysics Data System (ADS)
Liu, Cheng-Ji; Li, Zhi-Hui; Bai, Chen-Ming; Si, Meng-Meng
2018-02-01
The concept of judgment space was proposed by Wang et al. (Phys. Rev. A 95, 022320, 2017), and it was used to study some important properties of quantum entangled states based on local distinguishability. In this study, we construct 15 kinds of seven-qudit quantum entangled states in the sense of permutation, calculate their judgment spaces, and propose a distinguishability rule to make the judgment space clearer. Based on this rule, we study the local distinguishability of the 15 kinds of seven-qudit quantum entangled states and then propose a (k, n) threshold quantum secret sharing scheme. Finally, we analyze the security of the scheme.
Optimization research of railway passenger transfer scheme based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Ni, Xiang
2018-05-01
The optimization of railway passenger transfer schemes can provide strong support for the railway passenger transport system, and its essence is path search. This paper realizes the calculation of passenger transfer schemes for high-speed railway, given the time and the stations of departure and arrival. The specific method generates a passenger transfer service network for high-speed railway, establishes an optimization model, and searches it with an ant colony algorithm. Finally, an analysis is made of the scheme from LanZhouxi to BeiJingXi based on the high-speed railway network of China in 2017. The results show that the transfer network and model have relatively high practical value and operational efficiency.
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis-inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a specified target pressure distribution; the method uses a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
A new radiation infrastructure for the Modular Earth Submodel System (MESSy, based on version 2.51)
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Jöckel, Patrick; Tost, Holger; Kunze, Markus; Gellhorn, Catrin; Brinkop, Sabine; Frömming, Christine; Ponater, Michael; Steil, Benedikt; Lauer, Axel; Hendricks, Johannes
2016-06-01
The Modular Earth Submodel System (MESSy) provides an interface to couple submodels to a base model via a highly flexible data management facility (Jöckel et al., 2010). In this paper we present the four new radiation-related submodels RAD, AEROPT, CLOUDOPT, and ORBIT. The submodel RAD (including the shortwave radiation scheme RAD_FUBRAD) simulates the radiative transfer, the submodel AEROPT calculates the aerosol optical properties, the submodel CLOUDOPT calculates the cloud optical properties, and the submodel ORBIT is responsible for Earth orbit calculations. These submodels are coupled via the standard MESSy infrastructure and are largely based on the original radiation scheme of the general circulation model ECHAM5, expanded, however, with additional features. These features comprise, among others, user-friendly and flexibly controllable (by namelists) online radiative forcing calculations by multiple diagnostic calls of the radiation routines. With this, it is now possible to calculate the radiative forcing (instantaneous as well as stratosphere-adjusted) of various greenhouse gases simultaneously in a single simulation, as well as the radiative forcing of cloud perturbations. Examples of online radiative forcing calculations in the ECHAM/MESSy Atmospheric Chemistry (EMAC) model are presented.
Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger
2013-08-21
A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu PET (positron emission tomography) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of PET imaging agents.
Implementation of the high-order schemes QUICK and LECUSSO in the COMMIX-1C Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, K.; Sun, J.G.; Sha, W.T.
Multidimensional analysis computer programs based on the finite volume method, such as COMMIX-1C, have been commonly used to simulate thermal-hydraulic phenomena in engineering systems such as nuclear reactors. In COMMIX-1C, first-order schemes with respect to both space and time are used. In many situations, however, such as flow recirculations and stratifications with steep gradients of the velocity and temperature fields, high-order difference schemes are necessary for an accurate prediction of the fields. For these reasons, two second-order finite difference numerical schemes, QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and LECUSSO (Local Exact Consistent Upwind Scheme of Second Order), have been implemented in the COMMIX-1C computer code. The formulations were derived for general three-dimensional flows with nonuniform grid sizes. Numerical oscillation analyses for QUICK and LECUSSO were performed. To damp the unphysical oscillations which occur in calculations with high-order schemes at high mesh Reynolds numbers, a new FRAM (Filtering Remedy and Methodology) scheme was developed and implemented. To be consistent with the high-order schemes, the pressure equation and the boundary conditions for all the conservation equations were also modified to be of second order. The new capabilities in the code are listed. Test calculations were performed to validate the implementation of the high-order schemes. They include the test of the one-dimensional nonlinear Burgers equation, two-dimensional scalar transport in two impinging streams, von Kármán vortex shedding, shear-driven cavity flow, Couette flow, and circular pipe flow. The calculated results were compared with available data; the agreement is good.
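On a uniform grid the QUICK interpolation named above takes the compact form phi_f = 6/8 phi_C + 3/8 phi_D - 1/8 phi_U, with the quadratic biased toward the two upstream nodes; a periodic-grid Python sketch (uniform spacing assumed, unlike the nonuniform-grid formulation implemented in COMMIX-1C) is:

import numpy as np

def quick_face_values(phi, velocity_positive=True):
    """QUICK interpolation of cell-centered phi to faces i+1/2 on a
    uniform periodic grid: phi_f = 6/8 phi_C + 3/8 phi_D - 1/8 phi_U,
    where C is the upstream-adjacent node, D the downstream node, and
    U the far-upstream node.
    """
    if velocity_positive:
        c, d, u = phi, np.roll(phi, -1), np.roll(phi, 1)
    else:
        c, d, u = np.roll(phi, -1), phi, np.roll(phi, -2)
    return 0.75 * c + 0.375 * d - 0.125 * u

phi = np.sin(np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False))
phi_faces = quick_face_values(phi)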
NASA Astrophysics Data System (ADS)
Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke
2014-05-01
Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting model (WRF) and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One of the reasons for developing such a scheme, rather than using 4D-Var for example, is that variational radar assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The whole scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is then used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is then calculated based on the saturation adjustment processes; (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases which occurred in the summer of 2011/2012 in New Zealand were used to evaluate the new scheme; different forecast scores were calculated and showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
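Step (i), converting reflectivity to rain water, is typically done by inverting an assumed reflectivity-rainwater relation; the sketch below uses Z = 43.1 + 17.5 log10(rho*qr) (Sun and Crook, 1997), a common choice in radar assimilation, since the abstract does not state which relation the authors adopt:

import numpy as np

def rainwater_from_reflectivity(dbz, rho_air=1.0):
    """Rain water mixing ratio (g/kg) from radar reflectivity (dBZ) by
    inverting Z = 43.1 + 17.5*log10(rho_air * qr) (Sun and Crook 1997).

    This particular Z-qr relation is an assumption for illustration only.
    """
    return 10.0 ** ((dbz - 43.1) / 17.5) / rho_air

print(rainwater_from_reflectivity(35.0))  # ~0.34 g/kg at unit air density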
Evaluation of the Performance of the Hybrid Lattice Boltzmann Based Numerical Flux
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.
2016-06-01
It is well known that the numerical scheme is a key factor in the stability and accuracy of a Navier-Stokes solver. Recently, a new hybrid lattice Boltzmann numerical flux scheme (HLBFS) has been developed by Shu's group. It combines two different LBFS schemes by a switch function and solves the Boltzmann equation instead of the Euler equation. In this article, the main objective is to evaluate the ability of this HLBFS scheme with our in-house cell-centered, hybrid-mesh-based Navier-Stokes code. Its performance is examined by several widely used benchmark test cases. Comparisons between calculated and experimental results show that the scheme can capture shock waves as well as resolve boundary layers.
Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map
2014-01-01
We present a novel image encryption algorithm using a Chebyshev polynomial for permutation and substitution and a Duffing map for substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm offers a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
NASA Astrophysics Data System (ADS)
Yu, Yuting; Cheng, Ming
2018-05-01
Aiming at various configuration schemes for the inertial measurement units of a strapdown inertial navigation system, a tetrahedral skew configuration and a coaxial orthogonal configuration built from nine low-cost IMUs were selected for the system. The performance index, reliability, and fault diagnosis ability of the navigation system were calculated and simulated. The analysis shows that the reliability and reconfiguration capability of the skew configuration scheme are superior to those of the orthogonal configuration scheme, while the performance index and fault diagnosis ability of the two systems are similar. The work in this paper provides a strong reference for the selection in engineering applications.
Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems
NASA Technical Reports Server (NTRS)
Morris, Chris
2013-01-01
Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem, and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.
A Three-Dimensional Variational Assimilation Scheme for Satellite AOD
NASA Astrophysics Data System (ADS)
Liang, Y.; Zang, Z.; You, W.
2018-04-01
A three-dimensional variational data assimilation scheme is designed for satellite AOD based on the IMPROVE (Interagency Monitoring of Protected Visual Environments) equation. The observation operator that simulates AOD from the control variables is established by the IMPROVE equation. All 16 control variables in the assimilation scheme are the mass concentrations of aerosol species from the Model for Simulating Aerosol Interactions and Chemistry scheme, so as to take advantage of this scheme in providing comprehensive analyses of species concentrations and size distributions while remaining computationally efficient. The assimilation scheme saves computational resources, as the IMPROVE equation is a quadratic equation. A single-point observation experiment shows that the information from the single-point AOD is effectively spread horizontally and vertically.
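In generic form, the 3D-Var analysis minimizes J(x) = 0.5 (x-xb)' B^-1 (x-xb) + 0.5 (y-H(x))' R^-1 (y-H(x)); the Python sketch below solves a toy linear-H version of this problem (in the scheme above, H would instead be the quadratic IMPROVE equation, and the matrices here are made-up placeholders):

import numpy as np
from scipy.optimize import minimize

def threedvar(xb, B, y, H, R):
    """Minimize J(x) = 0.5*(x-xb)' B^-1 (x-xb) + 0.5*(y-Hx)' R^-1 (y-Hx)
    for a linear observation operator H given as a plain matrix.
    """
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

    def cost_and_grad(x):
        dxb, dy = x - xb, y - H @ x
        j = 0.5 * dxb @ Binv @ dxb + 0.5 * dy @ Rinv @ dy
        g = Binv @ dxb - H.T @ Rinv @ dy
        return j, g

    return minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B").x

# Toy problem: three aerosol-like control variables, one AOD observation.
xb = np.array([1.0, 2.0, 0.5])              # background state
B = 0.2 * np.eye(3)                         # background error covariance
H = np.array([[0.4, 0.3, 0.8]])             # linearized observation operator
y, R = np.array([2.5]), np.array([[0.01]])  # observation and its error
print(threedvar(xb, B, y, H, R))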
ON THE USE OF SHOT NOISE FOR PHOTON COUNTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zmuidzinas, Jonas, E-mail: jonas@caltech.edu
Lieu et al. have recently claimed that it is possible to substantially improve the sensitivity of radio-astronomical observations. In essence, their proposal is to make use of the intensity of the photon shot noise as a measure of the photon arrival rate. Lieu et al. provide a detailed quantum-mechanical calculation of a proposed measurement scheme that uses two detectors and conclude that this scheme avoids the sensitivity degradation that is associated with photon bunching. If correct, this result could have a profound impact on radio astronomy. Here I present a detailed analysis of the sensitivity attainable using shot-noise measurement schemes that use either one or two detectors, and demonstrate that neither scheme can avoid the photon bunching penalty. I perform both semiclassical and fully quantum calculations of the sensitivity, obtaining consistent results, and provide a formal proof of the equivalence of these two approaches. These direct calculations are furthermore shown to be consistent with an indirect argument based on a correlation method that establishes an independent limit to the sensitivity of shot-noise measurement schemes. Furthermore, these calculations are directly applicable to the regime of interest identified by Lieu et al. Collectively, these results conclusively demonstrate that the photon-bunching sensitivity penalty applies to shot-noise measurement schemes just as it does to ordinary photon counting, in contradiction to the fundamental claim made by Lieu et al. The source of this contradiction is traced to a logical fallacy in their argument.
Improved finite difference schemes for transonic potential calculations
NASA Technical Reports Server (NTRS)
Hafez, M.; Osher, S.; Whitlow, W., Jr.
1984-01-01
Engquist and Osher (1980) have introduced a finite difference scheme for solving the transonic small disturbance equation, taking into account cases in which only compression shocks are admitted. Osher et al. (1983) studied a class of schemes for the full potential equation. It is proved that these schemes satisfy a new discrete 'entropy inequality' which rules out expansion shocks. However, the conducted analysis is restricted to steady two-dimensional flows. The present investigation is concerned with the adoption of a heuristic approach. The full potential equation in conservation form is solved with the aid of a modified artificial density method, based on flux biasing. It is shown that, with the current scheme, expansion shocks are not possible.
Seino, Junji; Nakai, Hiromi
2012-06-28
An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed on diatomic molecules such as HX and X₂ (X = F, Cl, Br, I, and At) and hydrogen halide clusters, (HX)ₙ (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods, since the former scales linearly with respect to the system size and has a small prefactor.
NASA Astrophysics Data System (ADS)
Wu, Fenxiang; Xu, Yi; Yu, Linpeng; Yang, Xiaojun; Li, Wenkai; Lu, Jun; Leng, Yuxin
2016-11-01
Pulse front distortion (PFD) is mainly induced by chromatic aberration in femtosecond high-peak-power laser systems; it can temporally distort the pulse at the focus and therefore decrease the peak intensity. A novel measurement scheme is proposed to directly measure the PFD of ultra-intense, ultrashort laser pulses; it works without any extra effort to obtain the desired reference pulse and also greatly reduces the size of the optical elements required for the measurement. The PFD measured in an experimental 200 TW/27 fs laser system is in good agreement with the calculated result, which demonstrates the validity and feasibility of the method. In addition, a simple compensation scheme based on the combination of a concave lens and a parabolic lens is designed and proposed to correct the PFD. Theoretical calculation shows that the PFD of the above experimental laser system can be almost completely corrected by using this compensator with proper parameters.
Trajectory data privacy protection based on differential privacy mechanism
NASA Astrophysics Data System (ADS)
Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong
2018-05-01
In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user's trajectory data; secondly, the algorithm forms polygons from the protected points and the adjacent, frequently accessed points selected from the accessing-point database, and then calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the polygon centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. The experiments show that the proposed algorithm runs quickly, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
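The final perturbation step is the standard Laplace mechanism; a minimal Python sketch (the sensitivity value is an assumed placeholder, in practice derived from how far one trajectory point can move a centroid) is:

import numpy as np

def perturb_centroid(centroid, epsilon, sensitivity=1.0, rng=None):
    """Laplace mechanism: add noise of scale sensitivity/epsilon to a
    polygon centroid (lon, lat).
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(0.0, sensitivity / epsilon, size=2)
    return np.asarray(centroid, dtype=float) + noise

print(perturb_centroid([116.35, 39.91], epsilon=0.5))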
DOE Office of Scientific and Technical Information (OSTI.GOV)
Havu, V.; Fritz Haber Institute of the Max Planck Society, Berlin; Blum, V.
2009-12-01
We consider the problem of developing O(N)-scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
Seino, Junji; Nakai, Hiromi
2012-10-14
The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to the four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to the one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to the two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculations. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed for the diatomic molecules HX and X₂ and the hydrogen halide clusters (HX)ₙ (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear scaling with respect to the system size and a small prefactor.
NASA Astrophysics Data System (ADS)
Abramov, E. Y.; Sopov, V. I.
2017-10-01
In this study, using the example of a traction network area with high asymmetry of power supply parameters, the sequence of a comparative assessment of power losses in a DC traction network under parallel and traditional separated operating modes of the traction substation feeders is shown. Experimental measurements were carried out under these operating modes. The calculation results, based on statistical processing, showed a decrease of power losses in the contact network and an increase in the feeders. The changes proved to be significant, which demonstrates the importance of the potential effects of converting traction network areas to parallel feeder operation. An analytical method for calculating the average power losses for different feeding schemes of the traction network was developed. On its basis, the dependences of the relative losses were obtained by varying the difference in feeder voltages. The calculation results showed that transition to a two-sided feeding scheme is not justified for the considered traction network area. A larger reduction in the total power loss can be obtained with a smaller difference of the feeders' resistance and/or a more symmetrical sectioning scheme of the contact network.
Validation of a new reference depletion calculation for thermal reactors
NASA Astrophysics Data System (ADS)
Canbakan, Axel
Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Even if their aim is to correct the cross-section deviations, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase in computing capacity, this method is starting to show its limits in precision and can be replaced by a subgroup method. Originally used for fast neutron reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to provide a validation of the subgroup method that is as precise as possible, first without burnup and then with an isotopic depletion study. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modeling choices. Moreover, other parameters are validated to suggest a new reference scheme for fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method Of Characteristics flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energy discretization for a subgroup method. Finally, other techniques such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation are studied. A DRAGON5 scheme is also validated as it shows interesting elements: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first involves offering a new reference lattice scheme for Pressurized Water Reactors to DRAGON5 users. The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and the use of a flux calculation based on MOC in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in a geometrical region gives better results than MOC with a constant model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.
NASA Astrophysics Data System (ADS)
Kamata, Shunichi
2018-01-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection, for a bottom-heated convective layer. Adopting this new definition of l, I investigate the thermal evolution of Saturnian icy satellites, Dione and Enceladus, under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a thick global subsurface ocean suggested from geophysical analyses. Dynamical tides may be able to account for such an amount of heat, though the reference viscosity of Dione's ice and the ammonia content of Dione's ocean need to be very high. Otherwise, a thick global ocean in Dione cannot be maintained, implying that its shell is not in a minimum stress state.
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
The development of 3D boundary-element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The boundary element method (BEM) in the Laplace domain, together with a time-stepping scheme for the numerical inversion of the Laplace transform, is used to solve the boundary value problem. A modified stepping scheme with a varied integration step for the calculation of quadrature coefficients, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions, was applied. The problem of a force acting on the end of a poroelastic prismatic cantilever was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that the computational efficiency is better when the combined formulas are used.
Conservative and bounded volume-of-fluid advection on unstructured grids
NASA Astrophysics Data System (ADS)
Ivey, Christopher B.; Moin, Parviz
2017-12-01
This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, which differs from contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in efforts to satisfy conservation. Instead of prescribing complicated flux polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to calculate the extrusion during the next iteration. The choice of nominal flux polyhedron impacts the cost and accuracy of the scheme; however, it does not impact the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. The tests are conducted on the median duals of a quadrilateral and a triangular primal mesh, in two dimensions, and on the median duals of a hexahedral, a wedge, and a tetrahedral primal mesh, in three dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice of nominal flux polyhedron, the NIFPA scheme exhibited accuracies ranging from zeroth to second order and calculation times that differed by orders of magnitude. For the nominal flux polyhedra which demonstrate second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to the traditional topologically complex second-order accurate VOF advection scheme.
Assessment of numerical techniques for unsteady flow calculations
NASA Technical Reports Server (NTRS)
Hsieh, Kwang-Chung
1989-01-01
The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected from the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagation, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is relatively more dissipative than the sixth-order central difference scheme. Among the various numerical approaches tested in this paper, the best-performing one is the Runge-Kutta method for time integration combined with sixth-order central differencing for spatial discretization.
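The Fourier error analysis mentioned above compares each scheme's modified wavenumber against exact differentiation; a short sketch (Python, for standard second- and sixth-order central differences, which are analogues of, rather than exactly, the paper's schemes) is:

import numpy as np

theta = np.linspace(0.0, np.pi, 200)  # scaled wavenumber k*dx

# Modified wavenumbers: exact differentiation corresponds to theta itself.
cd2 = np.sin(theta)
cd6 = (45.0 * np.sin(theta) - 9.0 * np.sin(2.0 * theta)
       + np.sin(3.0 * theta)) / 30.0

# Dispersion error of each scheme, normalized by pi; CD-6 follows the
# exact line much further into the high-wavenumber range than CD-2.
err_cd2 = np.abs(cd2 - theta) / np.pi
err_cd6 = np.abs(cd6 - theta) / np.pi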
NASA Astrophysics Data System (ADS)
Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu
2015-06-01
We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's location with a certain probability. Compared with previously existing schemes, the success probability of the current schemes is greatly increased. Moreover, the required classical communication cost is calculated as well. Further, several attractive discussions on the properties of the presented schemes, including the success probability and reducibility, are made. Remarkably, the proposed schemes can be achieved with unit total success probability when the employed channels are reduced to maximally entangled ones.
Nagy-Soper subtraction scheme for multiparton final states
NASA Astrophysics Data System (ADS)
Chung, Cheng-Han; Robens, Tania
2013-04-01
In this work, we present the extension of an alternative subtraction scheme for next-to-leading order QCD calculations to the case of an arbitrary number of massless final state partons. The scheme is based on the splitting kernels of an improved parton shower and comes with a reduced number of final state momentum mappings. While a previous publication including the setup of the scheme has been restricted to cases with maximally two massless partons in the final state, we here provide the final state real emission and integrated subtraction terms for processes with any number of massless partons. We apply our scheme to three jet production at lepton colliders at next-to-leading order and present results for the differential C parameter distribution.
NASA Technical Reports Server (NTRS)
Myhill, Elizabeth A.; Boss, Alan P.
1993-01-01
In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order accuracy, finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y; Li, Y; Tian, Z
2015-06-15
Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%-6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both the MC-based beamlet dose calculation and the treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
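The iterate-between-MC-and-optimization structure can be sketched as below; the toy dose-influence matrix, the history-dependent noise model, and the projected-gradient optimizer are illustrative stand-ins, not the authors' GPU implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_blt = 50, 10
D_true = rng.random((n_vox, n_blt)) * 0.1   # "exact" dose-influence matrix (toy)
target = np.full(n_vox, 2.0)                # prescribed voxel doses (toy)

fluence = np.ones(n_blt)
for outer in range(5):
    # MC stage: histories per beamlet proportional to its current fluence,
    # so per-beamlet statistical noise shrinks as 1/sqrt(histories);
    # a small floor keeps every beamlet minimally sampled
    f = np.maximum(fluence, 1e-2)
    histories = 1e4 * f / f.sum()
    noise = rng.standard_normal((n_vox, n_blt)) / np.sqrt(histories)
    D_mc = D_true * (1.0 + 0.05 * noise)
    # optimization stage: nonnegative least squares via projected gradient
    for _ in range(300):
        grad = D_mc.T @ (D_mc @ fluence - target)
        fluence = np.maximum(fluence - 0.5 * grad, 0.0)
print(np.abs(D_true @ fluence - target).mean())
```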
Méndez-López, María Elena; García-Frapolli, Eduardo; Pritchard, Diana J; Sánchez González, María Consuelo; Ruiz-Mallén, Isabel; Porter-Bolland, Luciana; Reyes-Garcia, Victoria
2014-12-01
In Mexico, biodiversity conservation is primarily implemented through three schemes: 1) protected areas, 2) payment-based schemes for environmental services, and 3) community-based conservation, officially recognized in some cases as Indigenous and Community Conserved Areas. In this paper we compare levels of local participation across conservation schemes. Through a survey administered to 670 households across six communities in Southeast Mexico, we document local participation during the creation, design, and implementation of the management plan of different conservation schemes. To analyze the data, we first calculated the frequency of participation at the three stages mentioned, then created a participation index that characterizes the presence and relative intensity of local participation for each conservation scheme. Results showed that there is a low level of local participation across all the conservation schemes explored in this study. Nonetheless, payment for environmental services had the highest local participation while protected areas had the least. Our findings suggest that local participation in biodiversity conservation schemes is not a predictable outcome of a specific (community-based) model, implying that other factors might be important in determining local participation. This has implications for future strategies that seek to encourage local involvement in conservation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra
2016-01-01
The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher. However, here too a straightforward recipe making use of so-called reserve factors is provided for implementing the branches into the cyclic scheme, thus enabling a simple treatment of such cases as well.
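The recipe itself is analytic, but its output can be cross-checked numerically: for a cyclic scheme, the steady-state occupation probabilities and net cyclic flux follow from the null space of the rate matrix. A sketch for a three-state cycle with arbitrary illustrative rate constants:

```python
import numpy as np

# rate constants k[i][j]: transition i -> j in a 3-state cycle 0 -> 1 -> 2 -> 0
k = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 3.0],
              [4.0, 0.2, 0.0]])

K = k.T - np.diag(k.sum(axis=1))   # master-equation generator, columns sum to 0
A = np.vstack([K, np.ones(3)])     # append the normalization sum(p) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)   # steady-state occupations
J = k[0, 1]*p[0] - k[1, 0]*p[1]             # net cyclic flux through edge 0 -> 1
print(p, J)
```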
NASA Astrophysics Data System (ADS)
Cai, Kaicong; Zheng, Xuan; Du, Fenfen
2017-08-01
The spectroscopy of amide-I vibrations has been widely utilized for understanding the dynamical structure of polypeptides. For the modeling of amide-I spectra, two frequency maps (a molecular-mechanics-force-field-based GM map and a DFT-calculation-based GD map) were built for the β-peptide analogue N-ethylpropionamide (NEPA) in a number of solvents. The electrostatic potentials on the amide unit originating from the solvents and the peptide backbone were correlated to the amide-I frequency shift from gas phase to solution phase during map parameterization. The GM map is easier to construct, with negligible computational cost, since the frequency calculations for the samples are purely based on the force field, while the GD map utilizes sophisticated DFT calculations on representative solute-solvent clusters and brings insight into the electronic structures of solvated NEPA and its chemical environments. The results show that the frequencies predicted by the maps are sensitive to the solvation environment and exhibit specific characters reflecting the map protocols, and the obtained vibrational parameters are in satisfactory agreement with experimental amide-I spectra of NEPA in solution phase. Although maps based on different theoretical schemes have their advantages and disadvantages, the present maps show their potential in interpreting the amide-I spectra of β-peptides.
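The map-building step reduces, in essence, to regressing amide-I frequency shifts on the electrostatic potentials at sites of the amide unit; a minimal sketch with synthetic data, where the four-site choice and the coefficients are hypothetical placeholders:

```python
import numpy as np

# illustrative data: rows = sampled solute-solvent configurations, columns =
# electrostatic potentials phi at four assumed amide-unit map sites (C, O, N, H)
rng = np.random.default_rng(1)
phi = rng.standard_normal((500, 4))
true_c = np.array([1200.0, -800.0, 300.0, -150.0])   # hypothetical coefficients
dw = phi @ true_c + rng.normal(0.0, 2.0, 500)        # frequency shifts (cm^-1)

X = np.hstack([phi, np.ones((500, 1))])   # fit: shift = phi . c + offset
coef, *_ = np.linalg.lstsq(X, dw, rcond=None)
print(coef)                               # recovered map coefficients and offset
```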
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve the computational efficiency and to expand the problem scale is of great significance to research on the microstructure of the material. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU node counts on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase-field model, achieving a speedup of 13 over a single GPU, and the problem scale has been expanded to 819³. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization performs better, being 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
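A minimal sketch of the overlap-of-communication-and-computation idea, using mpi4py and a 1D diffusion update in place of the 3D CUDA phase-field kernels; the slab decomposition with one ghost cell per side is an illustrative assumption:

```python
# run with e.g.: mpirun -n 4 python halo_overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # periodic ring of slabs

n, nu = 4096, 0.25
u = np.random.rand(n + 2)          # u[1:n+1] interior, u[0] and u[n+1] ghosts
unew = np.empty_like(u)
recv_l, recv_r = np.empty(1), np.empty(1)

for step in range(50):
    reqs = [comm.Irecv(recv_l, source=left, tag=1),
            comm.Irecv(recv_r, source=right, tag=0),
            comm.Isend(u[1:2], dest=left, tag=0),        # my left edge
            comm.Isend(u[n:n + 1], dest=right, tag=1)]   # my right edge
    # overlap: update the ghost-independent bulk while messages are in flight
    unew[2:n] = u[2:n] + nu * (u[1:n - 1] - 2*u[2:n] + u[3:n + 1])
    MPI.Request.Waitall(reqs)
    u[0], u[n + 1] = recv_l[0], recv_r[0]                # fill ghost cells
    unew[1] = u[1] + nu * (u[0] - 2*u[1] + u[2])         # boundary cells last
    unew[n] = u[n] + nu * (u[n - 1] - 2*u[n] + u[n + 1])
    u, unew = unew, u
```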
Calculating the costs of work-based training: the case of NHS Cadet Schemes.
Norman, Ian; Normand, Charles; Watson, Roger; Draper, Jan; Jowett, Sandra; Coster, Samantha
2008-09-01
The worldwide shortage of registered nurses [Buchan, J., Calman, L., 2004. The Global Shortage of Registered Nurses: An Overview of Issues And Actions. International Council of Nurses, Geneva] points to the need for initiatives which increase access to the profession, in particular for those sections of the population who traditionally do not enter nursing. This paper reports findings on the costs associated with one such initiative, the British National Health Service (NHS) Cadet Scheme, designed to provide a mechanism for entry into nurse training for young people without conventional academic qualifications. The paper illustrates an approach to costing work-based learning interventions which offsets the value attributed to trainees' work against their training costs. Its aim is to provide a preliminary evaluation of the cost of the NHS Cadet Scheme initiative. A questionnaire survey of the leaders of all cadet schemes in England (n=62, 100% response) was conducted in December 2002 to collect financial information and data on the progression of cadets through the scheme, together with a follow-up questionnaire survey of the same scheme leaders to improve the quality of information, completed in January 2004 (n=56, 59% response). The mean cost of producing a cadet who progresses successfully through the scheme and onto a pre-registration nursing programme depends substantially on the value of their contribution to healthcare work during training and the progression rate of students through the scheme. The findings from this evaluation suggest that these factors varied very widely across the 62 schemes. Established schemes have, on average, lower attrition and higher progression rates than more recently established schemes. Using these rates, we estimate that on maturity, a cadet scheme will progress approximately 60% of students into pre-registration nurse training. As comparative information was not available from similar initiatives that provide access to nurse training, it was not possible to calculate the cost effectiveness of NHS Cadet Schemes. However, this study does show that the cadet schemes with the potential to offer better value for money are those where progression rates are good and where the practical training of cadets is organised such that cadets meet the needs of patients which might otherwise have to be met by non-professionally qualified staff.
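The costing logic, offsetting the value of cadets' work against training costs and dividing by the progression rate, can be captured in a few lines; the figures below are hypothetical, not from the survey:

```python
def cost_per_entrant(training_cost, work_value, progression_rate):
    """Net scheme cost per cadet progressing into pre-registration training.

    Illustrative only: offsets the value of cadets' healthcare work against
    their training cost, in the spirit of the costing approach above.
    """
    return (training_cost - work_value) / progression_rate

# hypothetical figures for a mature scheme progressing ~60% of cadets
print(cost_per_entrant(training_cost=12000.0, work_value=5000.0,
                       progression_rate=0.60))
```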
NASA Astrophysics Data System (ADS)
Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2010-10-01
Mukherjee-type (Mk) state-specific (SS) multi-reference (MR) coupled-cluster (CC) calculations of 1,n-didehydropolyene diradicals were carried out to elucidate singlet-triplet energy gaps via through-bond coupling between terminal radicals. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations of these diradicals were also performed. Comparison between symmetry-adapted MkMRCC and broken-symmetry (BS) UHF-CC computational results indicated that spin-contamination error of the UHF-CC solutions remains at the SD level, although this error had been thought to be negligible for CC schemes in general. In order to eliminate the spin-contamination error, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed eliminated the error to yield good agreement with MRCC in energy. CCD with spin-unrestricted Brueckner orbitals (UB) was also employed for these polyene diradicals, showing that the large spin-contamination errors of the UHF solutions are dramatically improved, and therefore the AP scheme for UBD easily removed the remaining spin contamination. Pure- and hybrid-density functional theory (DFT) calculations of the species were also performed. Three different computational schemes for the total spin angular momenta were examined for the AP correction of the hybrid DFT. The AP DFT calculations yielded singlet-triplet energy gaps that were in good agreement with those of MRCC, AP UHF-CC and AP UB-CC. Chemical indices such as the diradical character were calculated with all these methods. Implications of the present computational results are discussed in relation to previous RMRCC calculations of diradical species and BS calculations of large exchange-coupled systems.
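For reference, approximate spin projection is commonly applied in the Yamaguchi form; a hedged sketch of the working equations (BS = broken-symmetry low-spin solution, T = triplet/high-spin solution; this is the common form, not necessarily the exact variant used in the paper):

```latex
J_{ab} = \frac{E_{\mathrm{BS}} - E_{\mathrm{T}}}
             {\langle S^2 \rangle_{\mathrm{T}} - \langle S^2 \rangle_{\mathrm{BS}}},
\qquad
E_{\mathrm{S}}^{\mathrm{AP}} = E_{\mathrm{BS}}
  + \frac{\langle S^2 \rangle_{\mathrm{BS}}}
         {\langle S^2 \rangle_{\mathrm{T}} - \langle S^2 \rangle_{\mathrm{BS}}}
    \,\bigl(E_{\mathrm{BS}} - E_{\mathrm{T}}\bigr)
```

The projected low-spin energy removes the triplet admixture from the BS solution using the computed expectation values of the total spin, which is why the choice of scheme for those expectation values matters in the DFT case discussed above.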
NASA Astrophysics Data System (ADS)
Oyama, Takuro; Ikabata, Yasuhiro; Seino, Junji; Nakai, Hiromi
2017-07-01
This Letter proposes a density functional treatment based on the two-component relativistic scheme at the infinite-order Douglas-Kroll-Hess (IODKH) level. The exchange-correlation energy and potential are calculated using the electron density based on the picture-change-corrected density operator transformed by the IODKH method. Numerical assessments indicated that the picture-change-uncorrected density functional terms generate significant errors, on the order of hartrees for heavy atoms. The present scheme was found to reproduce the energetics of the four-component treatment with high accuracy.
Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng
2016-10-14
In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.
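The redundancy degree of a node, the fraction of its sensing area also covered by neighbors, can be estimated by Monte Carlo sampling; a minimal sketch in which disk-shaped sensing areas and the node positions are illustrative assumptions:

```python
import numpy as np

def redundancy_degree(node, neighbors, r_sense, n_samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of `node`'s sensing disk that is
    also covered by at least one neighbor (illustrative stand-in for the
    redundancy-degree computation described above)."""
    rng = np.random.default_rng(seed)
    # uniform samples inside the node's sensing disk
    ang = rng.uniform(0.0, 2.0*np.pi, n_samples)
    rad = r_sense * np.sqrt(rng.uniform(0.0, 1.0, n_samples))
    pts = node + np.c_[rad*np.cos(ang), rad*np.sin(ang)]
    covered = np.zeros(n_samples, dtype=bool)
    for nb in neighbors:
        covered |= np.linalg.norm(pts - nb, axis=1) <= r_sense
    return covered.mean()

print(redundancy_degree(np.array([0.0, 0.0]),
                        [np.array([5.0, 0.0]), np.array([-4.0, 3.0])],
                        r_sense=10.0))
```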
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as transonic small disturbance (TSD) analyses, transonic full-potential-equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for accelerating a Newton-iteration time-marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; preliminary calculations indicate that this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
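A minimal sketch of the Newton/GMRES coupling described above, using a matrix-free (finite-difference) Jacobian-vector product and SciPy's GMRES on a toy 1D nonlinear boundary-value residual standing in for the compressible-flow residual:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # toy nonlinear residual: discrete u'' - 0.1*u^2 with u(0)=1, u(1)=0
    r = np.empty_like(u)
    r[0], r[-1] = u[0] - 1.0, u[-1]
    r[1:-1] = u[2:] - 2*u[1:-1] + u[:-2] - 0.1*u[1:-1]**2
    return r

u, eps = np.zeros(64), 1e-7
for k in range(20):                       # Newton time-marching outer loop
    r = residual(u)
    if np.linalg.norm(r) < 1e-10:
        break
    # matrix-free Jacobian-vector product via a finite-difference directional
    # derivative, so the Jacobian is never formed explicitly
    J = LinearOperator((u.size, u.size),
                       matvec=lambda v: (residual(u + eps*v) - residual(u)) / eps)
    du, info = gmres(J, -r, atol=1e-12)   # inner Krylov (GMRES) solve
    u += du
print(k, np.linalg.norm(residual(u)))
```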
Multigrid calculation of three-dimensional turbomachinery flows
NASA Technical Reports Server (NTRS)
Caughey, David A.
1989-01-01
Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Chuanghong
2018-02-01
As a sustainable form of ecological structure, green building is nowadays increasingly advocated and receives widespread attention in society. In the survey and design phase of preliminary project construction, evaluating and selecting the green building design scheme in accordance with a scientific and reasonable evaluation index system can largely and effectively improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed taking into account the evaluation contents related to the green building design scheme. We organized experts experienced in construction scheme optimization to score the indices and determine the weight of each evaluation index through the AHP method. The correlation degree between each candidate scheme and the ideal scheme was calculated using a multilevel grey relational analysis model, and the optimal scheme was then determined. The feasibility and practicability of the evaluation method are verified with examples.
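The grey relational step can be sketched compactly: each candidate scheme is compared index-by-index against the ideal scheme, and the weighted grey relational degree ranks the candidates. The scores, AHP-style weights, and distinguishing coefficient below are illustrative assumptions:

```python
import numpy as np

# rows = candidate design schemes, columns = normalized index scores in [0, 1]
X = np.array([[0.8, 0.6, 0.9, 0.7],
              [0.9, 0.5, 0.7, 0.8],
              [0.7, 0.9, 0.6, 0.6]])
w = np.array([0.4, 0.2, 0.25, 0.15])   # expert weights from AHP (illustrative)

ideal = X.max(axis=0)                  # ideal scheme: best value of each index
d = np.abs(X - ideal)                  # absolute deviations from the ideal
rho = 0.5                              # distinguishing coefficient
xi = (d.min() + rho*d.max()) / (d + rho*d.max())   # grey relational coefficients
gamma = xi @ w                         # weighted grey relational degree
print(gamma, "best scheme:", int(np.argmax(gamma)))
```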
NASA Astrophysics Data System (ADS)
Mueller, R. W.; Beyer, H. G.; Cros, S.; Dagestad, K. F.; Dumortier, D.; Ineichen, P.; Hammer, A.; Heinemann, D.; Kuhlemann, R.; Olseth, J. A.; Piernavieja, G.; Reise, C.; Schroedter, M.; Skartveit, A.; Wald, L.
1-University of Oldenburg, 2-University of Appl. Sciences Magdeburg, 3-Ecole des Mines de Paris, 4-University of Bergen, 5-Ecole Nationale des Travaux Publics de l'Etat, 6-University of Geneva, 7-Instituto Tecnologico de Canarias, 8-Fraunhofer Institute for Solar Energy Systems, 9-German Aerospace Center
Geostationary satellites such as Meteosat provide cloud information with a high spatial and temporal resolution. Such satellites are therefore not only useful for weather forecasting, but also for the estimation of solar irradiance, since the knowledge of the light reflected by clouds is the basis for the calculation of the transmitted light. Additionally, knowledge of the atmospheric parameters involved in scattering and absorption of the sunlight is necessary for an accurate calculation of the solar irradiance. An accurate estimation of the downward solar irradiance is not only of particular importance for the assessment of the radiative forcing of the climate system, but also necessary for efficient planning and operation of solar energy systems. Currently, most of the operational calculation schemes for solar irradiance are semi-empirical. They use cloud information from the current Meteosat satellite and climatologies of atmospheric parameters, e.g. turbidity (aerosols and water vapor). The Meteosat Second Generation satellites (MSG, to be launched in 2002) will provide not only a higher spatial and temporal resolution, but also the potential for the retrieval of atmospheric parameters such as ozone, water vapor and, with restrictions, aerosols. With this more detailed knowledge of atmospheric parameters, it is natural to set up a new calculation scheme based on radiative transfer models using the retrieved atmospheric parameters as input. Unfortunately, the possibility of deriving aerosol information from MSG data is limited. As a consequence, the use of data from additional satellite instruments (e.g. GOME/ATSR-2) is needed. Within this presentation a new type of solar irradiance calculation scheme is described. It is based on the integrated use of a radiative transfer model (RTM), where the information on the atmospheric parameters retrieved from satellites (MSG and GOME/ATSR-2) is used as input for the RTM. First comparisons between calculated and measured solar irradiance are presented. The improvements linked with the usage of the new calculation scheme are discussed, taking into account the benefits and limitations of the new method and the MSG satellite.
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods previously developed for grid refinement have suffered from certain drawbacks, for example, deficiencies in the implemented interpolation technique; non-reciprocity in head or flow calculations; lack of accuracy resulting from high truncation errors; and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows potential for application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
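The paper's exact LIFPMS formula is not reproduced here, but one simple way to see how link importance enters a failure-probability measure, assuming independent link failures with probability p_l, is via the expected fraction of destinations D disconnected from light-tree T:

```latex
\mathbb{E}\bigl[\mathrm{loss}(T)\bigr]
  = \frac{1}{|D|} \sum_{d \in D}
    \Bigl(1 - \prod_{l \in \mathrm{path}(d)} \bigl(1 - p_l\bigr)\Bigr)
```

Links traversed by many destination paths automatically carry more weight in this expression, which is the intuition behind importance-incorporated measures.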
NASA Astrophysics Data System (ADS)
Zhou, Ping; Lin, Hui; Zhang, Qi
2018-01-01
The reference source system is a key factor in ensuring successful location of a satellite interference source. Traditionally, the system used a mechanical rotating antenna, which led to the disadvantages of slow rotation and a high failure rate; this seriously restricted the system's positioning timeliness and became its obvious weakness. In this paper, a multi-beam antenna scheme based on a horn array is proposed as a reference source for satellite interference location, to be used as an alternative to the traditional reference source antenna. The new scheme designs a small circularly polarized horn antenna as an element and proposes a multi-beamforming algorithm based on a planar array. Moreover, simulation analyses of the horn antenna pattern, the multi-beamforming algorithm and the simulated satellite-link cross-ambiguity calculation have been carried out respectively. Finally, the cross-ambiguity calculation of the traditional reference source system has also been tested. The comparison between the computer simulation results and the actual test results shows that the scheme is scientific and feasible, and obviously superior to the traditional reference source system.
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. Then the scatter is added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
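One plausible reading of the kernel construction, with the within-class scatter reshaping the RBF distance (Mahalanobis-style) and the SVM consuming the result as a precomputed kernel with posterior-probability output; the synthetic features below merely stand in for CSP features, and the exact way the scatter enters the paper's kernel may differ:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(1.5, 1.0, (50, 4))])
y = np.repeat([0, 1], 50)

# within-class scatter of the (CSP-like) features
Sw = sum(np.cov(X[y == c].T) for c in (0, 1))
Swi = np.linalg.inv(Sw)

def kf_kernel(A, B):
    # RBF with the distance reshaped by the within-class scatter
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, Swi, d))

clf = SVC(kernel='precomputed', probability=True).fit(kf_kernel(X, X), y)
proba = clf.predict_proba(kf_kernel(X, X))   # posterior-probability output
print(proba[:3])
```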
AVQS: attack route-based vulnerability quantification scheme for smart grid.
Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik
2014-01-01
A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. As a result, a smart grid system has potential security threats arising from its network connectivity. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that network connectivity needs to be considered for more optimized vulnerability quantification.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
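Arc equidistribution in one dimension amounts to redistributing nodes so that each interval carries equal arc-length-like weight; a minimal sketch, where the weight function, stretching parameter, and test profile are illustrative choices:

```python
import numpy as np

def equidistribute(x, f, alpha=1.0):
    """Redistribute grid nodes so the arc-length-like weight
    w = sqrt(1 + (alpha * df/dx)^2) is equal on every interval
    (illustrative sketch of the equidistribution principle)."""
    w = np.sqrt(1.0 + (alpha * np.gradient(f, x))**2)
    # cumulative weight (trapezoidal rule), then invert it at equal targets
    s = np.concatenate([[0.0], np.cumsum(0.5*(w[1:] + w[:-1])*np.diff(x))])
    targets = np.linspace(0.0, s[-1], x.size)
    return np.interp(targets, s, x)

x = np.linspace(0.0, 1.0, 41)
f = np.tanh(20.0*(x - 0.5))        # sharp gradient mimicking a shock
x_new = equidistribute(x, f, alpha=5.0)
print(np.diff(x_new).min(), np.diff(x_new).max())   # clustering near x = 0.5
```

Because the mapping is purely algebraic, no differential equation is solved during adaptation, which is the efficiency point made in the abstract.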
NASA Astrophysics Data System (ADS)
Xie, Dexuan
2014-10-01
The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.
NASA Astrophysics Data System (ADS)
Seino, Junji; Kageyama, Ryo; Fujinami, Mikito; Ikabata, Yasuhiro; Nakai, Hiromi
2018-06-01
A semi-local kinetic energy density functional (KEDF) was constructed based on machine learning (ML). The present scheme adopts electron densities and their gradients up to third-order as the explanatory variables for ML and the Kohn-Sham (KS) kinetic energy density as the response variable in atoms and molecules. Numerical assessments of the present scheme were performed in atomic and molecular systems, including first- and second-period elements. The results of 37 conventional KEDFs with explicit formulae were also compared with those of the ML KEDF with an implicit formula. The inclusion of the higher order gradients reduces the deviation of the total kinetic energies from the KS calculations in a stepwise manner. Furthermore, our scheme with the third-order gradient resulted in the closest kinetic energies to the KS calculations out of the presented functionals.
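The regression setup can be sketched as follows: grid-point features built from the density and its gradients, with the KS kinetic energy density as the target. The synthetic Thomas-Fermi-plus-von-Weizsäcker-like target below merely stands in for actual KS reference data, and the network size is an arbitrary choice:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# synthetic stand-in data: each row is one grid point; features are the
# density and (placeholder) gradient invariants of 1st-3rd order
rng = np.random.default_rng(0)
rho, g1, g2, g3 = rng.uniform(0.01, 1.0, (4, 5000))
X = np.c_[rho, g1, g2, g3]
# toy target: Thomas-Fermi term plus a von Weizsacker-like gradient term,
# standing in for the Kohn-Sham kinetic energy density
tau = 2.871 * rho**(5/3) + g1**2 / (8*rho)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                     random_state=0).fit(X, tau)
print(model.score(X, tau))   # R^2 of the learned KEDF on the training points
```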
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
NASA Astrophysics Data System (ADS)
Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin
2018-02-01
This paper presents a novel signal processing scheme, the feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features which reflect fault characteristics more effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on the diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in the detection of train axle bearing faults.
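The max-relevance min-redundancy selection step can be sketched greedily; absolute Pearson correlation is used below as a simple stand-in for the mutual-information measures typically used, and the feature matrix is synthetic:

```python
import numpy as np

def mrmr_select(F, y, k):
    """Greedy max-relevance min-redundancy selection over feature matrix F,
    with |Pearson correlation| as an illustrative dependence measure."""
    n = F.shape[1]
    relevance = np.array([abs(np.corrcoef(F[:, j], y)[0, 1]) for j in range(n)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            # penalize features redundant with those already selected
            redundancy = np.mean([abs(np.corrcoef(F[:, j], F[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
F = rng.standard_normal((200, 30))   # e.g., 30 vibration feature indicators
y = F[:, 3] + 0.5*F[:, 17] + rng.standard_normal(200)
print(mrmr_select(F, y, k=5))
```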
Tajeddine, Ayman; Kayssi, Ayman; Chehab, Ali; Elhajj, Imad; Itani, Wassim
2015-02-02
In this paper, we present CENTERA, a CENtralized Trust-based Efficient Routing protocol with an appropriate authentication scheme for wireless sensor networks (WSN). CENTERA utilizes the more powerful base station (BS) to gather minimal neighbor trust information from nodes and calculate the best routes after isolating different types of "bad" nodes. By periodically accumulating these simple local observations and approximating the nodes' battery lives, the BS draws a global view of the network, calculates three quality metrics-maliciousness, cooperation, and compatibility-and evaluates the Data Trust and Forwarding Trust values of each node. Based on these metrics, the BS isolates "bad", "misbehaving" or malicious nodes for a certain period, and put some nodes on probation. CENTERA increases the node's bad/probation level with repeated "bad" behavior, and decreases it otherwise. Then it uses a very efficient method to distribute the routing information to "good" nodes. Based on its target environment, and if required, CENTERA uses an authentication scheme suitable for severely constrained nodes, ranging from the symmetric RC5 for safe environments under close administration, to pairing-based cryptography (PBC) for hostile environments with a strong attacker model. We simulate CENTERA using TOSSIM and verify its correctness and show some energy calculations.
Calculations of 3D compressible flows using an efficient low diffusion upwind scheme
NASA Astrophysics Data System (ADS)
Hu, Zongjun; Zha, Gecheng
2005-01-01
A newly suggested E-CUSP upwind scheme is employed for the first time to calculate 3D flows of propulsion systems. The E-CUSP scheme contains the total energy in the convective vector and is fully consistent with the characteristic directions. The scheme is proved to have low diffusion and high CPU efficiency. The computed cases in this paper include a transonic nozzle with circular-to-rectangular cross-section, a transonic duct with shock wave/turbulent boundary layer interaction, and a subsonic 3D compressor cascade. The computed results agree well with the experiments. The new scheme is proved to be accurate, efficient and robust for the 3D calculations of the flows in this paper.
Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1992-01-01
Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of Runge-Kutta (RK) methods and compact difference schemes for calculating the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on evaluations of the dissipative error. The objectives are to reduce the numerical damping and, at the same time, preserve numerical stability. While this approach has had tremendous success for steady flows, the numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, the phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes, which relate the phase velocity and wave number, may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersive correlations of various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, criteria are derived for three- and four-step RK methods to be third- and fourth-order time accurate for nonlinear equations, e.g., the flow equations. These criteria are then applied to commonly used RK methods such as Jameson's 3-step and 4-step schemes and Wray's algorithm to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for Fourier analysis. The performance of the numerical methods is shown by numerical examples, which are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations coupled with the characteristic equations as the boundary condition is discussed in detail. Finally, the single-vortex calculation is extended to simulate vortex pairing. For distances between two vortices less than a threshold value, numerical results show crisp resolution of the vortex merging.
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. For realization of the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
NASA Astrophysics Data System (ADS)
Agrawal, Anuj; Bhatia, Vimal; Prakash, Shashi
2018-01-01
Efficient utilization of spectrum is a key concern in the soon to be deployed elastic optical networks (EONs). To perform routing in EONs, various fixed routing (FR) and fixed-alternate routing (FAR) schemes are ubiquitously used. FR and FAR schemes calculate a fixed route, and a prioritized list of alternate routes, respectively, between different pairs of origin o and target t nodes in the network. The route calculation performed using FR and FAR schemes is predominantly based either on the physical distance, known as k-shortest paths (KSP), or on the hop count (HC). For survivable optical networks, FAR usually calculates link-disjoint (LD) paths. These conventional routing schemes have been used efficiently for decades in communication networks. However, in this paper, it is demonstrated that these commonly used routing schemes cannot utilize the network spectral resources optimally in the newly introduced EONs. Thus, we propose a new routing scheme for EONs, namely k-distance-adaptive paths (KDAP), that efficiently exploits the distance-adaptive modulation and bit-rate-adaptive superchannel capability inherent to EONs to improve spectrum utilization. In the proposed KDAP, routes are found and prioritized on the basis of bit rate, distance, spectrum granularity, and the number of links used for a particular route. To evaluate the performance of KSP, HC, LD, and the proposed KDAP, simulations have been performed for three different sized networks, namely, a 7-node test network (TEST7), NSFNET, and a 24-node US backbone network (UBN24). We comprehensively assess the performance of the various conventional routing schemes and the proposed one by solving both the RSA and the dual RSA problems under homogeneous and heterogeneous traffic requirements. Simulation results demonstrate that there is variation among the performance of KSP, HC, and LD, depending on the o-t pair and on the network topology and its connectivity. However, the proposed KDAP always performs better for all the considered networks and traffic scenarios as compared to the conventional routing schemes, namely KSP, HC, and LD. The proposed KDAP achieves up to 60% and 10.46% improvement in terms of spectrum utilization and resource utilization ratio, respectively, over the conventional routing schemes.
Dahlgren, Björn; Reif, Maria M; Hünenberger, Philippe H; Hansen, Niels
2012-10-09
The raw ionic solvation free energies calculated on the basis of atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [Kastenholz, M. A.; Hünenberger, P. H. J. Chem. Phys. 2006, 124, 224501 and Reif, M. M.; Hünenberger, P. H. J. Chem. Phys. 2011, 134, 144104], the application of an appropriate correction scheme allows for a conversion of the methodology-dependent raw data into methodology-independent results. In this work, methodology-independent derivative thermodynamic hydration and aqueous partial molar properties are calculated for the Na+ and Cl− ions at P° = 1 bar and T° = 298.15 K, based on the SPC water model and on ion-solvent Lennard-Jones interaction coefficients previously reoptimized against experimental hydration free energies. The hydration parameters considered are the hydration free energy and enthalpy. The aqueous partial molar parameters considered are the partial molar entropy, volume, heat capacity, volume-compressibility, and volume-expansivity. Two alternative calculation methods are employed to access these properties. Method I relies on the difference in average volume and energy between two aqueous systems involving the same number of water molecules, either in the absence or in the presence of the ion, along with variations of these differences corresponding to finite pressure or/and temperature changes. Method II relies on the calculation of the hydration free energy of the ion, along with variations of this free energy corresponding to finite pressure or/and temperature changes. Both methods are used considering two distinct variants in the application of the correction scheme. In variant A, the raw values from the simulations are corrected after the application of finite differences in pressure or/and temperature, based on correction terms specifically designed for derivative parameters at P° and T°. In variant B, these raw values are corrected prior to differentiation, based on corresponding correction terms appropriate for the different simulation pressures P and temperatures T. The results corresponding to the different calculation schemes show that, except for the hydration free energy itself, accurate methodological independence and quantitative agreement with even the most reliable experimental parameters (ion-pair properties) are not yet reached. Nevertheless, approximate internal consistency and qualitative agreement with experimental results can be achieved, but only when an appropriate correction scheme is applied, along with a careful consideration of standard-state issues. In this sense, the main merit of the present study is to set a clear framework for these types of calculations and to point toward directions for future improvements, with the ultimate goal of reaching a consistent and quantitative description of single-ion hydration thermodynamics in molecular dynamics simulations.
Verification of kinetic schemes of hydrogen ignition and combustion in air
NASA Astrophysics Data System (ADS)
Fedorov, A. V.; Fedorova, N. N.; Vankova, O. S.; Tropin, D. A.
2018-03-01
Three chemical kinetic models for hydrogen combustion in oxygen and three gas-dynamic models for reactive mixture flow behind the front of the initiating shock wave (SW) were analyzed. The calculated results were compared with experimental data on the dependence of the ignition delay on the temperature and on the dilution of the mixture with argon or nitrogen. Based on detailed kinetic mechanisms of nonequilibrium chemical transformations, a mathematical technique for describing the ignition and combustion of hydrogen in air was developed using the ANSYS Fluent code. The problem of ignition of a hydrogen jet fed coaxially into a supersonic flow was solved numerically. The calculations were carried out using the Favre-averaged Navier-Stokes equations for a multi-species gas taking into account chemical reactions, combined with the k-ω SST turbulence model. The problem was solved in several steps. In the first step, verification of the calculated and experimental data for the three kinetic schemes was performed without considering the conicity of the flow. In the second step, parametric calculations were performed to determine the influence of the conicity of the flow on the mixing and ignition of hydrogen in air using a kinetic scheme consisting of 38 reactions. Three conical supersonic nozzles for a Mach number M = 2 with different expansion angles β = 4°, 4.5°, and 5° were considered.
Benchmarking of calculation schemes in APOLLO2 and COBAYA3 for VVER lattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheleva, N.; Ivanov, P.; Todorova, G.
This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2 generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses transport corrected multi-group diffusion approximation with interface discontinuity factors of Generalized Equivalence Theory (GET) or Black Box Homogenization (BBH) type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors. (authors)
Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.
Khoromskaia, Venera; Khoromskij, Boris N
2015-12-21
We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice exhibits computational work linear in L, i.e., O(L), instead of the usual O(L³ log L) scaling of Ewald-type approaches.
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
Trusted measurement model based on multitenant behaviors.
Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng
2014-01-01
With the fast growth of pervasive computing, and especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is urgently needed to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme which captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement, in which decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of calculating similarity for deviation control, which fits the coupled multitenants under study well; lastly, we design experiments to test our scheme.
Trusted Measurement Model Based on Multitenant Behaviors
Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng
2014-01-01
With the rapid growth of pervasive computing, and cloud computing in particular, behaviour measurement is central and plays a vital role. A new behaviour measurement tailored to multitenants in cloud computing is urgently needed to establish trust relationships on a fundamental level. Based on our previous research, we propose an improved trust relationship scheme that captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement, in which the decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of calculating similarity for deviation control, which fits the coupled multitenants under study well; lastly, we design experiments to test our scheme. PMID:24987731
Modelling of DNA-protein recognition
NASA Technical Reports Server (NTRS)
Rein, R.; Garduno, R.; Colombano, S.; Nir, S.; Haydock, K.; Macelroy, R. D.
1980-01-01
Computer model-building procedures using stereochemical principles together with theoretical energy calculations appear to be, at this stage, the most promising route toward the elucidation of DNA-protein binding schemes and recognition principles. A review of models and bonding principles is conducted and approaches to modeling are considered, taking into account possible di-hydrogen-bonding schemes between a peptide and a base (or a base pair) of a double-stranded nucleic acid in the major groove, aspects of computer graphic modeling, and a search for isogeometric helices. The energetics of recognition complexes is discussed and several models for peptide DNA recognition are presented.
NASA Astrophysics Data System (ADS)
Wen, Jing; Ma, Haibo
2017-07-01
For computing the intra-chain excitonic couplings in polymeric systems, we propose a new fragmentation approach. A comparison of the energetic and spatial properties of the low-lying excited states in PPV between our scheme and full quantum chemical calculations reveals that our scheme can nicely reproduce full quantum chemical results in weakly coupled systems. Further wavefunction analysis indicates that an improved description of strongly coupled systems can be achieved by including higher excited states within each fragment. Our proposed scheme helps build the bridge linking phenomenological descriptions of excitons and microscopic modeling of realistic polymers.
NASA Astrophysics Data System (ADS)
Hagiwara, Yohsuke; Ohta, Takehiro; Tateno, Masaru
2009-02-01
An interface program connecting a quantum mechanics (QM) calculation engine, GAMESS, and a molecular mechanics (MM) calculation engine, AMBER, has been developed for QM/MM hybrid calculations. A protein-DNA complex is used as a test system to investigate the following two types of QM/MM schemes. In a 'subtractive' scheme, electrostatic interactions between QM/MM regions are truncated in QM calculations; in an 'additive' scheme, long-range electrostatic interactions within a cut-off distance from QM regions are introduced into one-electron integration terms of a QM Hamiltonian. In these calculations, 338 atoms are assigned as QM atoms using Hartree-Fock (HF)/density functional theory (DFT) hybrid all-electron calculations. By comparing the results of the additive and subtractive schemes, it is found that electronic structures are perturbed significantly by the introduction of MM partial charges surrounding QM regions, suggesting that biological processes occurring in functional sites are modulated by the surrounding structures. This also indicates that the effects of long-range electrostatic interactions involved in the QM Hamiltonian are crucial for accurate descriptions of electronic structures of biological macromolecules.
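For reference, the two coupling types can be summarized by the standard textbook energy expressions below; this is a generic formulation, and the precise Hamiltonian terms used in the GAMESS-AMBER interface may differ.

```latex
% Generic QM/MM energy expressions (not necessarily the exact Hamiltonians
% used in the interface described above):
\begin{align}
E^{\mathrm{sub}}_{\mathrm{QM/MM}} &= E_{\mathrm{MM}}(\mathrm{whole})
  - E_{\mathrm{MM}}(\mathrm{QM\ region}) + E_{\mathrm{QM}}(\mathrm{QM\ region}),\\
E^{\mathrm{add}}_{\mathrm{QM/MM}} &= E_{\mathrm{QM}}(\mathrm{QM\ region})
  + E_{\mathrm{MM}}(\mathrm{MM\ region}) + E_{\mathrm{QM-MM}},
\end{align}
% where, in the additive case, MM point charges enter the one-electron part
% of the QM Hamiltonian through terms of the form
% \sum_{A \in \mathrm{MM}} q_A / |\mathbf{r} - \mathbf{R}_A|.
```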
Mihailovic, Dragutin T; Alapaty, Kiran; Podrascanin, Zorica
2009-03-01
We improve the parameterization of processes in the atmospheric boundary layer (ABL) and surface layer in air quality and chemical transport models. To do so, an asymmetrical, convective, non-local scheme with varying upward mixing rates is combined with a non-local, turbulent kinetic energy scheme for vertical diffusion (COM). For its design, an empirically derived function depending on the fourth power of the dimensionless height in the ABL is suggested. We also suggest a new method for calculating the in-canopy resistance for dry deposition over a vegetated surface. The upward mixing rate out of the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. The vertical eddy diffusivity is parameterized using the mean turbulent velocity scale obtained by vertical integration within the ABL. The in-canopy resistance is calculated by integrating the inverse turbulent transfer coefficient inside the canopy from the effective ground roughness length to the canopy source height and, further, to the canopy height. This combination of schemes provides a less rapid mass transport out of the surface layer into other layers, during convective and non-convective periods, than other local and non-local schemes parameterizing mixing processes in the ABL. The suggested method for calculating the in-canopy resistance for dry deposition over a vegetated surface differs remarkably from the commonly used one, particularly over forest vegetation. In this paper, we studied the performance of the COM scheme and its impact on pollutant concentrations calculated with chemical and air-quality models, and compared it with a commonly used local eddy-diffusivity scheme. Simulated NO2 concentrations obtained with the COM scheme and the new parameterization of the in-canopy resistance are in general higher and closer to the observations than those obtained with the local eddy-diffusivity scheme (on the order of 15-22%). To examine the performance of the scheme, simulated and measured NO2 concentrations were compared for the years 1999 and 2002, over the entire domain of simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), into which the schemes were incorporated.
A fast iterative scheme for the linearized Boltzmann equation
NASA Astrophysics Data System (ADS)
Wu, Lei; Zhang, Jun; Liu, Haihu; Zhang, Yonghao; Reese, Jason M.
2017-06-01
Iterative schemes to find steady-state solutions to the Boltzmann equation are efficient for highly rarefied gas flows, but can be very slow to converge in the near-continuum flow regime. In this paper, a synthetic iterative scheme is developed to speed up the solution of the linearized Boltzmann equation by penalizing the collision operator L into the form L = (L + Nδh) - Nδh, where δ is the gas rarefaction parameter, h is the velocity distribution function, and N is a tuning parameter controlling the convergence rate. The velocity distribution function is first solved by the conventional iterative scheme, then it is corrected such that the macroscopic flow velocity is governed by a diffusion-type equation that is asymptotic-preserving in the Navier-Stokes limit. The efficiency of this new scheme is assessed by calculating the eigenvalue of the iteration, as well as solving for Poiseuille and thermal transpiration flows. We find that the fastest convergence of our synthetic scheme for the linearized Boltzmann equation is achieved when Nδ is close to the average collision frequency. The synthetic iterative scheme is significantly faster than the conventional iterative scheme in both the transition and the near-continuum gas flow regimes. Moreover, due to its asymptotic-preserving properties, the synthetic iterative scheme does not need high spatial resolution in the near-continuum flow regime, which makes it even faster than the conventional iterative scheme. Using this synthetic scheme, with the fast spectral approximation of the linearized Boltzmann collision operator, Poiseuille and thermal transpiration flows between two parallel plates, through channels of circular/rectangular cross sections and various porous media are calculated over the whole range of gas rarefaction. Finally, the flow of a Ne-Ar gas mixture is solved based on the linearized Boltzmann equation with the Lennard-Jones intermolecular potential for the first time, and the difference between these results and those using the hard-sphere potential is discussed.
NASA Technical Reports Server (NTRS)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
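The report replaces double interpolation with orthogonal-polynomial least-squares surface fits; the sketch below uses a plain monomial basis via an ordinary least-squares solve for brevity (orthogonal polynomials improve conditioning but fit the same surface). Function names and data are assumptions for the example.

```python
import numpy as np

def fit_surface(x, y, f, deg=3):
    """Least-squares polynomial surface fit f(x, y) ~ sum c_ij x^i y^j."""
    # Build the 2D Vandermonde-like design matrix (total degree <= deg)
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    def evaluate(xq, yq):
        return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, terms))
    return evaluate

# Replace table interpolation with one smooth, cheap evaluation:
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
f = 1.0 + 2.0 * x - y + 0.5 * x * y          # hypothetical tabulated data
surf = fit_surface(x, y, f)
print(surf(0.3, 0.7))
```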
NASA Astrophysics Data System (ADS)
Chepur, Petr; Tarasenko, Alexander; Gruchenkova, Alesya
2017-10-01
The paper focuses on the problem of estimating the stress-strain state (SSS) of vertical steel tanks (VSTs) with inadmissible geometric imperfections of the wall shape. The authors use an actual tank to demonstrate that certain design schemes can lead to gross errors and, accordingly, to unreliable results; obviously, such design schemes cannot be relied upon when choosing real repair technologies. For that reason, the authors performed calculations for a tank removed from service for repair, based on a finite-element model of the VST-5000 tank with a conical roof, developed for analyzing the SSS of a tank with geometric imperfections of the wall shape. Based on the results, it is proposed to amend the Annex A methodology “Method for calculating the stress-strain state of the tank wall during repair by lifting the tank and replacing the wall metal structures” by inserting a requirement to consider the actual stiffness of the entire VST structure and its roof when calculating the structure’s stress-strain state.
Benchmarking the Bethe–Salpeter Formalism on a Standard Organic Molecular Set
2015-01-01
We perform benchmark calculations of the Bethe–Salpeter vertical excitation energies for the set of 28 molecules constituting the well-known Thiel’s set, complemented by a series of small molecules representative of the dye chemistry field. We show that Bethe–Salpeter calculations based on a molecular orbital energy spectrum obtained with non-self-consistent G0W0 calculations starting from semilocal DFT functionals dramatically underestimate the transition energies. Starting from the popular PBE0 hybrid functional significantly improves the results even though this leads to an average −0.59 eV redshift compared to reference calculations for Thiel’s set. It is shown, however, that a simple self-consistent scheme at the GW level, with an update of the quasiparticle energies, not only leads to a much better agreement with reference values, but also significantly reduces the impact of the starting DFT functional. On average, the Bethe–Salpeter scheme based on self-consistent GW calculations comes close to the best time-dependent DFT calculations with the PBE0 functional with a 0.98 correlation coefficient and a 0.18 (0.25) eV mean absolute deviation compared to TD-PBE0 (theoretical best estimates) with a tendency to be red-shifted. We also observe that TD-DFT and the standard adiabatic Bethe–Salpeter implementation may differ significantly for states implying a large multiple excitation character. PMID:26207104
AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid
Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik
2014-01-01
A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this network connectivity, a smart grid system is exposed to potential security threats. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and of the existing common vulnerability scoring system clearly show that network connectivity must be considered for more optimized vulnerability quantification. PMID:25152923
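The abstract does not give the AVQS scoring formulas, so the sketch below is a purely hypothetical illustration of combining per-hop vulnerability scores along an attack route with an end-to-end security score; the combination rule, score ranges, and names are all assumptions.

```python
# Hypothetical illustration of route-based vulnerability scoring; the
# actual AVQS formulas are not given in the abstract above.
def route_vulnerability(hop_scores, e2e_security):
    """Combine per-hop vulnerability scores (0-10, CVSS-like) along an
    attack route and discount by an end-to-end security score (0-1)."""
    # Treat hops as independent barriers: convert scores to breach
    # probabilities, multiply along the route, then rescale to 0-10.
    p_breach = 1.0
    for s in hop_scores:
        p_breach *= s / 10.0
    return 10.0 * p_breach * (1.0 - e2e_security)

# Example: a three-hop route in an AMI-like topology
print(route_vulnerability([7.5, 6.0, 8.2], e2e_security=0.4))
```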
Analytical scheme calculations of angular momentum coupling and recoupling coefficients
NASA Astrophysics Data System (ADS)
Deveikis, A.; Kuznecovas, A.
2007-03-01
We investigate the suitability of the Scheme programming language for analytic calculation of the Clebsch-Gordan coefficients, Wigner 6j and 9j symbols, and general recoupling coefficients used in the quantum theory of angular momentum. The considered coefficients are calculated by direct evaluation of the sum formulas. The results for large values of the quantum angular momenta were compared with analogous calculations in the FORTRAN and Java programming languages.
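The same exact, analytic evaluation of the sum formulas can be reproduced in other symbolic environments. As an illustration (in Python/SymPy rather than Scheme), the snippet below computes a Clebsch-Gordan coefficient and 6j/9j symbols as exact rationals and radicals instead of floats; the particular momenta are arbitrary.

```python
from sympy import S
from sympy.physics.quantum.cg import CG
from sympy.physics.wigner import wigner_6j, wigner_9j

# Exact (symbolic) Clebsch-Gordan coefficient <j1 m1; j2 m2 | j3 m3>
cg = CG(S(3)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0).doit()
print(cg)  # an exact radical, not a floating-point approximation

# Exact Wigner 6j and 9j symbols evaluated from the sum formulas
print(wigner_6j(1, 1, 1, 1, 1, 1))
print(wigner_9j(1, 1, 1, 1, 1, 1, 1, 1, 1))
```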
Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.
NASA Astrophysics Data System (ADS)
Rath, V.; Wolf, A.; Bücker, H. M.
2006-10-01
Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for appraising the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison with simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
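The derivative code here is produced by source transformation; as a minimal illustration of the underlying idea, the sketch below implements operator-overloading forward-mode automatic differentiation (a different AD flavor) for a hypothetical one-parameter forward model, yielding a derivative free of the truncation error that divided differences suffer from.

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df simultaneously."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def exp(x):
    # chain rule: d/dt exp(x) = exp(x) * dx/dt
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

# Hypothetical scalar forward model: a stand-in for the real simulator,
# with k playing the role of a thermal conductivity parameter.
def forward(k):
    return 3.0 * k + exp(Dual(0.5) * k)

# Seeding dk = 1 yields dT/dk exactly along with T itself.
out = forward(Dual(2.0, 1.0))
print("T =", out.val, "dT/dk =", out.dot)
```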
NASA Astrophysics Data System (ADS)
Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon
2017-06-01
Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.
Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris
2017-12-15
Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework for finding an optimal storage tank scheme using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when based on the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
Time dependent density functional calculation of plasmon response in clusters
NASA Astrophysics Data System (ADS)
Wang, Feng; Zhang, Feng-Shou; Eric, Suraud
2003-02-01
We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.
Analysis of BJ493 diesel engine lubrication system properties
NASA Astrophysics Data System (ADS)
Liu, F.
2017-12-01
The BJ493ZLQ4A diesel engine design is based on the primary model BJ493ZLQ3, whose exhaust level is upgraded to the National GB5 standard through improved design of the combustion and injection systems. Given the resulting changes in the diesel lubrication system, its properties are analyzed in this paper. According to the structures, technical parameters, and indices of the lubrication system, a lubrication system model of the BJ493ZLQ4A diesel engine was constructed using the Flowmaster flow simulation software. The properties of the lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil channel pressure are in good agreement with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil channel pressure and maximum oil flow rate for the large-scale oil filter scheme satisfy the design requirements, while the small-scale scheme yields too low a main oil channel pressure and too high an oil flow rate. Therefore, the application of small-scale oil filters is hazardous, and the large-scale scheme is recommended.
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-07
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration step. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with, on average, 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time, including both MC dose calculations and plan optimizations, was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana
2008-06-01
An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for the simulation of vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate out of the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of the surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of this non-local convective mixing scheme with varying upward mixing in the atmospheric boundary layer and its impact on the concentrations of pollutants calculated with chemical and air-quality models. The scheme was additionally compared against a local eddy-diffusivity scheme (KSC). Simulated NO2 concentrations and nitrate wet deposition obtained with the CON scheme are in general higher and closer to the observations than those obtained with the KSC scheme (on the order of 15-20%). To examine the performance of the scheme, simulated and measured NO2 concentrations and nitrate wet deposition were compared for the year 2002. The comparison was made over the whole domain used in simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), into which the schemes were incorporated.
Importance biasing scheme implemented in the PRIZMA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has broad capabilities for describing geometry, sources, and material composition, and for obtaining parameters specified by the user. It can follow the paths of particle cascades (including neutrons, photons, electrons, positrons, and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, radiation shielding and detection problems). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.
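The abstract does not describe PRIZMA's implementation, but importance biasing in Monte Carlo transport is typically built from splitting and Russian roulette; the sketch below shows that generic mechanism, with regions, importances, and data layout as assumptions for the example. Weight is conserved in expectation, so the estimate stays unbiased.

```python
import random

def importance_adjust(particles, importance):
    """Generic splitting / Russian-roulette step (illustrative only; the
    PRIZMA implementation details are not given in the abstract).

    Each particle is (weight, region). Particles in high-importance
    regions are split; low-importance ones play Russian roulette."""
    out = []
    for weight, region in particles:
        ratio = importance[region]
        if ratio >= 1.0:                    # split into n copies
            n = int(ratio)
            for _ in range(n):
                out.append((weight / n, region))
        else:                               # Russian roulette
            if random.random() < ratio:
                out.append((weight / ratio, region))
    return out

random.seed(1)
print(importance_adjust([(1.0, 'shield'), (1.0, 'detector')],
                        {'shield': 0.25, 'detector': 3.0}))
```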
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbital (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10⁻⁷ to 10⁻⁸ for reactions and 10⁻⁸ for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient for reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall time (i.e., factors > 10²) due to the parallelization of the increment calculations, and in total time due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and provides access to larger systems through the separation of the full computation into several small increments.
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
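A minimal sketch of scoring on the log scale follows. The assigned value and dispersion here are simple robust estimates (median and a MAD-based standard deviation), which may differ from the schemes' actual choices; the results are invented for the example.

```python
import numpy as np

def log_z_scores(results, sigma_log=None):
    """z-scores computed after log-transformation of participant results."""
    logs = np.log(np.asarray(results, dtype=float))
    assigned = np.median(logs)                       # robust assigned value
    if sigma_log is None:
        # 1.4826 * MAD approximates the SD of a normal distribution
        sigma_log = 1.4826 * np.median(np.abs(logs - assigned))
    return (logs - assigned) / sigma_log

# Hypothetical GM content results (% GMO) from one proficiency round
results = [0.8, 1.1, 0.9, 1.6, 0.7, 2.9, 1.0]
print(np.round(log_z_scores(results), 2))   # |z| > 2 would warrant scrutiny
```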
A photonic transistor device based on photons and phonons in a cavity electromechanical system
NASA Astrophysics Data System (ADS)
Jiang, Cheng; Zhu, Ka-Di
2013-01-01
We present a scheme for photonic transistors based on photons and phonons in a cavity electromechanical system, which is composed of a superconducting microwave cavity coupled to a nanomechanical resonator. Control of the propagation of photons is achieved through the interaction of microwave field (photons) and nanomechanical vibrations (phonons). By calculating the transmission spectrum of the signal field, we show that the signal field can be efficiently attenuated or amplified, depending on the power of a second ‘gating’ (pump) field. This scheme may be a promising candidate for single-photon transistors and pave the way for numerous applications in telecommunication and quantum information technologies.
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
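As an illustration of the recommended midpoint (second-order Runge-Kutta) scheme, the sketch below advances a parcel through a hypothetical steady wind field with the 100 s time step quoted above; the real MPTRAC module instead interpolates ECMWF winds in space and time.

```python
import numpy as np

def wind(x, t):
    """Hypothetical steady 2D rotational wind field (a stand-in for the
    time-interpolated ECMWF winds used by MPTRAC)."""
    return 1.0e-4 * np.array([-x[1], x[0]])   # slow solid-body rotation

def midpoint_step(x, t, dt, v):
    """One explicit midpoint (second-order Runge-Kutta) trajectory step."""
    k1 = v(x, t)
    return x + dt * v(x + 0.5 * dt * k1, t + 0.5 * dt)

# Advect one air parcel; dt = 100 s echoes the recommendation above.
x, t, dt = np.array([1.0, 0.0]), 0.0, 100.0
for _ in range(10):
    x = midpoint_step(x, t, dt, wind)
    t += dt
print(x)   # rotated by ~0.1 rad, radius nearly preserved
```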
Research on Signature Verification Method Based on Discrete Fréchet Distance
NASA Astrophysics Data System (ADS)
Fang, J. L.; Wu, W.
2018-05-01
This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication based on a single signature feature. It addresses the heavy computational workload of global feature template extraction in online handwritten signature authentication and the problem of unreasonable signature feature selection. In the experiments, the false acceptance rate (FAR) and false rejection rate (FRR) are calculated, along with the average equal error rate (AEER). The feasibility of the combined template scheme is verified by comparing the average equal error rates of the combined template and the original template.
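The abstract gives no implementation details, but the discrete Fréchet distance itself is conventionally computed with the Eiter-Mannila dynamic program, sketched below for two hypothetical pen trajectories; thresholds and features of the actual verification scheme are not shown.

```python
import math

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (lists of
    (x, y) points), via the standard Eiter-Mannila dynamic program."""
    n, m = len(P), len(Q)
    d = lambda a, b: math.dist(a, b)
    ca = [[0.0] * m for _ in range(n)]
    ca[0][0] = d(P[0], Q[0])
    for i in range(1, n):
        ca[i][0] = max(ca[i - 1][0], d(P[i], Q[0]))
    for j in range(1, m):
        ca[0][j] = max(ca[0][j - 1], d(P[0], Q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]),
                           d(P[i], Q[j]))
    return ca[n - 1][m - 1]

# Two hypothetical pen trajectories from the same signer
sig_a = [(0, 0), (1, 1), (2, 1), (3, 0)]
sig_b = [(0, 0.2), (1, 1.1), (2, 0.9), (3, 0.1)]
print(discrete_frechet(sig_a, sig_b))  # small value suggests a genuine match
```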
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. This is not optimal, however, because of unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. GPU memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical usage.
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of the covariances in current nuclear data libraries, such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE, and TENDL, on calculations for relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power, and isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP, or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. First, although this study is not expected to lead to similar results among the involved calculation schemes, it provides insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Second, it paints a picture of the current state of knowledge, using existing nuclear data library covariances and current methods.
Computational scheme for pH-dependent binding free energy calculation with explicit solvent.
Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R
2016-01-01
We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, the effect of pH has been generally neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant-pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state or releasing a single protonation state to multiple states, the pH-dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol⁻¹. We also discuss the characteristics of the three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
Investigation of combustion characteristics in a scramjet combustor using a modified flamelet model
NASA Astrophysics Data System (ADS)
Zhao, Guoyan; Sun, Mingbo; Wang, Hongbo; Ouyang, Hao
2018-07-01
In this study, the characteristics of supersonic combustion inside an ethylene-fueled scramjet combustor equipped with multiple cavities were investigated for different injection schemes. Experimental results showed that the flames concentrated in the cavities and in the separated boundary layer downstream of the cavities, occupying the flow channel and further enhancing the compression of the bulk flow. The flame structure in the distributed injection scheme differed from that in the centralized injection scheme. In the numerical simulations, a modified flamelet model was introduced to account for the pressure distribution being far from homogeneous inside the scramjet combustor. Compared with the original flamelet model, numerical predictions based on the modified model showed better agreement with the experimental results, validating the reliability of the calculations. Based on the modified model, simulations with different injection schemes were analysed. The predicted flame structure agreed reasonably with the experimental observations. The CO mass was concentrated in the cavity and in the subsonic region adjacent to the cavity shear layer, leading to intense heat release. Compared with the centralized scheme, the higher jet mixing efficiency of the distributed scheme induced intense combustion in the posterior upper cavity and downstream of the cavity. The streamlines and iso-surfaces show that combustion at the trailing edge of the lower cavity was suppressed, since the bulk flow downstream of the cavity is pushed down.
Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan
2018-05-31
The isodesmic reaction method is applied to calculate the potential energy surfaces (PES) along the reaction coordinates and the rate constants of barrierless reactions, namely the unimolecular dissociation reactions of alkanes into two alkyl radicals and their reverse recombination reactions. The reaction class is divided into 10 subclasses depending on the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at the UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, the PESs at the B3LYP level and the corrected PESs are compared with PESs at the CASPT2/aug-cc-pVTZ level for 13 representative reactions; the deviations of the PESs at the B3LYP level are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated with meaningful accuracy at a low level of ab initio theory using our correction scheme. High-pressure-limit rate constants and pressure-dependent rate constants of these reactions are calculated from the corrected PESs, and the results show that the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of the decomposition reactions of alkanes and their reverse reactions has been studied. The present work provides an effective method for generating meaningfully accurate PESs for large molecular systems.
Implicit Total Variation Diminishing (TVD) schemes for steady-state calculations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Warming, R. F.; Harten, A.
1983-01-01
The application of a new implicit, unconditionally stable, high-resolution total variation diminishing (TVD) scheme to steady-state calculations is examined. The scheme is a member of a one-parameter family of explicit and implicit second-order accurate schemes developed by Harten for the computation of weak solutions of hyperbolic conservation laws, and it is guaranteed not to generate spurious oscillations for a nonlinear scalar equation or a constant-coefficient system. Numerical experiments show that this scheme not only has a rapid convergence rate, but also generates a highly resolved approximation to the steady-state solution. A detailed implementation of the implicit scheme for the one- and two-dimensional compressible inviscid equations of gas dynamics is presented. Numerical computations of one- and two-dimensional fluid flows containing shocks demonstrate the efficiency and accuracy of this new scheme.
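The paper's scheme is implicit; as a simpler illustration of the TVD property itself, the sketch below applies an explicit minmod-limited second-order upwind update to linear advection (an explicit cousin of the Harten-type scheme above, not that scheme) and produces no spurious oscillations at the discontinuities of a square wave.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope when signs agree, else 0."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One explicit TVD update for u_t + a u_x = 0 (a > 0, periodic BC),
    with CFL number c in (0, 1] and minmod-limited slopes."""
    du_m = u - np.roll(u, 1)            # backward differences
    du_p = np.roll(u, -1) - u           # forward differences
    s = minmod(du_m, du_p)              # limited slope in each cell
    face = u + 0.5 * (1.0 - c) * s      # value at the right face of each cell
    return u - c * (face - np.roll(face, 1))   # conservative update

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square wave with jumps
for _ in range(100):
    u = tvd_step(u, c=0.5)
print(u.max(), u.min())   # stays within [0, 1]: no spurious oscillations
```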
Predictions for partial and monolayer coverages of O2 on graphite
NASA Technical Reports Server (NTRS)
Pan, R. P.; Etters, R. D.; Kobashi, K.; Chandrasekharan, V.
1982-01-01
Monolayer properties of O2 on graphite are calculated using a pattern-recognition optimization scheme. Equilibrium monolayers are predicted at two different densities, with properties in agreement with recent X-ray diffraction, specific heat, and neutron scattering data. Properties of the extremely low density regime are calculated using a model based upon a distribution of two-dimensional O2 clusters. The results are consistent with experimental evidence.
NASA Astrophysics Data System (ADS)
Morbec, Juliana M.; Kratzer, Peter
2017-01-01
Using first-principles calculations based on density-functional theory (DFT), we investigated the effects of the van der Waals (vdW) interactions on the structural and electronic properties of anthracene and pentacene adsorbed on the Ag(111) surface. We found that the inclusion of vdW corrections strongly affects the binding of both anthracene/Ag(111) and pentacene/Ag(111), yielding adsorption heights and energies more consistent with the experimental results than standard DFT calculations with generalized gradient approximation (GGA). For anthracene/Ag(111) the effect of the vdW interactions is even more dramatic: we found that "pure" DFT-GGA calculations (without including vdW corrections) result in preference for a tilted configuration, in contrast to the experimental observations of flat-lying adsorption; including vdW corrections, on the other hand, alters the binding geometry of anthracene/Ag(111), favoring the flat configuration. The electronic structure obtained using a self-consistent vdW scheme was found to be nearly indistinguishable from the conventional DFT electronic structure once the correct vdW geometry is employed for these physisorbed systems. Moreover, we show that a vdW correction scheme based on a hybrid functional DFT calculation (HSE) results in an improved description of the highest occupied molecular level of the adsorbed molecules.
NASA Astrophysics Data System (ADS)
Seth, Priyanka; Hansmann, Philipp; van Roekeghem, Ambroise; Vaugier, Loig; Biermann, Silke
2017-08-01
The determination of the effective Coulomb interactions to be used in low-energy Hamiltonians for materials with strong electronic correlations remains one of the bottlenecks for parameter-free electronic structure calculations. We propose and benchmark a scheme for determining the effective local Coulomb interactions for charge-transfer oxides and related compounds. Intershell interactions between electrons in the correlated shell and ligand orbitals are taken into account in an effective manner, leading to a reduction of the effective local interactions on the correlated shell. Our scheme resolves inconsistencies in the determination of effective interactions as obtained by standard methods for a wide range of materials, and allows for a conceptual understanding of the relation of cluster model and dynamical mean field-based electronic structure calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Shen; Kang, Wei, E-mail: weikang@pku.edu.cn; College of Engineering, Peking University, Beijing 100871
An extended first-principles molecular dynamics (FPMD) method based on the Kohn-Sham scheme is proposed to raise the temperature limit of the FPMD method in calculations of dense plasmas. The extended method treats the wave functions of high-energy electrons analytically as plane waves and thus expands the application of the FPMD method to the regime of hot dense plasmas without incurring formidable computational costs. In addition, the extended method inherits the high accuracy of the Kohn-Sham scheme and retains the information on electronic structure. This gives an edge to the extended method in the calculation of mixtures of plasmas composed of heterogeneous ions, high-Z dense plasmas, lowering of ionization potentials, X-ray absorption/emission spectra, and opacities, which are of particular interest to astrophysics, inertial confinement fusion engineering, and laboratory astrophysics.
Computational Flow Field in Energy Efficient Engine (EEE)
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, preliminary results for the recently-updated Open National Combustion Code (Open NCC) as applied to the EEE are presented. The comparison between two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the advection upstream splitting method (AUSM), is performed for the cold flow and the reacting flow calculations using the RANS. In the cold flow calculation, the AUSM scheme predicts a much stronger reverse flow in the central recirculation zone. In the reacting flow calculation, we test two cases: gaseous fuel injection and liquid spray injection. In the gaseous fuel injection case, the overall flame structures of the two schemes are similar to one another, in the sense that the flame is attached to the main nozzle, but is detached from the pilot nozzle. However, in the exit temperature profile, the AUSM scheme shows a more uniform profile than that of the JST scheme, which is close to the experimental data. In the liquid spray injection case, we expect different flame structures in this scenario. We will give a brief discussion on how two numerical schemes predict the flame structures inside the EEE using different ways to introduce the fuel injection.
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method finds optimal vectors directly, without repetitive evaluation of a cost function. To regulate the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are selected directly at every sampling step, once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating the cost function. The two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as closely as possible. Thus, low CMV, rapid current-following capability, and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui
2017-02-06
A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space, and no iteration is required. Correct MFC is demonstrated in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM, and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with that of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Finally, proof-of-concept experiments demonstrate MFC among PM-QPSK/16QAM/64QAM signals, confirming the feasibility of our proposed MFC scheme.
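In standard density-peak clustering, the two per-point parameters are a local density and the distance to the nearest point of higher density; cluster centers stand out by being high in both. The sketch below computes both quantities for hypothetical Stokes-space samples (the paper's exact parameter definitions are not given in the abstract).

```python
import numpy as np

def density_peak_params(points, dc):
    """For each point: local density rho (neighbors within dc) and delta
    (distance to the nearest point of higher density)."""
    pts = np.asarray(points)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    rho = (dist < dc).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(len(pts))
    for i in range(len(pts)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if higher.size else dist[i].max()
    return rho, delta

# Hypothetical Stokes-space samples: two clusters plus one stray point
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (50, 3)),
                 rng.normal(1, 0.1, (50, 3)),
                 [[0.5, 0.5, 0.5]]])
rho, delta = density_peak_params(pts, dc=0.3)
print(rho.max(), delta.max())   # centers combine high rho AND high delta
```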
Toward privacy-preserving JPEG image retrieval
NASA Astrophysics Data System (ADS)
Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping
2017-07-01
This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using a permutation cipher and a stream cipher, and then uploads the encrypted versions to the server. Given an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images, whose plaintext content is similar to the query image. The experimental results show that the proposed scheme not only provides an effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
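A minimal sketch of the feature side of such a scheme follows: blockwise variances as a coarse texture descriptor and a simple similarity over the variance maps. The block size, similarity measure, and omission of the directional variants are assumptions for the example, not the paper's exact design.

```python
import numpy as np

def block_variances(img, block=8):
    """Blockwise local variances of a grayscale image, the kind of coarse
    feature that can survive permutation/stream-cipher encryption
    (directional variants omitted for brevity)."""
    h, w = img.shape[0] // block * block, img.shape[1] // block * block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3))

def similarity(feat_a, feat_b):
    """Negative mean absolute difference of variance maps; larger is more
    similar (a hypothetical stand-in for the paper's comparison)."""
    return -np.mean(np.abs(feat_a - feat_b))

rng = np.random.default_rng(0)
db_img = rng.integers(0, 256, (64, 64)).astype(float)
query = db_img + rng.normal(0, 2, db_img.shape)   # near-duplicate image
print(similarity(block_variances(query), block_variances(db_img)))
```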
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
NASA Astrophysics Data System (ADS)
Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos
2016-08-01
In this contribution we propose two Hilbert-Huang Transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging, applicable in both on-axis and off-axis configurations. In the first scheme, a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase-demodulated by the Hilbert Spiral Transform, aided by Principal Component Analysis for local fringe orientation estimation. The orientation calculation enables efficient analysis of closed fringes; it can be avoided using the arbitrary phase-shifted two-shot Gram-Schmidt Orthonormalization scheme aided by Hilbert-Huang Transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase-shifting demodulation. The robustness of the proposed techniques is corroborated by experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase-shifting scheme, which is used as a reference method.
Flux Renormalization in Constant Power Burnup Calculations
Isotalo, Aarno E.; Aalto Univ., Otaniemi; Davidson, Gregory G.; ...
2016-06-15
To more accurately represent the desired power in a constant power burnup calculation, the depletion steps of the calculation can be divided into substeps and the neutron flux renormalized on each substep to match the desired power. This paper explores how such renormalization should be performed, how large a difference it makes, and whether using renormalization affects conclusions about the relative performance of different neutronics-depletion coupling schemes. When used with older coupling schemes, renormalization can provide a considerable improvement in overall accuracy. With previously published higher-order coupling schemes, which are more accurate to begin with, renormalization has a much smaller effect. Finally, while renormalization narrows the differences in the accuracies of different coupling schemes, their order of accuracy is not affected.
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio
1995-01-01
The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
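A sketch of the rectification idea: find the smallest weighted perturbation of an approximate view-factor matrix that restores closure and reciprocity, posed as an equality-constrained least-squares (KKT) system. This is a generic reconstruction of the approach, not the paper's exact algorithm:

```python
import numpy as np

def rectify_view_factors(F, areas, weights=None):
    """Weighted least-squares rectification of view factors F[i, j]:
    enforce closure (each row sums to 1) and reciprocity
    (A_i F_ij = A_j F_ji) while staying as close as possible to the
    approximately integrated values."""
    n = F.shape[0]
    x0 = F.ravel()
    w = np.ones_like(x0) if weights is None else np.ravel(weights)

    rows, rhs = [], []
    for i in range(n):                       # closure constraints
        r = np.zeros(n * n)
        r[i * n:(i + 1) * n] = 1.0
        rows.append(r)
        rhs.append(1.0)
    for i in range(n):                       # reciprocity constraints
        for j in range(i + 1, n):
            r = np.zeros(n * n)
            r[i * n + j] = areas[i]
            r[j * n + i] = -areas[j]
            rows.append(r)
            rhs.append(0.0)
    C, d = np.asarray(rows), np.asarray(rhs)

    # KKT system for: minimize sum w_k (x_k - x0_k)^2  s.t.  C x = d
    m = C.shape[0]
    K = np.block([[np.diag(2.0 * w), C.T], [C, np.zeros((m, m))]])
    b = np.concatenate([2.0 * w * x0, d])
    x = np.linalg.lstsq(K, b, rcond=None)[0][: n * n]
    return x.reshape(n, n)
```

Choosing the weights inversely proportional to the estimated uncertainty of each computed view factor corresponds to the weighted class discussed above; uniform weights give the unweighted class.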
Finite Difference Schemes as Algebraic Correspondences between Layers
NASA Astrophysics Data System (ADS)
Malykh, Mikhail; Sevastianov, Leonid
2018-02-01
For some differential equations, especially the Riccati equation, new finite difference schemes are suggested. These schemes define projective correspondences between the layers. Calculation using these schemes can be extended to the region beyond movable singularities of the exact solution without any error accumulation.
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
NASA Astrophysics Data System (ADS)
Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.
2016-07-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
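The tricubic-spline coefficients and SIMD memory layout are specific to the paper; as a simplified stand-in, the sketch below shows the basic pattern of interpolating a pre-computed field grid (trilinear rather than tricubic, and without the derivative coefficients):

```python
import numpy as np

def interpolate_field(grid, origin, spacing, point):
    """Trilinear interpolation of a vector field pre-computed on an
    evenly spaced grid; grid has shape (nx, ny, nz, 3) for a B-field.
    A simplified stand-in for the tricubic spline scheme above."""
    t = (np.asarray(point) - origin) / spacing
    i0 = np.floor(t).astype(int)
    f = t - i0
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0])
                     * (f[1] if dy else 1 - f[1])
                     * (f[2] if dz else 1 - f[2]))
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```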
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where the system generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt in the first stage. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
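A toy illustration of the two-stage idea in Python rather than the paper's XML/CellML toolchain: the first stage turns a right-hand-side description into the equation set relating values at t to values at t + Δt (here for explicit Euler), and the second stage emits runnable code. The FHN right-hand sides and parameter names are the textbook ones, used purely for illustration:

```python
FHN_RHS = {
    "v": "v - v**3/3 - w + i_ext",
    "w": "eps*(v + a - b*w)",
}
FHN_PARAMS = ["i_ext", "eps", "a", "b"]

def generate_euler_step(rhs, params):
    """Stage 1: build the update equations state(t) -> state(t + dt)
    for explicit Euler. Stage 2: emit them as Python source."""
    lines = ["def step(state, dt):"]
    lines += [f"    {name} = state['{name}']" for name in rhs]
    lines += [f"    {p} = state['{p}']" for p in params]
    lines += [f"    state['{name}'] = {name} + dt*({expr})"
              for name, expr in rhs.items()]
    lines.append("    return state")
    return "\n".join(lines)

exec(generate_euler_step(FHN_RHS, FHN_PARAMS))  # defines step()
state = {"v": -1.0, "w": 1.0, "i_ext": 0.5,
         "eps": 0.08, "a": 0.7, "b": 0.8}
for _ in range(1000):
    step(state, 0.01)
```

Swapping the string template is all it takes to target a different solving scheme, which is the flexibility the description-language approach aims at.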
Nag, Sudip; Kale, Nitin S; Rao, V; Sharma, Dinesh K
2009-01-01
Piezoresistive micro-cantilevers are an interesting bio-sensing tool whose base resistance (R) changes by a few parts per million (ΔR) when deflected. Measuring such a small deviation has always been a challenge because of noise. An advanced and reliable ΔR/R measurement scheme is presented in this paper which can sense resistance changes down to 6 parts per million. The measurement scheme includes half-bridge connected micro-cantilevers with mismatch compensation, precision op-amp based filters and amplifiers, and a lock-in amplifier based detector. The actuating sine wave is applied from a function generator and the output dc voltage is displayed on a digital multimeter. Calibration is performed and the instrument sensitivity is calculated. An experimental set-up using a probe station is discussed that demonstrates the combined performance of the measurement system and SU8-polysilicon cantilevers. The deflection sensitivity of such polymeric cantilevers is calculated. The system will be highly useful for detecting bio-markers such as myoglobin and troponin that are released into the blood during or after heart attacks.
Accuracy of a teleported trapped field state inside a single bimodal cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Queiros, Iara P. de; Cardoso, W. B.; Souza, Simone
2007-09-15
We propose a simplified scheme to teleport a superposition of coherent states from one mode to another of the same bimodal lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity that can be achieved, demonstrating accurate teleportation if the mean photon number of each mode is at most 1.5. Our scheme applies as well for teleportation of coherent states from one mode of a cavity to another mode of a second cavity, when both cavities are embedded in a common reservoir.
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With a little extra computational cost, the Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computational cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered as inputs to the fuzzy decision making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, also relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
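A heavily simplified sketch of the adaptive-coefficient idea: the full scheme runs a fuzzy inference system, which is collapsed here to a weighted sum of metrics z-scored against statistics gathered from the surrounding WLANs; the weights and sample values are invented for illustration:

```python
import numpy as np

def quality_cost(candidate, observed, weights):
    """Score one AP: each metric (RSS in dBm, relative direction in
    degrees, AP load) is normalized with the mean/std of the metrics
    observed across nearby WLANs, so the effective coefficients adapt
    to current conditions instead of being fixed."""
    mu = observed.mean(axis=0)
    sigma = observed.std(axis=0) + 1e-9
    z = (np.asarray(candidate) - mu) / sigma
    return float(weights @ z)

observed = np.array([[-70.0, 30.0, 0.4],
                     [-65.0, 10.0, 0.6],
                     [-80.0, 90.0, 0.2]])
weights = np.array([0.5, -0.3, -0.2])   # favor strong RSS, penalize
                                        # misalignment and load
best_ap = max(range(len(observed)),
              key=lambda k: quality_cost(observed[k], observed, weights))
```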
Simplicial lattices in classical and quantum gravity: Mathematical structure and application
NASA Astrophysics Data System (ADS)
Lafave, Norman Joseph
1989-03-01
Geometrodynamics can be understood more clearly in the language of geometry than in the language of differential equations. This is the primary motivation for the development of calculational schemes based on Regge Calculus as an alternative to those schemes based on Ricci Calculus. The mathematics of simplicial lattices were developed to the same level of sophistication as the mathematics of pseudo-Riemannian geometry for continuum manifolds. This involves the definition of the simplicial analogues of several concepts from differential topology and differential geometry: the concept of a point, tangent spaces, forms, tensors, parallel transport, covariant derivatives, connections, and curvature. These simplicial analogues are used to define the Einstein tensor and the extrinsic curvature on a simplicial geometry. This mathematical formalism was applied to the solution of several outstanding problems in the development of a Regge Calculus based computational scheme for general geometrodynamic problems. This scheme is based on a 3 + 1 splitting of spacetime within the Regge Calculus prescription known as Null-Strut Calculus (NSC). NSC describes the foliation of spacetime into spacelike hypersurfaces built of tetrahedra. These hypersurfaces are coupled by light rays (null struts) to past and future momentum-like structures, geometrically dual to the tetrahedral lattice of the hypersurface. Avenues of investigation for NSC in quantum gravity are described.
Non-equilibrium radiation from viscous chemically reacting two-phase exhaust plumes
NASA Technical Reports Server (NTRS)
Penny, M. M.; Smith, S. D.; Mikatarian, R. R.; Ring, L. R.; Anderson, P. G.
1976-01-01
A knowledge of the structure of the rocket exhaust plumes is necessary to solve problems involving plume signatures, base heating, plume/surface interactions, etc. An algorithm is presented which treats the viscous flow of multiphase chemically reacting fluids in a two-dimensional or axisymmetric supersonic flow field. The gas-particle flow solution is fully coupled with the chemical kinetics calculated using an implicit scheme to calculate chemical production rates. Viscous effects include chemical species diffusion with the viscosity coefficient calculated using a two-equation turbulent kinetic energy model.
A united event grand canonical Monte Carlo study of partially doped polyaniline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byshkin, M. S., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it; Correa, A.; Buonocore, F.
2013-12-28
A Grand Canonical Monte Carlo scheme, based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules, is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and the distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water, and the effect of inclusion of water molecules on PANI properties has also been modeled and discussed. The protocol described here is general and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reaction steps.
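For context, a sketch of the standard grand-canonical insertion acceptance test that such a scheme builds on; in the united-event variant, the energy change would bundle the HCl insertion together with the protonation step so the pair is accepted or rejected as one move. All symbols are generic GCMC quantities, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng()

def accept_united_insertion(delta_e, n_mol, volume, mu, beta, lambda3):
    """Metropolis acceptance for inserting one molecule in the grand
    canonical ensemble: min[1, V/(lambda^3 (N+1)) * exp(beta*(mu - dE))].
    In the united-event scheme, delta_e would include both the HCl
    insertion energy and the protonation energy."""
    acc = (volume / (lambda3 * (n_mol + 1))) * np.exp(beta * (mu - delta_e))
    return rng.random() < min(1.0, acc)
```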
Computational aspects of unsteady flows
NASA Technical Reports Server (NTRS)
Cebeci, T.; Carr, L. W.; Khattab, A. A.; Schimke, S. M.
1985-01-01
The calculation of unsteady flows and the development of numerical methods for solving unsteady boundary layer equations and their application to the flows around important configurations such as oscillating airfoils are presented. A brief review of recent work is provided, with emphasis on the need for numerical methods which can overcome possible problems associated with flow reversal and separation. The zig-zag and characteristic box schemes are described in this context, and when embodied in a method which permits interaction between solutions of inviscid and viscous equations, the characteristic box scheme is shown to avoid the singularity associated with boundary layer equations and a prescribed pressure gradient. Calculations were performed for a cylinder started impulsively from rest and for oscillating airfoils. The results are presented and discussed. It is concluded that turbulence models based on an algebraic specification of eddy viscosity can be adequate, and that the location of transition is important to the calculation of the location of flow separation and, therefore, to the overall lift of an oscillating airfoil.
Dutta, Achintya Kumar; Vaval, Nayana; Pal, Sourav
2015-01-28
We propose a new elegant strategy to implement the third-order triples correction, in the light of many-body perturbation theory, to the Fock space multi-reference coupled cluster method for the ionization problem. Computational scaling as well as storage requirements are key concerns in any many-body calculation. Our proposed approach scales as N^6, does not require the storage of triples amplitudes, and gives superior agreement over all previous attempts. This approach is capable of calculating multiple roots in a single calculation, in contrast to the inclusion of perturbative triples in the equation-of-motion variant of coupled cluster theory, where each root needs to be computed in a state-specific way and requires both the left and right state vectors together. The performance of the newly implemented scheme is tested by application to methylene, the boron nitride (B2N) anion, nitrogen, water, carbon monoxide, acetylene, formaldehyde, and thymine monomer, a DNA base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y; Southern Medical University, Guangzhou; Tian, Z
Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems. Low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 h for gMCDRR to simulate 7.8e11 photons and 246.5 s for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enabled a significant efficiency enhancement for MC particle transport simulations.
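The skeleton of Metropolis-Hastings path sampling, with the transport physics abstracted behind two callables: `weight(path)` stands for the physics-determined path probability and `propose(path)` returns a candidate path plus its proposal-density ratio. This is a generic reconstruction of the idea, not gMMC's implementation:

```python
import numpy as np

rng = np.random.default_rng()

def sample_paths(initial_path, propose, weight, n_samples):
    """Markov chain over whole source-to-detector photon paths: every
    accepted state is a path that ends on the detector by construction,
    so no effort is spent on photons that miss it."""
    path = initial_path
    chain = []
    for _ in range(n_samples):
        candidate, q_ratio = propose(path)  # q(path|cand) / q(cand|path)
        alpha = min(1.0, (weight(candidate) / weight(path)) * q_ratio)
        if rng.random() < alpha:
            path = candidate
        chain.append(path)
    return chain
```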
NASA Technical Reports Server (NTRS)
Wolf, R. A.; Kamide, Y.
1983-01-01
Advanced techniques considered by Kamide et al. (1981) seem to have the potential for providing observation-based high time resolution pictures of the global ionospheric current and electric field patterns for interesting events. However, a reliance on the proposed magnetogram-inversion schemes for the deduction of global ionospheric current and electric field patterns requires proof that reliable results are obtained. 'Theoretical' tests of the accuracy of the magnetogram inversion schemes have, therefore, been considered. The present investigation is concerned with a test, involving the developed KRM algorithm and the Rice Convection Model (RCM). The test was successful in the sense that there was overall agreement between electric fields and currents calculated by the RCM and KRM schemes.
Nicolopoulou, E P; Ztoupis, I N; Karabetsos, E; Gonos, I F; Stathopulos, I A
2015-04-01
The second round of an interlaboratory comparison scheme on radio frequency electromagnetic field measurements has been conducted in order to evaluate the overall performance of laboratories that perform measurements in the vicinity of mobile phone base stations and broadcast antenna facilities. The participants recorded the electric field strength produced by two high frequency signal generators inside an anechoic chamber in three measurement scenarios, with the antennas transmitting each time different signals in the FM, VHF, UHF and GSM frequency bands. In each measurement scenario, the participants also used their measurements to calculate the relative exposure ratios. The results were evaluated at each test level by calculating performance statistics (z-scores and En numbers). Subsequently, possible sources of error for each participating laboratory were discussed, and the overall evaluation of their performance was determined by using an aggregated performance statistic. A comparison between the two rounds proves the necessity of the scheme. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Woo Kim, Hyun; Rhee, Young Min
2012-07-30
Recently, many polarizable force fields have been devised to describe induction effects between molecules. In popular polarizable models based on induced dipole moments, atomic polarizabilities are the essential parameters and should be derived carefully. Here, we present a parameterization scheme for atomic polarizabilities using a minimization target function containing both molecular and atomic information. The main idea is to adopt reference data only from quantum chemical calculations, to perform atomic polarizability parameterizations even when relevant experimental data are scarce as in the case of electronically excited molecules. Specifically, our scheme assigns the atomic polarizabilities of any given molecule in such a way that its molecular polarizability tensor is well reproduced. We show that our scheme successfully works for various molecules in mimicking dipole responses not only in ground states but also in valence excited states. The electrostatic potential around a molecule with an externally perturbing nearby charge also exhibits a near-quantitative agreement with the reference data from quantum chemical calculations. The limitation of the model with isotropic atoms is also discussed to examine the scope of its applicability. Copyright © 2012 Wiley Periodicals, Inc.
Experimental realization of self-guided quantum coherence freezing
NASA Astrophysics Data System (ADS)
Yu, Shang; Wang, Yi-Tao; Ke, Zhi-Jin; Liu, Wei; Zhang, Wen-Hao; Chen, Geng; Tang, Jian-Shun; Li, Chuan-Feng; Guo, Guang-Can
2017-12-01
Quantum coherence is the most essential characteristic of quantum physics; specifically, when it is placed in the resource-theoretic framework, it is considered the most fundamental resource for quantum techniques. Other quantum resources, e.g., entanglement, are all based on coherence. Therefore, it becomes urgently important to learn how to preserve coherence in quantum channels. The best preservation is coherence freezing, which has been studied recently. However, in these studies the freezing condition is calculated theoretically, and a practical way to achieve this freezing is still lacking; in addition, the channels are usually fixed, although there are also degrees of freedom that can be used to adapt the channels to quantum states. Here we develop a self-guided quantum coherence freezing method, which can guide either the quantum channels (tunable-channel scheme with upgraded channels) or the initial state (fixed-channel scheme) to the coherence-freezing zone from any starting estimate. Specifically, in the fixed-channel scheme, the final iterative quantum states all satisfy the previously calculated freezing condition. This coincidence demonstrates the validity of our method. Our work will be helpful for the better protection of quantum coherence.
Qi, Shuanhu; Schmid, Friederike
2017-11-08
We present a multiscale hybrid particle-field scheme for the simulation of relaxation and diffusion behavior of soft condensed matter systems. It combines particle-based Brownian dynamics and field-based local dynamics in an adaptive sense such that particles can switch their level of resolution on the fly. The switching of resolution is controlled by a tuning function which can be chosen at will according to the geometry of the system. As an application, the hybrid scheme is used to study the kinetics of interfacial broadening of a polymer blend, and is validated by comparing the results to the predictions from pure Brownian dynamics and pure local dynamics calculations.
Tua, Camilla; Nessi, Simone; Rigamonti, Lucia; Dolci, Giovanni; Grosso, Mario
2017-04-01
In recent years, alternative food supply chains based on short-distance production and delivery have been promoted as more environmentally friendly than the traditional retailing system. An example is the supply of seasonal and possibly locally grown fruit and vegetables directly to customers inside a returnable crate (the so-called 'box scheme'). In addition to other claimed environmental and economic advantages, the box scheme is often listed among packaging waste prevention measures. To check whether this claim is soundly based, a life cycle assessment was carried out to verify the real environmental effectiveness of the box scheme in comparison with the traditional Italian distribution. The study focused on two reference products, carrots and apples, which are available in the crate all year round. A box scheme operated in Italy was compared with traditional scenarios in which the product is distributed loose or packaged through large-scale retail. Packaging waste generation, 13 impact indicators on the environment and human health, and energy consumption were calculated. Results show that the analysed box scheme, as currently managed, cannot be considered a packaging waste prevention measure when compared with the traditional distribution of fruit and vegetables. The weaknesses of the alternative system were identified and some recommendations were given to improve its environmental performance.
NASA Astrophysics Data System (ADS)
Liu, Zhengguang; Li, Xiaoli
2018-05-01
In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For the numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
Full-scale computation for all the thermoelectric property parameters of half-Heusler compounds
Hong, A. J.; Li, L.; He, R.; ...
2016-03-07
The thermoelectric performance of materials relies substantially on the band structures that determine the electronic and phononic transports, while the transport behaviors compete and counter-act for the power factor PF and figure-of-merit ZT. These issues make a full-scale computation of the whole set of thermoelectric parameters particularly attractive, while a calculation scheme for the electronic and phononic contributions to thermal conductivity remains challenging. In this work, we present a full-scale computation scheme based on first-principles calculations, choosing a set of doped half-Heusler compounds as examples for illustration. The electronic structure is computed using the WIEN2k code and the carrier relaxation times for electrons and holes are calculated using Bardeen and Shockley's deformation potential (DP) theory. The finite-temperature electronic transport is evaluated within the framework of Boltzmann transport theory. In sequence, the density functional perturbation combined with the quasi-harmonic approximation and the Klemens equation is implemented for calculating the lattice thermal conductivity of carrier-doped thermoelectric materials such as Ti-doped NbFeSb compounds without losing generality. The calculated results show good agreement with experimental data. Lastly, the present methodology represents an effective and powerful approach to calculate the whole set of thermoelectric properties for thermoelectric materials.
Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.
2000-01-01
The age of air has recently emerged as a diagnostic of atmospheric transport unaffected by chemical parameterizations, and the features in the age distributions computed in models have been interpreted in terms of the models' large-scale circulation field. This study shows, however, that in addition to the simulated large-scale circulation, three-dimensional age calculations can also be affected by the choice of advection scheme employed in solving the tracer continuity equation. Specifically, using the 3.0° latitude × 3.6° longitude, 40-vertical-level version of the Geophysical Fluid Dynamics Laboratory SKYHI GCM and six online transport schemes ranging from Eulerian through semi-Lagrangian to fully Lagrangian, it will be demonstrated that the oldest ages are obtained using the nondiffusive centered-difference schemes while the youngest ages are computed with a semi-Lagrangian transport (SLT) scheme. The centered-difference schemes are capable of producing ages older than 10 years in the mesosphere, thus eliminating the "young bias" found in previous age-of-air calculations. At this stage, only limited intuitive explanations can be advanced for this sensitivity of age-of-air calculations to the choice of advection scheme. In particular, age distributions computed online with the National Center for Atmospheric Research Community Climate Model (MACCM3) using different varieties of the SLT scheme are substantially older than the SKYHI SLT distribution. The different varieties, including a noninterpolating-in-the-vertical version (which is essentially centered-difference in the vertical), also produce a narrower range of age distributions than the suite of advection schemes employed in the SKYHI model. While additional MACCM3 experiments with a wider range of schemes would be necessary to provide more definitive insights, the older and less variable MACCM3 age distributions can plausibly be interpreted as being due to the semi-implicit semi-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce SLT's interpolation errors, which are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
NASA Technical Reports Server (NTRS)
Jameson, A.
1975-01-01
The use of a fast elliptic solver in combination with relaxation is presented as an effective way to accelerate the convergence of transonic flow calculations, particularly when a marching scheme can be used to treat the supersonic zone in the relaxation process.
Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.
Suk, Heejun
2016-07-01
MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at the old time level. However, this calculation is approximate because it does not involve backward tracking in MMOC and HMOC, nor does it allow forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme, under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of MT3DMS to deal with mass transport problems in all flow regimes. © 2016, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Ouyang, Lizhi
A systematic improvement and extension of the orthogonalized linear combinations of atomic orbitals method was carried out using a combined computational and theoretical approach. For high performance parallel computing, a Beowulf-class personal computer cluster was constructed. It also served as a parallel program development platform that helped us to port the programs of the method to the national supercomputer facilities. The program received a language upgrade from Fortran 77 to Fortran 90 and a dynamic memory allocation feature. A preliminary parallel High Performance Fortran version of the program has been developed as well, although scalability improvements are still needed for it to be of more benefit. In order to circumvent the difficulties of the analytical force calculation in the method, we developed a geometry optimization scheme using the finite difference approximation based on the total energy calculation. The implementation of this scheme was facilitated by the powerful General Utility Lattice Program, which offers many desired features such as multiple optimization schemes and usage of space group symmetry. So far, many ceramic oxides have been tested with the geometry optimization program. Their optimized geometries were in excellent agreement with the experimental data. For nine ceramic oxide crystals, the optimized cell parameters differ from the experimental ones by less than 0.5%. Moreover, the geometry optimization was recently used to predict a new phase of TiNx. The method has also been used to investigate a complex Vitamin B12 derivative, the OHCbl crystal. In order to overcome the prohibitive disk I/O demand, an on-demand version of the method was developed. Based on the electronic structure calculation of the OHCbl crystal, a partial density of states analysis and a bond order analysis were carried out. The calculated bonding of the corrin ring of the OHCbl model was consistent with the large open-ring pi bond. One interesting finding of the calculation was that the Co-OH bond was weak. This, together with the ongoing projects studying different Vitamin B12 derivatives, might help us to answer questions about the Co-C cleavage of the B12 coenzyme, which is involved in many important B12 enzymatic reactions.
Park, Hyeong-Gyu; Shin, Yeong-Gil; Lee, Ho
2015-12-01
A ray-driven backprojector is based on ray-tracing, which computes the length of the intersection between the ray paths and each voxel to be reconstructed. To reduce the computational burden caused by these exhaustive intersection tests, we propose a fully graphics processing unit (GPU)-based ray-driven backprojector in conjunction with a ray-culling scheme that enables straightforward parallelization without compromising the high computing performance of a GPU. The purpose of the ray-culling scheme is to reduce the number of ray-voxel intersection tests by excluding rays irrelevant to a specific voxel computation. This rejection step is based on an axis-aligned bounding box (AABB) enclosing a region of voxel projection, where the eight vertices of each voxel are projected onto the detector plane. The range of the rectangular AABB is determined by min/max operations on the coordinates in the region. Using the indices of pixels inside the AABB, the rays passing through the voxel can be identified and the voxel is weighted by the length of intersection between the voxel and the ray. This procedure makes it possible to reflect voxel-level parallelization, allowing an independent calculation at each voxel, which is feasible for a GPU implementation. To eliminate redundant calculations during ray-culling, a shared-memory optimization is applied to exploit the GPU memory hierarchy. In experimental results using real measurement data with phantoms, the proposed GPU-based ray-culling scheme reconstructed a volume of resolution 280×280×176 in 77 seconds from 680 projections of resolution 1024×768, which is 26 times and 7.5 times faster than standard CPU-based and GPU-based ray-driven backprojectors, respectively. Qualitative and quantitative analyses showed that the ray-driven backprojector provides high-quality reconstruction images when compared with those generated by the Feldkamp-Davis-Kress algorithm using a pixel-driven backprojector, with an average of 2.5 times higher contrast-to-noise ratio, 1.04 times higher universal quality index, and 1.39 times higher normalized mutual information. © The Author(s) 2014.
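A sketch of the ray-culling test in plain Python (the paper's version runs per voxel on the GPU with shared-memory optimizations); `project` is a placeholder for the system's mapping from a 3-D point to (u, v) detector coordinates:

```python
import numpy as np

def voxel_aabb(voxel_vertices, project):
    """Project the eight vertices of a voxel onto the detector plane
    and take min/max of the pixel coordinates: only rays through this
    axis-aligned bounding box can intersect the voxel, so all other
    ray-voxel tests are culled."""
    uv = np.array([project(p) for p in voxel_vertices])  # shape (8, 2)
    u_min, v_min = np.floor(uv.min(axis=0)).astype(int)
    u_max, v_max = np.ceil(uv.max(axis=0)).astype(int)
    return u_min, u_max, v_min, v_max

def rays_for_voxel(aabb):
    """Pixel indices inside the AABB; each corresponds to one ray whose
    intersection length with the voxel must still be computed."""
    u0, u1, v0, v1 = aabb
    return [(u, v) for u in range(u0, u1 + 1) for v in range(v0, v1 + 1)]
```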
γ5 in the four-dimensional helicity scheme
NASA Astrophysics Data System (ADS)
Gnendiger, C.; Signer, A.
2018-05-01
We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.
RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yalinewich, Almog; Steinberg, Elad; Sari, Re’em
2015-02-01
We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov's method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and our time-advancement scheme are more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves one way and a sound wave travels the other way (such that relative to the grid the wave is not moving), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes that reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates the incorporation of new algorithms and physical processes.
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
Sixth- and eighth-order Hermite integrator for N-body simulations
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro
2008-10-01
We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme for most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
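For flavor, the Taylor-series part of such a predictor, written out through the crackle term (the full scheme also folds in one previous value, omitted in this sketch):

```python
def hermite_predict(pos, vel, acc, jerk, snap, crackle, dt):
    """Predict position and velocity from the current derivatives of
    the acceleration up to third order (crackle), using nested Taylor
    terms; dt**5/120 * crackle is the highest-order position term."""
    pos_p = pos + dt * (vel + dt / 2 * (acc + dt / 3 * (jerk
            + dt / 4 * (snap + dt / 5 * crackle))))
    vel_p = vel + dt * (acc + dt / 2 * (jerk + dt / 3 * (snap
            + dt / 4 * crackle)))
    return pos_p, vel_p
```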
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podgorsak, A; Bednarek, D; Rudin, S
2016-06-15
Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charge-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
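A numpy sketch of the per-pixel thresholding just described: the dark-field stack supplies an offset (mean) and an individual threshold (from the variance) for every pixel. The threshold multiplier k is an assumption, since the abstract does not give one:

```python
import numpy as np

def build_thresholds(dark_frames, k=3.0):
    """Offset = per-pixel mean of the dark fields; threshold = k times
    the per-pixel standard deviation (k is illustrative)."""
    dark = np.asarray(dark_frames, dtype=float)
    return dark.mean(axis=0), k * dark.std(axis=0)

def count_photons(frames, offset, thresh):
    """Offset-correct each frame, compare against the per-pixel
    thresholds, and accumulate the binary photon counts."""
    counts = np.zeros_like(offset)
    for f in frames:
        counts += (f - offset) > thresh
    return counts

# e.g. offset, thresh = build_thresholds(sixty_dark_fields)
#      projection = count_photons(three_hundred_frames, offset, thresh)
```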
Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G
2018-05-30
Combined quantum mechanical and molecular mechanical (QM/MM) methods are the most powerful available methods for high-level treatments of subsystems of very large systems. The treatment of the QM-MM boundary strongly affects the accuracy of QM/MM calculations. For QM/MM calculations having covalent bonds cut by the QM-MM boundary, it has been proposed previously to use a scheme with system-specific tuned fluorine link atoms. Here, we propose a broadly parametrized scheme where the parameters of the tuned F link atoms depend only on the type of bond being cut. In the proposed new scheme, the F link atom is tuned for systems with a certain type of cut bond at the QM-MM boundary instead of for a specific target system, and the resulting link atoms are called bond-tuned link atoms. In principle, the bond-tuned link atoms can be as convenient as the popular H link atoms, and they are especially well adapted for high-throughput and accurate QM/MM calculations. Here, we present the parameters for several kinds of cut bonds along with a set of validation calculations that confirm that the proposed bond-tuned link-atom scheme can be as accurate as the system-specific tuned F link-atom scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamaguchi, Kizashi; Nishihara, Satomichi; Saito, Toru
First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural orbital (NO) references from ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed for these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, so that the AP scheme for UBCCD easily removed the rest of the spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for total spin angular momenta were examined for the AP correction of the hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning the first-principles calculation of J values in di- and poly-radical species. It was found that the BS (AP) methods reproduce the MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
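The approximate spin projection referred to above is commonly evaluated with Yamaguchi's formula, which extracts J from the broken-symmetry low-spin and high-spin energies and their ⟨S²⟩ expectation values; a one-line helper, assuming the H = -2J S₁·S₂ convention:

```python
def heisenberg_j(e_ls, e_hs, s2_ls, s2_hs):
    """Yamaguchi approximate spin projection:
        J = (E_LS - E_HS) / (<S^2>_HS - <S^2>_LS)
    e_* are total energies of the broken-symmetry low-spin and the
    high-spin solutions; s2_* are the corresponding <S^2> values."""
    return (e_ls - e_hs) / (s2_hs - s2_ls)

# For a diradical, a spin-contaminated BS singlet has <S^2> near 1
# and the triplet near 2, so the denominator is close to 1.
```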
Advanced numerical methods for three dimensional two-phase flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toumi, I.; Caruge, D.
1997-07-01
This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These are finite volume numerical methods based on the use of approximate Riemann solver concepts to define convective fluxes in terms of mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method, which has been successfully used to solve the gas dynamics equations. As long as the two-fluid model is hyperbolic, this numerical method is very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast-running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for pressurised water reactors concerning upper plenum calculations and a steady-state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.
Detection of cat-eye effect echo based on unit APD
NASA Astrophysics Data System (ADS)
Wu, Dong-Sheng; Zhang, Peng; Hu, Wen-Gang; Ying, Jia-Ju; Liu, Jie
2016-10-01
The cat-eye effect echo of an optical system can be detected with a CCD, but the detection range is limited to several kilometers. In order to achieve long-range or even ultra-long-range detection, an APD should be selected as the detector because of its high sensitivity. A detection system for the cat-eye effect echo based on a unit APD is designed in this paper. The implementation scheme and key technologies of the detection system are presented. The detection performance of the system, including detection range, detection probability and false alarm probability, is modeled. Based on the model, the performance of the detection system is analyzed using typical parameters. The results of numerical calculation show that the echo signal-to-noise ratio is greater than six, the detection probability is greater than 99.9% and the false alarm probability is less than 0.1% within a 20 km detection range. In order to verify the detection effect, we built an experimental platform of the detection system according to the design scheme and carried out field experiments. The experimental results agree well with the results of numerical calculation, which proves that a detection system based on a unit APD is feasible for long-range detection of the cat-eye effect echo.
Schmidt, M; Fürstenau, N
1999-05-01
A three-wavelength-based passive quadrature digital phase-demodulation scheme has been developed for readout of fiber-optic extrinsic Fabry-Perot interferometer vibration, acoustic, and strain sensors. This scheme uses a superluminescent diode light source with interference filters in front of the photodiodes and real-time arctan calculation. Quasi-static strain and dynamic vibration sensing with up to an 80-kHz sampling rate is demonstrated. Periodic nonlinearities owing to dephasing with increasing fringe number are corrected for with a suitable algorithm, resulting in significant improvement of the linearity of the sensor characteristics.
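A minimal sketch of the passive quadrature idea, assuming the three outputs are fringe signals offset by 120 degrees of interferometric phase (a common arrangement; the actual three-wavelength optics and calibration of the sensor are not reproduced here):

```python
import numpy as np

def demodulate_phase(s1, s2, s3):
    """Recover interferometric phase from three fringe signals assumed
    to be s_k = A + B*cos(phi + 2*pi*k/3), k = 0, 1, 2. The linear
    combinations cancel the offset A and form a quadrature pair."""
    i = (2.0 * s1 - s2 - s3) / 3.0      # B * cos(phi)
    q = (s3 - s2) / np.sqrt(3.0)        # B * sin(phi)
    return np.unwrap(np.arctan2(q, i))  # arctan calculation + unwrapping

# Synthetic test: a 1 kHz vibration sampled at 80 kHz.
t = np.arange(0.0, 0.01, 1.0 / 80e3)
phi = 2.5 * np.sin(2.0 * np.pi * 1e3 * t)
s1, s2, s3 = (1.0 + 0.8 * np.cos(phi + 2.0 * np.pi * k / 3.0) for k in range(3))
phi_hat = demodulate_phase(s1, s2, s3)   # recovers phi up to a constant
```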
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update cost too much memory and computational resources. In this study, we propose a new efficient scheme for storing and updating the covariance matrix. The Compressed Sparse Column (CSC) format is utilized to store the matrix, and users can choose how much data to store based on correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated from shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, at every iteration to represent the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. This new scheme is first tested with 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
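A minimal sketch of the storage idea under stated assumptions: a 1-D grid, an exponential covariance model, and an illustrative truncation at three correlation scales (function and variable names are ours, not the authors'):

```python
import numpy as np
from scipy.sparse import csc_matrix

def exp_covariance_csc(x, variance, corr_scale, cutoff=3.0):
    """Exponential covariance matrix in Compressed Sparse Column (CSC)
    format; entries beyond `cutoff` correlation scales are dropped,
    since they carry little information for the inversion."""
    d = np.abs(x[:, None] - x[None, :])
    cov = variance * np.exp(-d / corr_scale)
    cov[d > cutoff * corr_scale] = 0.0   # sparsify the far tails
    return csc_matrix(cov)

x = np.linspace(0.0, 100.0, 201)         # 1-D example grid
scale = 10.0
for iteration in range(5):               # inverse-model iterations
    cov = exp_covariance_csc(x, variance=1.0, corr_scale=scale)
    # ... update the diagonal terms from the new observations here ...
    scale *= 0.95                        # shorten scales: shrinking uncertainty
```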
NASA Astrophysics Data System (ADS)
Rahman, Syazila; Yusoff, Mohd. Zamri; Hasini, Hasril
2012-06-01
This paper describes a comparison between the cell-centered scheme and the cell-vertex scheme in the calculation of high-speed compressible flow properties. The calculation is carried out using Computational Fluid Dynamics (CFD), in which the mass, momentum and energy equations are solved simultaneously over the flow domain. The geometry under investigation is a Binnie and Green convergent-divergent nozzle, and a structured mesh is implemented throughout the flow domain. The finite volume CFD solver employs a second-order accurate central differencing scheme for spatial discretization. In addition, a second-order accurate cell-vertex finite volume spatial discretization is introduced in this case for comparison. Multi-stage Runge-Kutta time integration is implemented for solving the set of non-linear governing equations with variables stored at the vertices. Artificial dissipation uses second- and fourth-order terms with a pressure switch to detect changes in the pressure gradient; this is important to control solution stability and to capture shock discontinuities. The results are compared with experimental measurements, and good agreement is obtained for both cases.
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) form the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses remain unresolved, given the geometric complexity of the core, such as the analysis of the neutron flux behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross-sections for the assemblies, condensed to a reduced number of energy groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core (or whole-core) calculation. This decoupling of the two calculation steps is the origin of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to calculate the cross-section libraries becomes less pertinent for assemblies adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in fuel assemblies located at the core/reflector interface, the fission rate increasing due to the greater proportion of re-entrant neutrons. This change in the neutron spectrum reaches deep inside the fuel located on the periphery of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector. This scheme is tested on a core without control mechanisms and loaded with fresh fuel. The results of this study show that explicit representation of the reflector, together with calculation of the peripheral assemblies using our advanced scheme, corrects the energy spectrum at the core interface and increases the peripheral power by up to 12% compared with the reference scheme.
Systematic comparison of jet energy-loss schemes in a realistic hydrodynamic medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bass, Steffen A.; Majumder, Abhijit; Gale, Charles
2009-02-15
We perform a systematic comparison of three different jet energy-loss approaches. These include the Armesto-Salgado-Wiedemann scheme based on the approach of Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z/ASW), the higher-twist (HT) approach, and a scheme based on the Arnold-Moore-Yaffe (AMY) approach. In this comparison, an identical medium evolution is utilized for all three approaches: this entails not only the use of the same realistic three-dimensional relativistic fluid dynamics (RFD) simulation, but also the use of identical initial parton-distribution functions and final fragmentation functions. We are thus in a unique position not only to isolate fundamental differences between the various approaches but also to make rigorous calculations for different experimental measurements using state-of-the-art components. All three approaches are reduced to versions containing only one free tunable parameter, which is then related to the well-known transport parameter q. We find that the parameters of all three calculations can be adjusted to provide a good description of inclusive data on R_AA vs transverse momentum. However, we do observe slight differences in their predictions for the centrality and azimuthal angular dependence of R_AA vs p_T. We also note that the values of the transport coefficient q needed in the three approaches to describe the data differ significantly.
Chemical shifts of diamagnetic azafullerenes: (C59N)2 and C59HN
NASA Astrophysics Data System (ADS)
Bühl, Michael; Curioni, Alessandro; Andreoni, Wanda
1997-08-01
13C and 15N chemical shifts have been calculated for the azafullerenes (C59N)2 and C59HN using the GIAO (gauge-including atomic orbitals)-SCF method, based on the geometry obtained with the density functional theory BLYP scheme. Our results are in good agreement with experimental data, in particular for the "anomalous" shift of the saturated carbon. Combined with previous calculations of the structural stability and electronic as well as vibrational properties, the present findings confirm the calculated structures for both molecules and establish the [6,6]-closed configuration for the dimer.
Image watermarking capacity analysis based on Hopfield neural network
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhang, Hongbin
2004-11-01
In watermarking schemes, watermarking can be viewed as a communication problem. Almost all previous work on image watermarking capacity is based on information theory, using the Shannon formula to calculate the capacity of watermarking. In this paper, we present a blind watermarking algorithm using a Hopfield neural network, and analyze watermarking capacity based on the neural network. In our watermarking algorithm, the watermarking capacity is determined by the attraction basin of the associative memory.
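The information-theoretic baseline referred to above is the Shannon capacity of an additive Gaussian channel; a sketch per host coefficient (the variances are illustrative, and a per-image figure would sum this over all usable coefficients):

```python
import numpy as np

def watermark_capacity(signal_var, noise_var):
    """Shannon capacity in bits per host coefficient for an additive
    Gaussian channel: C = 0.5 * log2(1 + SNR)."""
    return 0.5 * np.log2(1.0 + signal_var / noise_var)

# e.g. watermark power 4 against attack/quantization noise power 1:
print(watermark_capacity(4.0, 1.0))   # about 1.16 bits per coefficient
```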
Bae, Soo Ya; Hong, Song -You; Lim, Kyo-Sun Sunny
2016-01-01
A method to explicitly calculate the effective radius of hydrometeors in the Weather Research and Forecasting (WRF) double-moment 6-class (WDM6) microphysics scheme is designed to tackle the physical inconsistency in cloud properties between the microphysics and radiation processes. At each model time step, the calculated effective radii of hydrometeors from the WDM6 scheme are linked to the Rapid Radiative Transfer Model for GCMs (RRTMG) scheme to consider the cloud effects in the radiative flux calculation. This coupling of cloud properties between the WDM6 and RRTMG algorithms is examined for a heavy rainfall event in Korea during 25–27 July 2011, and it is compared to the results from a control simulation in which the effective radius is prescribed as a constant value. It is found that the derived radii of hydrometeors in the WDM6 scheme are generally larger than the values prescribed in the RRTMG scheme. Consequently, shortwave fluxes reaching the ground (SWDOWN) are increased over less cloudy regions, showing better agreement with a satellite image. The overall distribution of the 24-hour accumulated rainfall is not affected, but its amount is changed: a spurious rainfall peak over the Yellow Sea is alleviated, whereas the local maximum in the central part of the peninsula is increased.
NASA Astrophysics Data System (ADS)
Tsai, T. C.; Chen, J. P.; Dearden, C.
2014-12-01
The wide variety of ice crystal shapes and growth habits makes ice a complicated issue to represent in cloud models. This study develops a bulk ice adaptive-habit parameterization based on the theoretical approach of Chen and Lamb (1994) and introduces a 6-class, double-moment (mass and number) bulk microphysics scheme with a gamma-type size distribution function. Both proposed schemes have been implemented into the Weather Research and Forecasting (WRF) model, forming a new multi-moment bulk microphysics scheme. Two new moments, ice crystal shape and volume, are included to track the adaptive habit and apparent density of pristine ice. A closure technique is developed to solve the time evolution of the bulk moments. To verify the bulk ice habit parameterization, parcel-type (zero-dimensional) calculations were conducted and compared with binned numerical calculations. The results showed that a flexible size spectrum is important for numerical accuracy, that ice shape can significantly enhance diffusional growth, and that it is important to consider the memory of growth habit (adaptive growth) under varying environmental conditions. The results derived with the 3-moment method were also much closer to the binned calculations. A case from the DIAMET field campaign was selected for real-case simulations with the WRF model. The simulations were performed with both the traditional spherical-ice scheme and the new adaptive-shape scheme to evaluate the effect of crystal habits. The main features of the narrow rain band in the cold-front case, as well as the embedded precipitation cells, were well captured by the model. Furthermore, the simulated microphysics agreed well with aircraft observations of ice particle number concentration, ice crystal aspect ratio, and deposition heating rate, especially within the temperature region of secondary ice multiplication.
A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases
NASA Astrophysics Data System (ADS)
Shi, Hongli; Luo, Shuqian
2010-12-01
In designing Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of equal-magnitude responses and a half-sample phase offset on the lowpass filters constitute the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing-moment difference of the biorthogonal scaling filters is derived, which suggests a simple way to choose the vanishing moments so that the phase-response requirement is satisfied structurally. The magnitude-response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of the scaling filters rather than the scaling filters directly. In general, the calculation burden of the design implementation is less than that of current schemes. The integral of the magnitude-response difference between the primal and dual scaling filters is chosen as the objective function, which expresses the magnitude-response requirement over the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.
Stoecklin, T
2008-09-01
In this paper a new propagation scheme is proposed for atom-diatom reactive calculations using a negative imaginary potential (NIP) within a time-independent approach. It is based on the calculation of a rotationally adiabatic basis set, the neglected coupling terms being re-added in the following step of the propagation. The results of this approach, which we call two-step rotationally adiabatic coupled-states calculations (2-RACS), are compared to those obtained using the adiabatic DVR method (J. C. Light and Z. Bazic, J. Chem. Phys., 1987, 87, 4008; C. Leforestier, J. Chem. Phys., 1991, 94, 6388), to the NIP coupled-states results of the team of Baer (D. M. Charutz, I. Last and M. Baer, J. Chem. Phys., 1997, 106, 7654), and to the exact results obtained by Zhang (J. Z. H. Zhang and W. H. Miller, J. Chem. Phys., 1989, 91, 1528) for the D + H2 reaction. The implementation of our method for computing the adiabatic basis is illustrated here in the coupled-states approximation, as this approximation has proved to be very efficient in many cases and is quite fast.
RI/MOM and RI/SMOM renormalization of overlap quark bilinears on domain wall fermion configurations
NASA Astrophysics Data System (ADS)
Bi, Yujiang; Cai, Hao; Chen, Ying; Gong, Ming; Liu, Keh-Fei; Liu, Zhaofeng; Yang, Yi-Bo; χ QCD Collaboration
2018-05-01
Renormalization constants (RCs) of overlap quark bilinear operators on 2+1-flavor domain wall fermion configurations are calculated by using the RI/MOM and RI/SMOM schemes. The scale-independent RC for the axial vector current is computed by using a Ward identity. Then the RCs for the quark field and the vector, tensor, scalar, and pseudoscalar operators are calculated in both the RI/MOM and RI/SMOM schemes. The RCs are converted to the MS-bar scheme and we compare the numerical results from using the two intermediate schemes. The lattice size is 48^3 x 96 and the inverse spacing 1/a = 1.730(4) GeV.
NASA Astrophysics Data System (ADS)
Kalmykov, N. N.; Ostapchenko, S. S.; Werner, K.
An extensive air shower (EAS) calculation scheme based on cascade equations and some EAS characteristics for energies 10^14-10^17 eV are presented. The universal hadronic interaction model NEXUS is employed to provide the necessary data concerning hadron-air collisions. The influence of model assumptions on the longitudinal EAS development is discussed in the framework of the NEXUS and QGSJET models. Applied to EAS simulations, perspectives of combined Monte Carlo and numerical methods are considered.
NASA Astrophysics Data System (ADS)
Bedarev, I. A.; Temerbekov, V. M.; Fedorov, A. V.
2018-03-01
The initiation of detonation in a reactive mixture by a small-diameter spherical projectile launched at supersonic velocity was studied for a reduced kinetic scheme of chemical reactions. A mathematical technique based on the ANSYS Fluent package was developed for this purpose. Numerical and experimental data on the flow regimes and detonation cell sizes are compared. There is agreement between the calculated and experimental flow patterns and detonation cell sizes for each regime.
Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten
2016-08-09
The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.
An IDS Alerts Aggregation Algorithm Based on Rough Set Theory
NASA Astrophysics Data System (ADS)
Zhang, Ru; Guo, Tao; Liu, Jianyi
2018-03-01
In a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of the attributes in alerts is calculated first. With these attribute importances, we can compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated or not. The time interval must also be taken into consideration: the allowed time interval is computed individually for each alert type, since different types of alerts may have different time gaps between two alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of a security system can avoid wasting time on useless alerts.
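A minimal sketch of the aggregation loop, with a weighted attribute-match similarity standing in for the rough-set importance computation and illustrative thresholds (all names are ours):

```python
def similarity(a, b, weights):
    """Weighted fraction of matching attributes; in the paper the
    weights come from the rough-set attribute-importance step."""
    total = sum(weights.values())
    score = sum(w for key, w in weights.items() if a.get(key) == b.get(key))
    return score / total

def aggregate(alerts, weights, sim_thresh, time_window):
    """Merge an alert into an existing cluster when it is similar enough
    and close enough in time; otherwise it starts a new cluster."""
    clusters = []
    for alert in alerts:
        for cluster in clusters:
            head = cluster[-1]
            close_in_time = (alert["time"] - head["time"]
                            <= time_window.get(alert["type"], 60))
            if close_in_time and similarity(alert, head, weights) >= sim_thresh:
                cluster.append(alert)
                break
        else:
            clusters.append([alert])
    return clusters

alerts = [{"type": "scan", "time": 0, "src": "10.0.0.1", "dst": "10.0.0.9"},
          {"type": "scan", "time": 30, "src": "10.0.0.1", "dst": "10.0.0.9"}]
weights = {"type": 0.5, "src": 0.3, "dst": 0.2}
print(len(aggregate(alerts, weights, 0.8, {"scan": 60})))  # -> 1 cluster
```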
Positive-negative corresponding normalized ghost imaging based on an adaptive threshold
NASA Astrophysics Data System (ADS)
Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.
2016-11-01
Ghost imaging (GI) has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme, called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), to achieve good performance with a smaller amount of data. The scheme exploits the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme are proved in theory, and an adaptive threshold selection method is designed in which the parameter of the object-signal selection condition is replaced by the normalizing value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.
A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks
Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan
2014-01-01
Coverage pattern and deployment strategy are directly related to the optimal allocation of the limited resources of wireless sensor networks, such as node energy, communication bandwidth, and computing power, and quality improvement in such networks is largely determined by them. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. First, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed, the relationship between coverage and the sensing radius of the nodes is deduced, and the minimum number of sensor nodes needed to maintain full coverage of the network area is calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is effectively reduced while the coverage rate of the monitored area is ensured using our coverage pattern and deterministic deployment scheme. PMID:25045747
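A sketch of the node count implied by a cuboid pattern, assuming each node senses a sphere of radius r and each grid cell must lie entirely inside it, so the cell is the cube inscribed in the sphere with side 2r/sqrt(3); the paper's exact cuboid construction may differ:

```python
import math

def min_nodes(lx, ly, lz, r):
    """Sensor nodes needed to fully cover an lx x ly x lz volume when
    each node covers the cube inscribed in its sensing sphere."""
    side = 2.0 * r / math.sqrt(3.0)
    return math.ceil(lx / side) * math.ceil(ly / side) * math.ceil(lz / side)

print(min_nodes(100.0, 100.0, 30.0, r=10.0))  # illustrative monitored volume
```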
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Liang; Abild-Pedersen, Frank
On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition-state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond-order conservation principle and requires a limited set of input data, still achieving transition-state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition-state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.
On the Study of Cognitive Bidirectional Relaying with Asymmetric Traffic Demands
NASA Astrophysics Data System (ADS)
Ji, Xiaodong
2015-05-01
In this paper, we consider a cognitive radio network scenario in which two primary users want to exchange information with each other while one secondary node wishes to send messages to a cognitive base station. To meet the target quality of service (QoS) of the primary users and raise the communication opportunities of the secondary nodes, a cognitive bidirectional relaying (BDR) scheme is examined. First, the system outage probabilities of the conventional direct transmission and BDR schemes are presented. Next, a new system parameter called the operating region is defined and calculated, which indicates in which positions a secondary node can act as a cognitive relay to assist the primary users. Then, a cognitive BDR scheme is proposed, giving a transmission protocol along with a time-slot splitting algorithm between the primary and secondary transmissions. The information-theoretic metric of ergodic capacity is studied for the cognitive BDR scheme to evaluate its performance. Simulation results show that with the proposed scheme, the target QoS of the primary users can be guaranteed while increasing the communication opportunities for the secondary nodes.
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Qiu, Kun
2007-11-01
A coherent optical en/decoder based on a photonic crystal (PhC) is proposed in this paper for optical code-division multiplexing (OCDM) based optical label (OCDM-OL) optical packet switching (OPS) networks. In this scheme, the optical pulse phase and time delay can be flexibly controlled by the photonic crystal phase shifter and delayer through appropriate design and fabrication. In this design, the combined calculation of the impurity and normal period layers is applied, according to the PhC transmission matrix theorem. The paper focuses on the design and theoretical analysis of the PhC-based optical coherent en/decoder. In addition, the performance of the PhC-based optical en/decoders is analyzed in detail. The reflection, transmission, delay characteristics and optical spectrum of the en/decoded pulses are studied by numerical calculation for waves tuned in the photonic band gap of a one-dimensional (1D) PhC. Theoretical analysis and numerical results show that proper pulse phase modulation and time delay are achieved by the proposed scheme, that the OCDM-based optical label is successfully rewritten with a new code for OCDM-based OPS (OCDM-OPS), and that a ratio of auto- to cross-correlation of over 8.5 dB is obtained, which demonstrates the applicability of true pulse phase modulation in a number of applications.
NASA Technical Reports Server (NTRS)
Turney, G. E.; Petrik, E. J.; Kieffer, A. W.
1972-01-01
A two-dimensional, transient, heat-transfer analysis was made to determine the temperature response in the core of a conceptual space-power nuclear reactor following a total loss of reactor coolant. With loss of coolant from the reactor, the controlling mode of heat transfer is thermal radiation. In one of the schemes considered for removing decay heat from the core, it was assumed that the 4-pi shield which surrounds the core acts as a constant-temperature sink (temperature, 700 K) for absorption of thermal radiation from the core. Results based on this scheme of heat removal show that melting of fuel in the core is possible only when the emissivity of the heat-radiating surfaces in the core is less than about 0.40. In another scheme for removing the afterheat, the core centerline fuel pin was replaced by a redundant, constant-temperature coolant channel. Based on an emissivity of 0.20 for all material surfaces in the core, the calculated maximum fuel temperature for this scheme of heat removal was 2840 K, about 90 K less than the melting temperature of the UN fuel.
New method for estimating daily global solar radiation over sloped topography in China
NASA Astrophysics Data System (ADS)
Shi, Guoping; Qiu, Xinfa; Zeng, Yan
2018-03-01
A new scheme for estimating daily global solar radiation over sloped topography in China is developed based on the Iqbal model C and MODIS cloud fraction. The effects of topography are determined using a digital elevation model. The scheme is tested against observations of solar radiation at 98 stations in China; the mean absolute bias error is 1.51 MJ m-2 d-1 and the mean relative absolute bias error is 10.57%. Based on calculations using this scheme, the distributions of daily global solar radiation over slopes in China on four days in the middle of each season (15 January, 15 April, 15 July and 15 October 2003), at a spatial resolution of 1 km × 1 km, are analyzed. To investigate the effects of topography on global solar radiation, the results for four mountain areas (Tianshan, Kunlun Mountains, Qinling, and Nanling) are discussed, and the typical characteristics of solar radiation over sloped surfaces are revealed. In general, the new scheme produces reasonable characteristics of the solar radiation distribution at high spatial resolution in mountain areas, which will be useful in analyses of mountain climate and in planning for agricultural production.
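The two verification statistics quoted above are simple to state; a sketch, assuming the mean absolute bias error is mean(|estimated - observed|) and the relative form normalizes by the mean observation:

```python
import numpy as np

def mabe(estimated, observed):
    """Mean absolute bias error, in the units of the inputs (MJ m-2 d-1)."""
    return np.mean(np.abs(np.asarray(estimated) - np.asarray(observed)))

def relative_mabe(estimated, observed):
    """Relative MABE as a percentage of the mean observed radiation."""
    return 100.0 * mabe(estimated, observed) / np.mean(observed)
```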
NASA Astrophysics Data System (ADS)
Sarkar, Debdeep; Srivastava, Kumar Vaibhav
2017-02-01
In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite-difference time-domain (FDTD) technique to calculate the envelope correlation coefficient (ECC) of an arbitrary MIMO antenna system over a wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for use with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits a significant reduction in ECC computation time compared to the FDTD-CGF frequency-domain (FDTD-CGF-FD) scheme for high frequency-resolution requirements. The proposed FDTD-CGF based schemes can be applied for accurate and fast prediction of the wideband ECC response, instead of the conventional scattering-parameter based techniques, which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from the commercial electromagnetic solver Ansys HFSS to verify the validity of the proposed approach.
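For contrast, the conventional scattering-parameter estimate that the FDTD-CGF schemes are intended to replace is the standard lossless two-port formula (a textbook expression, not the authors' method):

```python
import numpy as np

def ecc_from_s_params(s11, s12, s21, s22):
    """Envelope correlation coefficient of a two-element antenna from
    complex S-parameters; valid only for lossless antennas, one of the
    limitations that motivates the CGF-based approach."""
    num = np.abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1.0 - np.abs(s11) ** 2 - np.abs(s21) ** 2)
           * (1.0 - np.abs(s22) ** 2 - np.abs(s12) ** 2))
    return num / den
```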
A more accurate scheme for calculating Earth's skin temperature
NASA Astrophysics Data System (ADS)
Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden
2009-02-01
The theoretical framework of the vertical discretization of a ground column for calculating Earth's skin temperature is presented. The suggested discretization is derived from the equal-heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same number of layers, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme ("op(3,2,0)") can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site to 2% (or 0.9 W m-2), from 11% (or 5 W m-2) with a 5-layer scheme used in ECMWF, from 19% (or 8 W m-2) with a 5-layer scheme used in ECHAM, and from 74% (or 32 W m-2) with a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by adding more layers to the vertical discretization. Similar improvements are expected for other locations with different land types, since the numerical error is inherent in the models for all land types. The proposed scheme can easily be implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.
Thermophysical properties of paramagnetic Fe from first principles
NASA Astrophysics Data System (ADS)
Ehteshami, Hossein; Korzhavyi, Pavel A.
2017-12-01
A computationally efficient, yet general, free-energy modeling scheme is developed based on first-principles calculations. Finite-temperature disorder associated with the fast (electronic and magnetic) degrees of freedom is directly included in the electronic structure calculations, whereas the vibrational free energy is evaluated by a proposed model that uses elastic constants to calculate the average sound velocity of the quasiharmonic Debye model. The proposed scheme is tested by calculating the lattice parameter, heat capacity, and single-crystal elastic constants of α-, γ-, and δ-iron as functions of temperature in the range 1000-1800 K. The calculations accurately reproduce the well-established experimental data on thermal expansion and heat capacity of γ- and δ-iron. Electronic and magnetic excitations are shown to account for about 20% of the heat capacity for the two phases. Nonphonon contributions to thermal expansion are 12% and 10% for α- and δ-Fe and about 30% for γ-Fe. The elastic properties predicted by the model are in good agreement with those obtained in previous theoretical treatments of paramagnetic phases of iron, as well as with the bulk moduli derived from isothermal compressibility measurements [N. Tsujino et al., Earth Planet. Sci. Lett. 375, 244 (2013), 10.1016/j.epsl.2013.05.040]. Less agreement is found between theoretically calculated and experimentally derived single-crystal elastic constants of γ- and δ-iron.
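A sketch of the sound-velocity step described above, assuming an isotropic polycrystalline average so the longitudinal and transverse velocities follow from the bulk and shear moduli (the numbers are illustrative, not the paper's data):

```python
import numpy as np

def debye_average_velocity(bulk, shear, density):
    """Average sound velocity entering the quasiharmonic Debye model,
    from isotropic elastic moduli (Pa) and density (kg/m^3):
    3/v_D^3 = 1/v_l^3 + 2/v_t^3."""
    v_l = np.sqrt((bulk + 4.0 * shear / 3.0) / density)  # longitudinal
    v_t = np.sqrt(shear / density)                       # transverse
    return (3.0 / (1.0 / v_l**3 + 2.0 / v_t**3)) ** (1.0 / 3.0)

print(debye_average_velocity(160e9, 80e9, 7800.0))  # illustrative moduli
```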
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2017-11-01
A new scheme is put forward to determine the wetting temperature (Tw) by adapting the arc-length continuation algorithm to classical density functional theory (DFT), as used originally by Frink and Salinger. Its advantages can be summarized in four points: (i) the new scheme is applicable whether wetting occurs near a planar or a non-planar surface, whereas a zero-contact-angle method is considered applicable only to a perfectly flat solid surface, as demonstrated previously and in this work, and is essentially unfit for non-planar surfaces. (ii) The new scheme is free of the uncertainty that plagues the pre-wetting extrapolation method, which originates from the unattainability of an infinitely thick film in the theoretical calculation. (iii) The new scheme can be applied equally easily to extreme cases characterized by lower temperatures and/or stronger surface attraction force fields, which cannot be dealt with by the pre-wetting extrapolation method, because there the pre-wetting transition is mixed with many layering transitions and it is difficult to differentiate the various surface phase transitions. (iv) The new scheme still works when the wetting transition occurs close to the bulk critical temperature; this case cannot be managed at all by the pre-wetting extrapolation method, because near the bulk critical temperature the pre-wetting region is extremely narrow and not enough pre-wetting data are available for the extrapolation procedure.
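A minimal sketch of the arc-length continuation idea in one unknown, with the DFT free-energy equations replaced by a toy residual f(u, T) = u^2 + T whose solution branch folds at (0, 0); the pseudo-arclength constraint lets the iteration walk through that turning point, which plain stepping in T cannot do:

```python
import numpy as np

def arclength_continuation(f, u0, t0, ds=0.05, steps=60, tol=1e-10):
    """Trace the branch of f(u, T) = 0 past turning points by pairing
    the residual with an arc-length constraint and applying Newton
    iterations to the pair (u, T)."""
    eps = 1e-7
    branch = [(u0, t0)]
    du, dt = 1.0, 0.0                        # initial tangent guess
    for _ in range(steps):
        u, t = branch[-1]
        up, tp = u + ds * du, t + ds * dt    # predictor step
        for _ in range(50):                  # Newton corrector
            r1 = f(up, tp)
            r2 = (up - u) * du + (tp - t) * dt - ds
            fu = (f(up + eps, tp) - r1) / eps   # finite-difference Jacobian
            ft = (f(up, tp + eps) - r1) / eps
            det = fu * dt - ft * du
            up -= (r1 * dt - ft * r2) / det
            tp -= (fu * r2 - du * r1) / det
            if abs(r1) < tol and abs(r2) < tol:
                break
        norm = np.hypot(up - u, tp - t)
        du, dt = (up - u) / norm, (tp - t) / norm   # new unit tangent
        branch.append((up, tp))
    return branch

# Starts on the lower branch and walks through the fold at (0, 0).
path = arclength_continuation(lambda u, t: u * u + t, u0=-1.0, t0=-1.0)
```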
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.
2007-01-01
During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared the complexity of the control logic, the risk of not re-enabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculates a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and checks whether that torque would cause actuator saturation: if so, only the PD torque is used; if not, the integral torque is added. The third scheme compares the attitude and rate errors to limits and disables the integral torque if either error exceeds its limit. Based on the trade study results, the third scheme was selected; a sketch of this logic is given below. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable it and whether or not to reset the integrator once the integral torque was re-enabled. Three ways to disable the integral torque were investigated: zero the input to the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis considered the complexity of the control logic, the slew time plus settling time between each calibration maneuver step, and the ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input to the integrator without resetting it. Throughout the analysis, a high-fidelity simulation was used to test the various implementation methods.
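A minimal sketch of the selected logic for one axis, with scalar gains and placeholder limits (the flight controller is of course multi-axis, and the real gain values are not reproduced here):

```python
class ConditionalPID:
    """PID controller whose integrator input is zeroed, holding the
    integral torque constant without resetting the accumulated value,
    whenever attitude or rate error exceeds its stability limit."""

    def __init__(self, kp, ki, kd, att_limit, rate_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.att_limit, self.rate_limit = att_limit, rate_limit
        self.integral = 0.0

    def torque(self, att_err, rate_err, dt):
        if abs(att_err) <= self.att_limit and abs(rate_err) <= self.rate_limit:
            self.integral += att_err * dt   # integral action enabled
        # else: hold self.integral constant (input zeroed, no reset)
        return self.kp * att_err + self.ki * self.integral + self.kd * rate_err
```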
Monotonic Derivative Correction for Calculation of Supersonic Flows
ERIC Educational Resources Information Center
Bulat, Pavel V.; Volkov, Konstantin N.
2016-01-01
Aim of the study: This study examines numerical methods for solving problems in gas dynamics which are based on an exact or approximate solution to the problem of the breakdown of an arbitrary discontinuity (the Riemann problem). Results: A comparative analysis of finite-difference schemes for integrating the Euler equations is conducted on the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cave, Robert J., E-mail: Robert-Cave@hmc.edu; Stanton, John F., E-mail: JFStanton@gmail.com
We present a simple quasi-diabatization scheme applicable to spectroscopic studies that can be applied using any wavefunction for which one-electron properties and transition properties can be calculated. The method is based on rotation of a pair (or set) of adiabatic states to minimize the difference between the given transition property at a reference geometry of high symmetry (where the quasi-diabatic states and adiabatic states coincide) and points of lower symmetry where quasi-diabatic quantities are desired. Compared to other quasi-diabatization techniques, the method requires no special coding, facilitates direct comparison between quasi-diabatic quantities calculated using different types of wavefunctions, and is free of any selection of configurations in the definition of the quasi-diabatic states. On the other hand, the method appears to be sensitive to multi-state issues, unlike recent methods we have developed that use a configurational definition of quasi-diabatic states. Results are presented and compared with two other recently developed quasi-diabatization techniques.
Development of new flux splitting schemes. [computational fluid dynamics algorithms
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1992-01-01
Maximizing both accuracy and efficiency has been the primary objective in designing numerical algorithms for computational fluid dynamics (CFD). This is especially important for solutions of complex three-dimensional systems of Navier-Stokes equations, which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, two new flux-splitting techniques for upwind differencing are presented. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests, involving two-dimensional inviscid flow over a NACA 0012 airfoil, demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two-dimensional shock wave/boundary layer interaction.
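For reference, the heart of the AUSM idea is the pair of split Mach numbers and pressure weights combined at the cell interface; the sketch below is a generic textbook form of the splitting, not necessarily the exact variant tested in the paper:

```python
def mach_plus(m):
    """Right-running split Mach number: +0.25*(M+1)^2 subsonically."""
    return 0.25 * (m + 1.0) ** 2 if abs(m) <= 1.0 else max(m, 0.0)

def mach_minus(m):
    """Left-running split Mach number: -0.25*(M-1)^2 subsonically."""
    return -0.25 * (m - 1.0) ** 2 if abs(m) <= 1.0 else min(m, 0.0)

def pressure_plus(m):
    """Subsonic pressure weight, second-order polynomial form."""
    return 0.25 * (m + 1.0) ** 2 * (2.0 - m) if abs(m) <= 1.0 else float(m > 0.0)

def ausm_interface(m_left, m_right, p_left, p_right):
    """Interface Mach number (advected part) and pressure (acoustic part)."""
    m_half = mach_plus(m_left) + mach_minus(m_right)
    p_half = pressure_plus(m_left) * p_left + pressure_plus(-m_right) * p_right
    return m_half, p_half

print(ausm_interface(0.5, 0.3, 101325.0, 95000.0))
```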
Prediction of the Thrust Performance and the Flowfield of Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Wang, T.-S.
1990-01-01
In an effort to improve current solutions in the design and analysis of liquid propulsive engines, a computational fluid dynamics (CFD) model capable of calculating the reacting flows from the combustion chamber, through the nozzle, to the external plume was developed. The Space Shuttle Main Engine (SSME), fired at sea level, was investigated as a sample case. The CFD model, FDNS, is a pressure-based, non-staggered grid, viscous/inviscid, ideal gas/real gas, reactive code. An adaptive upwind differencing scheme is employed for the spatial discretization. The upwind scheme is based on fourth-order central differencing with fourth-order damping for smooth regions, and second-order central differencing with second-order damping for shock capturing. It is equipped with a CHMQGM equilibrium chemistry algorithm and a PARASOL finite-rate chemistry algorithm using the point-implicit method. The computed flow results and performance compare well with those of other standard codes and with engine hot-fire test data. In addition, a transient nozzle flowfield calculation was performed to demonstrate the ability of FDNS to capture the flow separation during the startup process.
NASA Astrophysics Data System (ADS)
Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng
2005-02-01
Increasing switching capacity brings considerable complexity to optical nodes. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means the node is blocking to some extent; examples are the multi-granularity switching node, which in fact uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WC). It is conceivable that these blocking nodes have great effects on the problem of routing and wavelength assignment. Some previous works studied the blocking case of partial-WC OXCs using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve the efficiency of routing in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find the path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, in which all nodes employ a partially shared WC OXC structure. In the simulation, a simple First-Fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
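The wavelength assignment used in the simulation is plain First-Fit; a sketch, assuming a wavelength-continuity constraint and per-link occupancy sets (names are ours):

```python
def first_fit(path_links, usage, num_wavelengths):
    """Return the lowest-indexed wavelength free on every link of the
    path, marking it used, or None if the request is blocked. `usage`
    maps each link to the set of wavelengths already in use on it."""
    for w in range(num_wavelengths):
        if all(w not in usage[link] for link in path_links):
            for link in path_links:
                usage[link].add(w)
            return w
    return None

usage = {("A", "B"): set(), ("B", "C"): {0}}
print(first_fit([("A", "B"), ("B", "C")], usage, num_wavelengths=4))  # -> 1
```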
Xu, Y.; Xia, J.; Miller, R.D.
2007-01-01
The need for incorporating the traction-free condition at the air-earth boundary in finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations of the air-earth boundary were undertaken using the (2,2) (finite-difference operators second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, a scheme that modifies material properties based on a transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The proposed method achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for the SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy, and 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersion analysis of simulated seismograms from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.
Ding, Pan; Gong, Xue-Qing
2016-05-01
Titanium dioxide (TiO2) is an important metal oxide that has been used in many different applications. TiO2 has also been widely employed as a model system for studying basic processes and reactions in surface chemistry and heterogeneous catalysis. In this work, we investigated the (011) surface of rutile TiO2, focusing on its reconstruction. Density functional theory calculations, aided by a genetic-algorithm-based optimization scheme, were performed to extensively sample the potential energy surfaces of reconstructed rutile TiO2 structures that obey (2 × 1) periodicity. Many stable surface configurations were located, including the global-minimum configuration proposed previously. The wide variety of surface structures determined in this work provides insight into the relationship between the atomic configuration of a surface and its stability. More importantly, several analytical schemes were proposed and tested to gauge the differences and similarities among the various surface structures, aiding the construction of the complete pathway for the reconstruction process.
Methods for optimizing the 2D reflector parameters in a pressurized water reactor
NASA Astrophysics Data System (ADS)
Clerc, Thomas
With a third of the world's reactors in operation, the Pressurized Water Reactor (PWR) is today the most widely used reactor design in the world; this technology equips all 19 EDF power plants. PWRs fall into the category of thermal reactors, because it is mainly thermal neutrons that sustain the fission reaction. Pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation, and also to slow down neutrons and reflect them back into the core. Given that neutrons sustain the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. Neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculations. This is why the core codes used in this study solve simplified equations that approximate the neutron behavior in the core within an acceptable calculation time. In particular, we focus our study on the diffusion equation and on approximate transport equations such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes an important tilt in the neutron flux at the core/reflector interface. This is why it is very important to design the reflector accurately, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized into two energy groups and if the diffusion equation is used, and it leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P1-corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones corresponding to its physical structure, so there are six control variables for the optimization algorithms. [special characters omitted]. Our computational schemes are thus able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators.
The optimization performed reduces the discrepancies between the power distribution computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not allow a proper description of its physical structure near the core/reflector interface; second, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Sánchez-Márquez, Jesús; Zorrilla, David; García, Víctor; Fernández, Manuel
2018-07-01
This work presents a new development based on the condensation scheme proposed by Chamorro and Pérez, in which new terms correcting the frozen-molecular-orbital approximation have been introduced (the improved frontier molecular orbital approximation). The changes made to the original development allow orbital relaxation effects to be taken into account, providing results equivalent to those achieved by the finite-difference approximation while offering considerable methodological advantages. Local reactivity indices based on this new development have been obtained for a sample set of molecules and compared with indices based on the frontier molecular orbital and finite-difference approximations. A new definition of the dual descriptor index based on the improved frontier molecular orbital methodology is also presented. In addition, taking advantage of the characteristics of the definitions obtained with the new condensation scheme, the local philicity descriptor is analyzed by separating the components corresponding to the frontier molecular orbital approximation and to orbital relaxation effects; the local multiphilic descriptor is analyzed in the same way. Finally, the basis-set dependence is studied, and calculations using DFT, CI and Møller-Plesset methodologies are performed to analyze the effect of different levels of electron correlation.
Equivalent ZF precoding scheme for downlink indoor MU-MIMO VLC systems
NASA Astrophysics Data System (ADS)
Fan, YangYu; Zhao, Qiong; Kang, BoChao; Deng, LiJun
2018-01-01
In indoor visible light communication (VLC) systems, the channels of the photo detectors (PDs) at one user are highly correlated, which motivates the choice of a spatial diversity model for individual users. In a spatial diversity model, the signals received by the PDs belonging to one user carry the same information and can be combined directly. Based on this, we propose an equivalent zero-forcing (ZF) precoding scheme for multiple-user multiple-input multiple-output (MU-MIMO) VLC systems, obtained by transforming an indoor MU-MIMO VLC system into an indoor multiple-user multiple-input single-output (MU-MISO) VLC system through simple processing. The power constraints of the light emitting diodes (LEDs) are also taken into account. Comprehensive computer simulations in three scenarios indicate that our scheme not only reduces the computational complexity but also guarantees the system performance. Furthermore, the proposed scheme does not require noise information when calculating the precoding weights, and places no restrictions on the numbers of APs and PDs.
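A minimal sketch of zero-forcing on the equivalent channel, assuming the correlated per-user PD channels have already been combined into one row per user; the LED constraint is represented by a single amplitude normalization, which may be cruder than the paper's constraint handling:

```python
import numpy as np

def zf_precoder(h_eq, max_amplitude=1.0):
    """Zero-forcing precoder W = H^H (H H^H)^-1 for the equivalent
    MU-MISO channel h_eq (users x LEDs), scaled so no LED drive
    amplitude can exceed max_amplitude."""
    w = h_eq.conj().T @ np.linalg.inv(h_eq @ h_eq.conj().T)
    return w * (max_amplitude / np.max(np.abs(w).sum(axis=1)))

h_eq = np.array([[0.9, 0.4, 0.1],    # illustrative 2-user, 3-LED channel
                 [0.2, 0.5, 0.8]])
w = zf_precoder(h_eq)
print(np.round(h_eq @ w, 3))         # scaled identity: interference removed
```

Note that, as the abstract says, no noise statistics enter the weight calculation; zero-forcing needs only the channel matrix.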
Code-Time Diversity for Direct Sequence Spread Spectrum Systems
Hassan, A. Y.
2014-01-01
Time diversity is achieved in direct-sequence spread spectrum by receiving different faded, delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme, called code-time diversity, is proposed for spread spectrum systems. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order of the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency-selective fading channels. The probability of error of the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
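For comparison with the MRC curves mentioned above, the closed-form BPSK bit-error rate with L-branch MRC in i.i.d. Rayleigh fading is a standard reference result (a textbook expression, used here only as a baseline):

```python
from math import comb, sqrt

def mrc_rayleigh_ber(snr_per_branch, branches):
    """BPSK bit-error rate with maximal ratio combining over `branches`
    i.i.d. Rayleigh paths at the given average SNR per branch."""
    mu = sqrt(snr_per_branch / (1.0 + snr_per_branch))
    return ((0.5 * (1.0 - mu)) ** branches
            * sum(comb(branches - 1 + k, k) * (0.5 * (1.0 + mu)) ** k
                  for k in range(branches)))

for order in (1, 2, 4):   # cf. the diversity order N*L of the proposed scheme
    print(order, mrc_rayleigh_ber(10.0, order))
```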
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The nonlinear stability of compact schemes for shock calculations is investigated. In recent years, compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problems of spurious numerical oscillation and nonlinear instability. A framework for applying nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total-variation stable (1D) or maximum-norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to the SIAM Journal on Numerical Analysis. Research continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, in preparation for a full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten
2017-04-01
A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multilayered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Several approaches are used to overcome the numerical problems that may arise in both the analytical and numerical schemes; some of these approaches are established in the seismological community, and others are developed here for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code, QSSP, is implemented to cover the complete spectrum of seismological interest. The performance of the code is demonstrated by various tests, including the curvature effect on teleseismic body and surface waves, the appearance of multiply reflected teleseismic core phases, the gravity effect on long-period surface waves and free oscillations, the simulation of near-field displacement seismograms with static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open-source software that can be used as a stand-alone FORTRAN code or applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.
NASA Astrophysics Data System (ADS)
Yamaguchi, Kizashi; Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Yamada, Satoru; Isobe, Hiroshi; Okumura, Mitsutaka
2015-01-01
First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural-orbital (NO) references from ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed for these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, the approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) were furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, and therefore the AP scheme applied to UB-CCD easily removed the remaining spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for the total spin angular momenta were examined for the AP correction of the hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded singlet-triplet (S-T) gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning first-principles calculations of J values in di- and poly-radical species. It was found that the BS (AP) methods reproduce the MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
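The approximate spin-projection estimate of J used in this line of work follows the well-known Yamaguchi formula; a minimal sketch (the energies and ⟨S²⟩ values below are illustrative placeholders, not the paper's data):

```python
def heisenberg_j_yamaguchi(e_bs, e_hs, s2_bs, s2_hs):
    """Effective exchange integral J (same units as the energies) from
    broken-symmetry (BS) and high-spin (HS) energies and <S^2> values:
        J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS)
    """
    return (e_bs - e_hs) / (s2_hs - s2_bs)

# Illustrative diradical numbers (energies in hartree, <S^2> dimensionless):
j = heisenberg_j_yamaguchi(e_bs=-150.4021, e_hs=-150.4013, s2_bs=1.02, s2_hs=2.01)
print(f"J = {j * 219474.63:.1f} cm^-1")  # hartree -> cm^-1 conversion
```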
Reducing numerical diffusion for incompressible flow calculations
NASA Technical Reports Server (NTRS)
Claus, R. W.; Neely, G. M.; Syed, S. A.
1984-01-01
A number of approaches for improving the accuracy of incompressible, steady-state flow calculations are examined. Two improved differencing schemes, Quadratic Upstream Interpolation for Convective Kinematics (QUICK) and Skew-Upwind Differencing (SUD), are applied to the convective terms in the Navier-Stokes equations and compared with results obtained using hybrid differencing. A number of test calculations illustrate that no single scheme exhibits superior performance for all flow situations. However, both SUD and QUICK are shown to be generally more accurate than hybrid differencing.
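For reference, the QUICK face interpolation on a uniform grid weights two upstream cells and one downstream cell; a minimal sketch (1D, flow in the positive direction assumed):

```python
def quick_face_value(phi_uu, phi_u, phi_d):
    """QUICK interpolation of the convected variable at a cell face
    (uniform 1D grid, flow from upstream to downstream):
        phi_f = 6/8 * phi_U + 3/8 * phi_D - 1/8 * phi_UU
    phi_uu: far-upstream cell, phi_u: upstream cell, phi_d: downstream cell.
    """
    return 0.75 * phi_u + 0.375 * phi_d - 0.125 * phi_uu

# phi = x^2 with cells at x = 0.5, 1.5, 2.5; face at x = 2 has exact value 4.
print(quick_face_value(0.25, 2.25, 6.25))  # -> 4.0, exact for quadratic data
```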
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one-transport-equation, or a two-transport-equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of simultaneously optimizing against theoretical calculations and experimental data. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that are totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only the peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited diffraction-peak information. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
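A schematic of such a combined cost function, assuming hypothetical `energy` and `predict_peaks` callables supplied by the user; the paper's crystallinity-based penalty is replaced here by a generic peak-position mismatch (equal-length sorted peak lists assumed):

```python
import numpy as np

def combined_cost(structure, energy, observed_peaks, predict_peaks, weight=0.5):
    """Weighted sum of an interatomic potential energy and an experimental
    penalty; `energy` and `predict_peaks` are hypothetical user-supplied models.
    """
    e = energy(structure)
    # Penalty: mean squared mismatch between predicted and observed XRD peak
    # positions (a stand-in for the paper's "crystallinity" measure).
    pred = predict_peaks(structure)
    penalty = np.mean((np.sort(pred) - np.sort(observed_peaks)) ** 2)
    return weight * e + (1.0 - weight) * penalty
```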
A detailed investigation of proposed gas-phase syntheses of ammonia in dense interstellar clouds
NASA Technical Reports Server (NTRS)
Herbst, Eric; Defrees, D. J.; Mclean, A. D.
1987-01-01
The initial reactions of the Herbst and Klemperer (1973) and the Dalgarno (1974) schemes (I and II, respectively) for the gas-phase synthesis of ammonia in dense interstellar clouds were investigated. The rate of the slightly endothermic reaction between N(+) and H2 to yield NH(+) and H (scheme I) under interstellar conditions was reinvestigated under thermal and nonthermal conditions based on laboratory data. It was found that the relative importance of this reaction in synthesizing ammonia is determined by how the laboratory data at low temperature are interpreted. On the other hand, the exothermic reaction between N and H3(+) to form NH2(+) + H (scheme II) was calculated to possess significant activation energy and, therefore, to have a negligible rate coefficient under interstellar conditions. Consequently, this reaction cannot take place appreciably in interstellar clouds.
Automatic Phase Calibration for RF Cavities using Beam-Loading Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.
Precise calibration of the cavity phase signals is necessary for the operation of any particle accelerator. For many systems this requires human-in-the-loop adjustments based on measurements of the beam parameters downstream. Some recent work has developed a scheme for calibrating the cavity phase using beam measurements and beam-loading; however, this scheme is still a multi-step process that requires heavy automation or a human in the loop. In this paper we analyze a new scheme that uses only RF signals reacting to beam-loading to calculate the phase of the beam relative to the cavity. This technique could be used in slow control loops to provide real-time adjustment of the cavity phase calibration without human intervention, thereby increasing the stability and reliability of the accelerator.
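A conceptual sketch of extracting the beam phase from beam-loading, assuming access to complex (I/Q) cavity probe samples just before and during beam arrival; this illustrates the general idea only, not the authors' algorithm:

```python
import numpy as np

def beam_phase_from_loading(v_no_beam, v_with_beam):
    """Estimate the beam phase relative to the cavity field from the
    beam-induced voltage (difference of complex cavity probe signals).

    v_no_beam, v_with_beam : complex I/Q samples of the cavity voltage
    immediately before and during beam loading (hypothetical inputs).
    """
    v_beam = v_with_beam - v_no_beam   # beam-induced component
    # Phase of the beam-induced voltage relative to the cavity field;
    # on crest the beam-induced voltage opposes the cavity voltage (180 deg).
    return np.degrees(np.angle(v_beam / v_no_beam))

v0 = 1.0 * np.exp(1j * 0.3)                    # cavity field before beam
vb = v0 + 0.05 * np.exp(1j * (0.3 + np.pi))    # small on-crest beam loading
print(beam_phase_from_loading(v0, vb))         # -> 180.0
```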
Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
Calculations of steady and unsteady, transonic, turbulent channel flows with a time-accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady-state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with much smaller artificial dissipation than that used in the previous steady flow solver, for both steady and unsteady channel flows. The capability to solve time-dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory channel flows.
Improvement of the 2D/1D Method in MPACT Using the Sub-Plane Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
Oak Ridge National Laboratory and the University of Michigan are jointly developing the MPACT code to be the primary neutron transport code for the Virtual Environment for Reactor Applications (VERA). To solve the transport equation, MPACT uses the 2D/1D method, which decomposes the problem into a stack of 2D planes that are then coupled with a 1D axial calculation. MPACT uses the Method of Characteristics (MOC) for the 2D transport calculations and P3 for the 1D axial calculations, then accelerates the solution using the 3D Coarse Mesh Finite Difference (CMFD) method. Increasing the number of 2D MOC planes will increase the accuracy of the calculation, but will also increase the computational burden and can cause slow convergence or instability. To prevent these problems while maintaining accuracy, the sub-plane scheme has been implemented in MPACT. This method subdivides the MOC planes into sub-planes, refining the 1D P3 and 3D CMFD calculations without increasing the number of 2D MOC planes. To test the sub-plane scheme, three of the VERA Progression Problems were selected: Problem 3, a single-assembly problem; Problem 4, a 3x3 assembly problem with control rods and Pyrex burnable poisons; and Problem 5, a quarter-core problem. These three problems demonstrated that the sub-plane scheme can accurately produce intra-plane axial flux profiles that preserve the accuracy of the fine-mesh solution. The eigenvalue differences are negligibly small, and differences in 3D power distributions are less than 0.1% for realistic axial meshes. Furthermore, the convergence behavior with the sub-plane scheme compares favorably with the conventional 2D/1D method, and the computational expense is decreased for all calculations due to the reduction in expensive MOC calculations.
Integral processing in beyond-Hartree-Fock calculations
NASA Technical Reports Server (NTRS)
Taylor, P. R.
1986-01-01
The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration-self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which -- at least for most current generation machines -- disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
Cycle of a closed gas-turbine plant with a gas-dynamic energy-separation device
NASA Astrophysics Data System (ADS)
Leontiev, A. I.; Burtsev, S. A.
2017-09-01
The efficiency of closed gas-turbine space-based plants is analyzed. The weight and size characteristics of closed gas-turbine plants are shown to be determined, in many respects, by the refrigerator-radiator parameters. A scheme for closed gas-turbine plants with a gas-dynamic temperature-stratification device is proposed, and a calculation model is developed. This model shows that the cycle efficiency decreases by 2% in comparison with that of closed gas-turbine plants operating by the traditional scheme, while the temperature at the refrigerator-radiator outlet increases by 28 K and its area decreases by 13.7%.
Application of Chimera Grid Scheme to Combustor Flowfields at all Speeds
NASA Technical Reports Server (NTRS)
Yungster, Shaye; Chen, Kuo-Huey
1997-01-01
A CFD method for solving combustor flowfields at all speeds on complex configurations is presented. The approach is based on the ALLSPD-3D code, which uses the compressible formulation of the flow equations including real-gas effects, nonequilibrium chemistry and spray combustion. To facilitate the analysis of complex geometries, the chimera grid method is utilized. To the best of our knowledge, this is the first application of the chimera scheme to reacting flows. To evaluate the effectiveness of this numerical approach, several benchmark calculations of subsonic flows are presented, including steady and unsteady flows, and bluff-body stabilized spray and premixed combustion flames.
Fluctuation of the electronic coupling in DNA: Multistate versus two-state model
NASA Astrophysics Data System (ADS)
Voityuk, Alexander A.
2007-05-01
The electronic coupling for hole transfer between guanine bases G in the DNA duplex (GT)6GTG(TG)6 is studied using a QM/MD approach. The coupling V is calculated for 10,000 snapshots within the two-state and multistate Generalized Mulliken-Hush (GMH) models. We find that the two-state scheme considerably underestimates the rate of hole transfer within the π stack. Moreover, the probability distributions computed with the two-state and multistate schemes are quite different. Large fluctuations of V², at least an order of magnitude larger than its average value, occur roughly every 1 ps.
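For reference, the two-state GMH coupling follows from adiabatic quantities; a minimal sketch assuming illustrative values of the vertical energy gap, transition dipole, and dipole difference along the charge-transfer direction (not the paper's data):

```python
import numpy as np

def gmh_coupling_two_state(delta_e, mu_tr, delta_mu):
    """Two-state Generalized Mulliken-Hush electronic coupling:
        V = |mu_tr| * dE / sqrt(dmu^2 + 4 * mu_tr^2)
    delta_e  : adiabatic energy gap (eV)
    mu_tr    : transition dipole moment along the CT direction (e*Angstrom)
    delta_mu : difference of adiabatic state dipoles (e*Angstrom)
    """
    return abs(mu_tr) * delta_e / np.sqrt(delta_mu**2 + 4.0 * mu_tr**2)

# Illustrative guanine-to-guanine hole-transfer numbers:
print(gmh_coupling_two_state(delta_e=0.40, mu_tr=0.30, delta_mu=6.8))  # ~0.018 eV
```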
Image encryption algorithm based on multiple mixed hash functions and cyclic shift
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhu, Xiaoqiang; Wu, Xiangjun; Zhang, Yingqian
2018-08-01
This paper proposes a new one-time-pad scheme for chaotic image encryption that is based on multiple mixed hash functions and a cyclic-shift function. The initial value is generated using both information from the plaintext image and chaotic sequences, which are calculated from the SHA-1 and MD5 hash algorithms. The scrambling sequences are generated by nonlinear equations and the logistic map. This paper aims to remedy the deficiencies of traditional Baptista algorithms and their improved variants. We employ the cyclic-shift function and piecewise linear chaotic maps (PWLCM), which give each shift number the characteristics of chaos, to diffuse the image. Experimental results and security analysis show that the new scheme has better security and can resist common attacks.
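A minimal sketch of the plaintext-dependent key derivation and cyclic-shift diffusion that such schemes combine, assuming 8-bit grayscale images; the mixing rule is an illustrative stand-in, not the authors' exact construction:

```python
import hashlib
import numpy as np

def keyed_logistic_shifts(img, r=3.99, burn_in=100):
    """Derive a logistic-map seed from SHA-1/MD5 of the plaintext image,
    then produce one chaotic cyclic-shift amount per image row."""
    digest = hashlib.sha1(img.tobytes()).digest() + hashlib.md5(img.tobytes()).digest()
    x = (int.from_bytes(digest[:8], "big") % 10**8) / 10**8 or 0.5  # seed in (0,1)
    for _ in range(burn_in):                 # discard transient iterations
        x = r * x * (1.0 - x)                # logistic map x_{n+1} = r x (1 - x)
    shifts = []
    for _ in range(img.shape[0]):
        x = r * x * (1.0 - x)
        shifts.append(int(x * img.shape[1]))
    return shifts

def diffuse(img):
    """Cyclically shift each row by its chaotic amount (an invertible step)."""
    out = img.copy()
    for i, s in enumerate(keyed_logistic_shifts(img)):
        out[i] = np.roll(out[i], s)
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
print(diffuse(img))
```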
NESSY: NLTE spectral synthesis code for solar and stellar atmospheres
NASA Astrophysics Data System (ADS)
Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.
2017-07-01
Context: Physics-based models of solar and stellar magnetically driven variability rely on the calculation of synthetic spectra for various surface magnetic features, as well as for quiet regions, as a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling the entire (UV-visible-IR and radio) spectrum of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiative transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum, in good agreement with the available observations.
Parallel computation of fluid-structural interactions using high resolution upwind schemes
NASA Astrophysics Data System (ADS)
Hu, Zongjun
An efficient and accurate solver is developed to simulate the nonlinear fluid-structural interactions in turbomachinery flutter flows. A new low-diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with a dual-time-stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat-plate boundary layer, a transonic converging-diverging nozzle and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade and a transonic channel. The Zha CUSP schemes prove accurate, robust and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady-state separation flow patterns and their unsteady oscillation characteristics. Leading-edge vortex shedding is the mechanism behind the unsteady characteristics of the high-incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full-scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles. The end-wall influence and the blade stability are studied and compared under different frequencies and incidence angles. This work is the first application of the Zha CUSP schemes to moving grid systems in 2D and 3D calculations, the first use of the implicit Gauss-Seidel iteration with dual time stepping for moving grid systems, and the first full-scale calculation of the NASA flutter cascade.
Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru
2012-03-28
We report a scheme for estimating the acid dissociation constant (pKa) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pKa values of variously sized molecules, ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive semiquantitative pKa values of specific chemical groups and to discuss the influence of the surroundings on the pKa values. As applications, we derived the pKa value of the side chain of an amino acid and almost reproduced the experimental value. Using our computational scheme, we showed the influence of hydrogen bonds on the pKa values in the case of tripeptides, which decreases the pKa value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pKa values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pKa values for adjacent serines in the tryptophan cage; the pKa value of the OH group of Ser13, exposed to bulk water, is 14.69, whereas that of Ser14, not exposed to bulk water, is 20.80 because of the internal hydrogen bonds.
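The underlying thermodynamic relation in such schemes connects the solution-phase deprotonation free energy to pKa; a minimal sketch with an illustrative free energy (the linear-fit parameters that the paper calibrates against reference molecules are represented here by assumed `slope` and `intercept`):

```python
import math

R_KCAL = 1.987204e-3   # gas constant, kcal/(mol K)

def pka_from_free_energy(dg_deprot_kcal, T=298.15, slope=1.0, intercept=0.0):
    """pKa from the solution-phase deprotonation free energy:
        pKa = dG / (RT ln 10)
    optionally passed through a linear calibration (slope, intercept)
    fitted to small reference molecules, as done in parameterized schemes.
    """
    pka_raw = dg_deprot_kcal / (R_KCAL * T * math.log(10.0))
    return slope * pka_raw + intercept

# Illustrative: dG = 19.1 kcal/mol gives pKa ~ 14.
print(round(pka_from_free_energy(19.1), 2))
```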
NASA Astrophysics Data System (ADS)
Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.
2016-12-01
A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in very high resolution (on the order of 10 m in spatial grid) simulations of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds can cause local heating/cooling in the atmosphere at fine spatial scales. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer often requires large computational resources compared to a plane-parallel approximation. This study incorporates a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because this scheme is advantageous for calculating a sequence of time evolution (the scene at one time step differs little from that at the previous step). The scheme is also appropriate for calculating radiation with strong absorption, such as in the infrared regions. For efficient computation, the scheme utilizes several techniques, e.g., the multigrid method for the iterative solution and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large-eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results elucidate that the horizontal divergences and convergences of the infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating in clouds. In future work, these 3D effects on radiative heating/cooling can be included in atmospheric numerical models.
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous-phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. Using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to fit the liver boundary more precisely. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. The automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed excellently with "gold-standard" manual volumetry (intra-class correlation coefficient 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time.
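A minimal sketch of this kind of pipeline using scikit-image's morphological geodesic active contour (a related, openly available variant, not the authors' implementation); the edge-stopping step, the seed mask, and the voxel spacing are illustrative assumptions:

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def liver_volume_cc(ct_volume, seed_mask, spacing_mm=(2.5, 0.7, 0.7)):
    """Segment with a geodesic active contour and return the volume in cc.

    ct_volume  : 3D float array of CT intensities
    seed_mask  : rough initial binary mask (e.g., from fast marching)
    spacing_mm : (z, y, x) voxel spacing; illustrative values
    """
    # Edge-stopping "speed" image: small near strong gradients (boundaries).
    gimage = inverse_gaussian_gradient(ct_volume)
    # Evolve the initial surface toward the liver boundary (200 iterations).
    mask = morphological_geodesic_active_contour(gimage, 200,
                                                 init_level_set=seed_mask,
                                                 smoothing=2, balloon=1)
    voxel_cc = np.prod(spacing_mm) / 1000.0   # mm^3 -> cc
    return mask.sum() * voxel_cc
```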
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khabibullin, R. A., E-mail: khabibullin@isvch.ru; Shchavruk, N. V.; Klochkov, A. N.
The dependences of the electronic-level positions and transition oscillator strengths on an applied electric field are studied for a terahertz quantum-cascade laser (THz QCL) with the resonant-phonon depopulation scheme, based on a cascade consisting of three quantum wells. The electric-field strengths for two characteristic states of the THz QCL under study are calculated: (i) "parasitic" current flow in the structure when the lasing threshold has not yet been reached; (ii) the lasing threshold is reached. Heat-transfer processes in the THz QCL under study are simulated to determine the optimum supply and cooling conditions. The conditions of thermocompression bonding of the laser ridge stripe with an n+-GaAs conductive substrate based on Au-Au are selected to produce a mechanically stronger contact with a higher thermal conductivity.
SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Zhou, Z; Wang, J
Purpose: Accurate segmentation of tumor in PET is challenging when part of the tumor is connected to normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We propose a geometrically constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion, where an accurate segmentation of one slice is used as the guidance for segmentation of the remaining slices. For a slice in which the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under the user's guidance is used to obtain an exact tumor contour. This is set as the initial contour, and the Chan-Vese algorithm is applied to segment the tumor in the next adjacent slice by adding constraints on tumor size, position and shape. This procedure is repeated until the last slice of the PET volume containing tumor. The proposed geometrically constrained Chan-Vese algorithm was implemented in Matlab, and its performance was tested on several cervical cancer patients in whom the cervix and bladder are connected with similar activity values. Positive predictive values (PPV) were calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they were connected to the bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case in which the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way of segmenting tumors.
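A minimal sketch of the slice-by-slice propagation idea using scikit-image's Chan-Vese implementation (a stand-in for the authors' Matlab code); the geometric size/shape constraint is reduced here to reusing the previous slice's mask as initialization plus a simple area check with an assumed tolerance:

```python
import numpy as np
from skimage.segmentation import chan_vese

def propagate_segmentation(volume, first_mask, area_tol=0.3):
    """Segment each PET slice, initializing Chan-Vese from the previous
    slice's contour and rejecting results whose area changes too much.
    volume     : (n_slices, H, W) float array
    first_mask : accurate binary mask for slice 0 (user-guided elsewhere)
    """
    masks = [first_mask.astype(bool)]
    for k in range(1, volume.shape[0]):
        init = masks[-1].astype(float)       # geometry carried from slice k-1
        seg = chan_vese(volume[k], init_level_set=init)
        # Crude geometrical constraint: area must stay within a tolerance.
        if abs(seg.sum() - masks[-1].sum()) > area_tol * masks[-1].sum():
            seg = masks[-1]                  # fall back to previous geometry
        masks.append(seg)
    return np.stack(masks)
```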
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaodong; Xia, Yidong; Luo, Hong
A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they only require one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.
Introduction of the Floquet-Magnus expansion in solid-state nuclear magnetic resonance spectroscopy.
Mananga, Eugène S; Charpentier, Thibault
2011-07-28
In this article, we present an alternative expansion scheme, called the Floquet-Magnus expansion (FME), used to solve a time-dependent linear differential equation, a central problem in quantum physics in general and solid-state nuclear magnetic resonance (NMR) in particular. The methods commonly used to treat theoretical problems in solid-state NMR are the average Hamiltonian theory (AHT) and the Floquet theory (FT), which have been successful for designing sophisticated pulse sequences and understanding different experiments. To the best of our knowledge, this is the first report of the FME scheme in the context of solid-state NMR, and we compare this approach with other series expansions. We present a modified FME scheme highlighting the importance of the (time-periodic) boundary conditions. This modified scheme greatly simplifies the calculation of higher-order terms and is shown to be equivalent to the Floquet theory (single- or multimode time dependence), but allows one to derive the effective Hamiltonian in the Hilbert space. Basic applications of the FME scheme are described and compared to previous treatments based on AHT, FT, and static perturbation theory. We also discuss the convergence aspects of the three schemes (AHT, FT, and FME) and present the relevant references. © 2011 American Institute of Physics.
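For intuition, the leading Magnus/average-Hamiltonian terms over one period T can be checked numerically; a small sketch with an illustrative periodically modulated two-level Hamiltonian (not an example from the article):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def H(t, T=1.0):
    """Illustrative time-periodic Hamiltonian H(t) = cos(2*pi*t/T) Sx + Sz."""
    return np.cos(2 * np.pi * t / T) * sx + sz

def magnus_first_two(T=1.0, n=2000):
    """First-order term (the average Hamiltonian) and second-order Magnus term:
        H1 = (1/T) int_0^T H(t) dt
        H2 = -(i/2T) int_0^T dt1 int_0^{t1} [H(t1), H(t2)] dt2
    evaluated by simple Riemann sums.
    """
    ts = np.linspace(0, T, n, endpoint=False)
    dt = T / n
    Hs = np.array([H(t, T) for t in ts])
    H1 = Hs.sum(axis=0) * dt / T
    cum = np.cumsum(Hs, axis=0) * dt          # approximates int_0^{t1} H(t2) dt2
    H2 = sum(h1 @ c - c @ h1 for h1, c in zip(Hs, cum)) * dt
    return H1, -1j * H2 / (2 * T)

H1, H2 = magnus_first_two()
print(np.round(H1, 6))   # -> Sz: the oscillating Sx term averages to zero
```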
NASA Astrophysics Data System (ADS)
Sun, Xiao; Dai, Qingli; Bilgen, Onur
2018-05-01
A Macro-Fiber Composite (MFC) based active serrated microflap is designed in this research for wind turbine blades. Its fatigue load reduction potential is evaluated under normal operating conditions. The force and displacement output of the MFC-based actuator are simulated using a bimorph beam model. The work done by the aerodynamic, centripetal and gravitational forces acting on the microflap was calculated to determine the required capacity of the MFC-based actuator. MFC-based actuators with a lever mechanical linkage are designed to achieve the force and displacement required to activate the microflap. A feedback control scheme is designed to control the microflap during operation. Through an aerodynamic-aeroelastic time-marching simulation with the designed control scheme, the time responses of the wind turbine blades are obtained. The fatigue analysis shows that the serrated microflap can reduce the standard deviation of the blade-root flapwise bending moment and the fatigue damage equivalent loads.
Cruikshank, Benjamin; Jacobs, Kurt
2017-07-21
von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.
An Upwind Multigrid Algorithm for Calculating Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.
1993-01-01
An algorithm is described that calculates inviscid, laminar, and turbulent flows on triangular meshes with an upwind discretization. A brief description of the base solver and the multigrid implementation is given, followed by results that consist mainly of convergence rates for inviscid and viscous flows over a NACA four-digit airfoil section. The results show that multigrid does accelerate convergence when the same relaxation parameters that yield good single-grid performance are used; however, larger gains in performance can be realized by doing less work in the relaxation scheme.
NASA Astrophysics Data System (ADS)
Aichi, M.; Tokunaga, T.
2006-12-01
In fields that have experienced both significant drawdown/land subsidence and a recovery of the groundwater potential, the temporal change of the effective stress in the clayey layers is not simple. Conducting consolidation tests on core samples is a straightforward approach to determining the pre-consolidation stress. However, especially in urban areas, the cost of boring and the limited number of available boring sites make it difficult to carry out a sufficient number of tests. Numerical simulation that reproduces the stress history can contribute to selecting boring sites and complement the results of the laboratory tests. To trace the effective stress profile in the clayey layers by numerical simulation, the discretization in the clayey layers should be fine. At the same time, the modeled domain should be large enough to capture the effect of regional groundwater extraction. Here, we developed a new scheme to reduce memory consumption based on a domain decomposition technique. A finite element model of coupled groundwater flow and land subsidence is used for the local model, and a finite difference groundwater flow model is used for the regional model. The local model is discretized into a fine mesh in the clayey layers to reproduce the temporal change of pore pressure in those layers, while the regional model is discretized into a relatively coarse mesh to reproduce the effect of regional groundwater extraction on the groundwater flow. We tested this scheme by comparing its results with those from a finely gridded model of the entire calculation domain. The difference between the results of these models was small enough, and our new scheme can be used for practical problems.
NASA Astrophysics Data System (ADS)
Yao, De-Liang; Siemens, D.; Bernard, V.; Epelbaum, E.; Gasparyan, A. M.; Gegelia, J.; Krebs, H.; Meißner, Ulf-G.
2016-05-01
We present the results of a third-order calculation of the pion-nucleon scattering amplitude in a chiral effective field theory with pions, nucleons and delta resonances as explicit degrees of freedom. We work in a manifestly Lorentz-invariant formulation of baryon chiral perturbation theory using dimensional regularization and the extended on-mass-shell renormalization scheme. In the delta resonance sector, the on-mass-shell renormalization is realized as a complex-mass scheme. By fitting the low-energy constants of the effective Lagrangian to the S- and P-partial waves, a satisfactory description of the phase shifts from the analysis of the Roy-Steiner equations is obtained. We predict the phase shifts for the D and F waves and compare them with the results of the analysis of the George Washington University group. The threshold parameters are calculated in both the delta-less and delta-full cases. Based on the determined low-energy constants, we discuss the pion-nucleon sigma term. Additionally, in order to determine the strangeness content of the nucleon, we calculate the octet baryon masses in the presence of decuplet resonances up to next-to-next-to-leading order in SU(3) baryon chiral perturbation theory. The octet baryon sigma terms are predicted as a byproduct of this calculation.
NASA Astrophysics Data System (ADS)
Liu, Kuan-Yu; Herbert, John M.
2017-10-01
Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
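A schematic of a distance-thresholded many-body expansion through two-body terms (the paper goes to four-body terms with counterpoise corrections; `monomer_energy` and `dimer_energy` are hypothetical electronic-structure calls):

```python
from itertools import combinations
import numpy as np

def mbe2_with_cutoff(coords, monomer_energy, dimer_energy, r_cut=8.0):
    """Two-body many-body expansion with a distance-based cutoff:
        E ~ sum_i E_i + sum_{i<j, r_ij < r_cut} (E_ij - E_i - E_j)
    coords : (n_frag, 3) numpy array of fragment centroids (Angstrom)
    """
    E1 = [monomer_energy(i) for i in range(len(coords))]
    E = sum(E1)
    for i, j in combinations(range(len(coords)), 2):
        if np.linalg.norm(coords[i] - coords[j]) < r_cut:
            E += dimer_energy(i, j) - E1[i] - E1[j]   # pairwise correction
    return E
```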
A Hierarchical Z-Scheme α-Fe2O3/g-C3N4 Hybrid for Enhanced Photocatalytic CO2 Reduction.
Jiang, Zhifeng; Wan, Weiming; Li, Huaming; Yuan, Shouqi; Zhao, Huijun; Wong, Po Keung
2018-03-01
The challenge in the artificial photosynthesis of fossil resources from CO2 by utilizing solar energy is to achieve stable photocatalysts with effective CO2 adsorption capacity and high charge-separation efficiency. A hierarchical direct Z-scheme system consisting of urchin-like hematite and carbon nitride provides enhanced photocatalytic activity for the reduction of CO2 to CO, yielding a CO evolution rate of 27.2 µmol g⁻¹ h⁻¹ without cocatalyst or sacrificial reagent, which is >2.2 times higher than that produced by g-C3N4 alone (10.3 µmol g⁻¹ h⁻¹). The enhanced photocatalytic activity of the Z-scheme hybrid material can be ascribed to its unique characteristics that accelerate the reduction process, including: (i) the 3D hierarchical structure of urchin-like hematite and preferable basic sites, which promote CO2 adsorption, and (ii) the unique Z-scheme feature, which efficiently promotes the separation of electron-hole pairs and enhances the reducibility of electrons in the conduction band of g-C3N4. The origin of such an obvious advantage of the hierarchical Z-scheme is not only explained based on the experimental data but also investigated by modeling CO2 adsorption and CO adsorption on three different atomic-scale surfaces via density functional theory calculation. The study creates new opportunities for hierarchical hematite and other metal-oxide-based Z-scheme systems for solar fuel generation. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computational model for fuel component supply into a combustion chamber of LRE
NASA Astrophysics Data System (ADS)
Teterev, A. V.; Mandrik, P. A.; Rudak, L. V.; Misyuchenko, N. I.
2017-12-01
A 2D-3D computational model for calculating the flow inside the jet injectors that feed fuel components to the combustion chamber of a liquid rocket engine is described. The model is based on the gas-dynamic calculation of a compressible medium. The model software provides calculations for both one- and two-component injectors. Flow simulation in two-component injectors is realized using a scheme of separate supply of "gas-gas" or "gas-liquid" fuel components. An algorithm for converting a continuous liquid medium into a "cloud" of drops is described. Application areas of the developed model are discussed, together with the results of 2D injector simulations used to obtain correction factors for the fuel-supply calculation formulas.
A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System.
Li, Xin; Wang, Jian; Liu, Chunyan
2015-09-25
This paper proposes two schemes for indoor positioning by fusing Bluetooth beacons and a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. For the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. According to pedestrians' different walking patterns, such as walking or running, this paper makes a comparative analysis of multiple step length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, which is based on a Kalman filter with map geometry information. The corrected heading can inhibit the accumulation of positioning error and improve the positioning accuracy of PDR. Moreover, this paper implements two positioning approaches integrating Bluetooth and PDR. One is a PDR-based positioning method based on map matching and position correction through Bluetooth; this method requires neither much calculation work nor high maintenance costs. The other is a fusion calculation method based on the pedestrian's moving status (direct movement or making a turn) to adaptively determine the noise parameters in an Extended Kalman Filter (EKF) system. This method works very well in eliminating various phenomena, including the "go and back" phenomenon caused by the instability of the Bluetooth-based positioning system and the "cross-wall" phenomenon due to the accumulative errors of the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve a 2-meter precision.
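A minimal sketch of the PDR dead-reckoning update with a linear step-length model (the coefficients are illustrative assumptions, not the paper's fitted values):

```python
import math

def step_length(step_freq_hz, a=0.25, b=0.75):
    """Linear step-length model L = a * f + b (meters); a, b are
    illustrative coefficients of the kind fitted per walking pattern."""
    return a * step_freq_hz + b

def pdr_update(x, y, heading_rad, step_freq_hz):
    """Advance the position by one detected step along the (corrected) heading."""
    L = step_length(step_freq_hz)
    return x + L * math.sin(heading_rad), y + L * math.cos(heading_rad)

# One step at 2 Hz heading due east (pi/2 from north):
print(pdr_update(0.0, 0.0, math.pi / 2, 2.0))   # -> (1.25, ~0.0)
```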
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almeida, Leandro G.; Physics Department, Brookhaven National Laboratory, Upton, New York 11973; Sturm, Christian
2010-09-01
Light quark masses can be determined through lattice simulations in regularization-invariant momentum-subtraction (RI/MOM) schemes. Subsequently, matching factors, computed in continuum perturbation theory, are used to convert these quark masses from a RI/MOM scheme to the MS-bar scheme. We calculate the two-loop corrections in QCD to these matching factors as well as the three-loop mass anomalous dimensions for the RI/SMOM and RI/SMOM_γμ schemes. These two schemes are characterized by a symmetric subtraction point. Providing the conversion factors in the two different schemes allows for a better understanding of the systematic uncertainties. The two-loop expansion coefficients of the matching factors for both schemes turn out to be small compared to the traditional RI/MOM schemes. For nf = 3 quark flavors they are about 0.6%-0.7% and 2%, respectively, of the leading-order result at scales of about 2 GeV. Therefore, they will allow for a significant reduction of the systematic uncertainty of light quark mass determinations obtained through this approach. The determination of these matching factors requires the computation of amputated Green's functions with insertions of quark bilinear operators. As a by-product of our calculation we also provide the corresponding results for the tensor operator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Soni, A.; Aoki, Y.
2009-07-01
We extend the Rome-Southampton regularization-independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS-bar scheme and can be used to convert results obtained in lattice calculations into the MS-bar scheme. Such a symmetric subtraction point involves nonexceptional momenta, implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared-improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operators, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.
NASA Astrophysics Data System (ADS)
Kao, C.-Y. J.; Smith, W. S.
1999-05-01
A physically based cloud parameterization package, which includes the Arakawa-Schubert (AS) scheme for subgrid-scale convective clouds and the Sundqvist (SUN) scheme for nonconvective grid-scale layered clouds (hereafter referred to as the SUNAS cloud package), is incorporated into the National Center for Atmospheric Research (NCAR) Community Climate Model, Version 2 (CCM2). The AS scheme is used for a more reasonable heating distribution due to convective clouds and their associated precipitation. The SUN scheme allows for the prognostic computation of cloud water so that the cloud optical properties are more physically determined for shortwave and longwave radiation calculations. In addition, the formation of anvil-like clouds from deep convective systems can be simulated with the SUNAS package. A 10-year simulation spanning the period from 1980 to 1989 is conducted, and the effect of the cloud package on the January climate is assessed by comparing it with various available data sets and the National Centers for Environmental Prediction/NCAR reanalysis. Strengths and deficiencies of both the SUN and AS methods are identified and discussed. The AS scheme improves some aspects of the model dynamics and precipitation, especially with respect to the Pacific North America (PNA) pattern. CCM2's tendency to produce a westward bias of the 500 mbar stationary wave (time-averaged zonal anomalies) in the PNA sector is remedied, apparently because of a less "locked-in" heating pattern in the tropics. The additional degree of freedom added by the prognostic calculation of cloud water in the SUN scheme produces interesting results in the modeled cloud and radiation fields compared with data. In general, too little cloud water forms in the tropics, while excessive cloud cover and cloud liquid water are simulated in midlatitudes. This results in a somewhat degraded simulation of the radiation budget. The overall simulated precipitation by the SUNAS package is, however, substantially improved over the original CCM2.
Research on air and missile defense task allocation based on extended contract net protocol
NASA Astrophysics Data System (ADS)
Zhang, Yunzhi; Wang, Gang
2017-10-01
Based on the background of air and missile defense distributed-element cooperative engagement, the interception task allocation problem of multiple weapon units with multiple targets under networked conditions is analyzed. Firstly, a mathematical model of task allocation is established through combat task decomposition. Secondly, an initialization assignment based on auction contracts and an adjustment scheme based on swap contracts are introduced into the task allocation. Finally, simulation of a typical scenario shows that the model can solve the task allocation problem in complex combat environments.
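As a toy illustration of the two-stage allocation (auction-contract initialization followed by swap-contract adjustment), the following sketch uses a hypothetical random utility matrix; the paper's combat-task decomposition and network protocol are not modeled:

```python
import itertools
import random

random.seed(1)
N_WEAPONS, N_TARGETS = 4, 6
# Hypothetical utility[i][j]: payoff of weapon unit i intercepting target j.
utility = [[random.random() for _ in range(N_TARGETS)] for _ in range(N_WEAPONS)]

# Stage 1: auction-contract initialization -- each target is announced and
# awarded to the highest bidder (bid = utility minus a crude load penalty).
assignment, load = {}, [0] * N_WEAPONS
for j in range(N_TARGETS):
    _, winner = max((utility[i][j] - 0.1 * load[i], i) for i in range(N_WEAPONS))
    assignment[j] = winner
    load[winner] += 1

# Stage 2: swap-contract adjustment -- exchange two awards whenever the swap
# increases total utility; repeat until no improving swap remains.
improved = True
while improved:
    improved = False
    for t1, t2 in itertools.combinations(assignment, 2):
        w1, w2 = assignment[t1], assignment[t2]
        if utility[w2][t1] + utility[w1][t2] > utility[w1][t1] + utility[w2][t2]:
            assignment[t1], assignment[t2] = w2, w1
            improved = True

print("assignment:", assignment,
      "total utility:", round(sum(utility[w][t] for t, w in assignment.items()), 3))
```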
A wireless sensor network based personnel positioning scheme in coal mines with blind areas.
Liu, Zhigao; Li, Chunwen; Wu, Danchen; Dai, Wenhan; Geng, Shaobo; Ding, Qingqing
2010-01-01
This paper proposes a novel personnel positioning scheme for a tunnel network with blind areas, which, compared with most existing schemes, offers both low cost and high precision. Based on the data models of tunnel networks, measurement networks and mobile miners, the global positioning method is divided into four steps: (1) calculate the real-time personnel location in local areas using a location engine, and send it to the upper computer through the gateway; (2) correct any localization errors resulting from the underground tunnel environmental interference; (3) determine the global three-dimensional position by coordinate transformation; (4) estimate the personnel locations in the blind areas. A prototype system constructed to verify the positioning performance shows that the proposed positioning system has good reliability, scalability, and positioning performance. In particular, the static localization error of the positioning system is less than 2.4 m in the underground tunnel environment and the moving estimation error is below 4.5 m in the corridor environment. The system was operated continuously over three months without any failures.
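Step (3), the transformation from a position measured along a tunnel segment to global 3-D coordinates, might look as follows (a minimal sketch; the segment origin, heading and dip are hypothetical survey values, and steps 1, 2 and 4 of the scheme are not modeled):

```python
import numpy as np

def local_to_global(p_local, origin, heading_deg, dip_deg):
    """Rotate an (along, across) tunnel-frame position by the segment's
    heading and dip, then translate by the segment origin."""
    along, across = p_local
    h, d = np.radians(heading_deg), np.radians(dip_deg)
    dx = along * np.cos(d) * np.cos(h) - across * np.sin(h)
    dy = along * np.cos(d) * np.sin(h) + across * np.cos(h)
    dz = along * np.sin(d)
    return np.asarray(origin, dtype=float) + np.array([dx, dy, dz])

# 12.5 m along / 0.8 m across a segment heading 30 deg and dipping -5 deg:
print(local_to_global((12.5, 0.8), (500.0, 200.0, -150.0), 30.0, -5.0))
```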
NASA Astrophysics Data System (ADS)
Omar, M. A.; Parvataneni, R.; Zhou, Y.
2010-09-01
This manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principal Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. Firstly, the research discusses the three basic processing schemes typically applied for thermography; namely, mathematical-transformation-based processing, curve-fitting processing, and direct contrast-based calculations. The proposed algorithm utilizes the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and also computes the anomalies' depth values. The Principal Component Thermography then operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally and highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, thus enabling shape and size retrieval. The results show that the proposed combined scheme is effective in processing multiple-size defects in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for a priori defect-free areas.
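The PCT stage is essentially a principal component analysis over the unfolded thermogram sub-sequence. A minimal sketch on synthetic data (the self-referencing stage and the depth inversion are not reproduced here):

```python
import numpy as np

def pct(frames, n_components=3):
    """Unfold (time x pixels), standardize each pixel history, and return
    the leading empirical orthogonal functions folded back into images."""
    n_t, ny, nx = frames.shape
    A = frames.reshape(n_t, ny * nx).astype(float)
    A -= A.mean(axis=0)
    A /= A.std(axis=0) + 1e-12
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return vt[:n_components].reshape(n_components, ny, nx)

# Synthetic sequence: decaying background plus a slower-cooling "defect".
t = np.arange(60.0)[:, None, None]
frames = np.exp(-t / 20.0) * np.ones((60, 32, 32))
frames[:, 12:18, 12:18] += 0.3 * np.exp(-t / 35.0)
eofs = pct(frames)
print(eofs.shape)          # (3, 32, 32); the defect stands out in the EOFs
```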
NASA Astrophysics Data System (ADS)
Delle Site, Luigi
2018-01-01
A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at the full quantum level. The exchange of molecules between the quantum region and the classical environment, instead, occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to the first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.
Toward computational models of magma genesis and geochemical transport in subduction zones
NASA Astrophysics Data System (ADS)
Katz, R.; Spiegelman, M.
2003-04-01
The chemistry of material erupted from subduction-related volcanoes records important information about the processes that lead to its formation at depth in the Earth. Self-consistent numerical simulations provide a useful tool for interpreting these data as they can explore the non-linear feedbacks between processes that control the generation and transport of magma. A model capable of addressing such issues should include three critical components: (1) a variable viscosity solid flow solver with smooth and accurate pressure and velocity fields, (2) a parameterization of mass transfer reactions between the solid and fluid phases and (3) a consistent fluid flow and reactive transport code. We report on progress on each of these parts. To handle variable-viscosity solid flow in the mantle wedge, we are adapting a Patankar-based FAS multigrid scheme developed by Albers (2000, J. Comp. Phys.). The pressure field in this scheme is the solution to an elliptic equation on a staggered grid. Thus we expect computed pressure fields to have smooth gradient fields suitable for porous flow calculations, unlike those of commonly used penalty-method schemes. Use of a temperature- and strain-rate-dependent mantle rheology has been shown to have important consequences for the pattern of flow and the temperature structure in the wedge. For computing thermal structure we present a novel scheme that is a hybrid of Crank-Nicolson (CN) and Semi-Lagrangian (SL) methods. We have tested the SLCN scheme on advection across a broad range of Peclet numbers and show the results. This scheme is also useful for low-diffusivity chemical transport. We also describe our parameterization of hydrous mantle melting [Katz et al., G3, 2002, in review]. This parameterization is designed to capture the melting behavior of peridotite-water systems over parameter ranges relevant to subduction. The parameterization incorporates data and intuition gained from laboratory experiments and thermodynamic calculations, yet it remains flexible and computationally efficient. Given accurate solid-flow fields, a parameterization of hydrous melting and a method for calculating thermal structure (enforcing energy conservation), the final step is to integrate these components into a consistent framework for reactive flow and chemical transport in deformable porous media. We present preliminary results for reactive flow in 2-D static and upwelling columns and discuss possible mechanical and chemical consequences of open-system reactive melting with application to arcs.
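The semi-Lagrangian half of the hybrid SLCN scheme can be sketched in one dimension: trace each grid point back along the flow and interpolate at the departure point, which stays stable even for Courant numbers above one (the Crank-Nicolson diffusion half is omitted; all parameters are illustrative):

```python
import numpy as np

def semi_lagrangian_step(T, u, dx, dt):
    """One semi-Lagrangian advection step on a periodic 1-D grid with
    linear interpolation at the departure points."""
    n = T.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)              # departure points
    i0 = np.floor(x_dep / dx).astype(int) % n
    w = x_dep / dx - np.floor(x_dep / dx)        # interpolation weight
    return (1 - w) * T[i0] + w * T[(i0 + 1) % n]

n, dx = 200, 1.0
T = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)
for _ in range(40):                              # CFL = u*dt/dx = 2.5
    T = semi_lagrangian_step(T, u=5.0, dx=dx, dt=0.5)
print("peak after 40 steps:", round(float(T.max()), 3))
```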
Power corrections in the N-jettiness subtraction scheme
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
2017-03-30
We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.
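Schematically, the subtraction works by splitting the cross section at a cut on the N-jettiness variable $\tau_N$ (a sketch of the structure only; the paper supplies the missing power-correction coefficients):

```latex
\sigma \;=\; \underbrace{\int_{\tau_N < \tau^{\rm cut}} d\sigma}_{\text{factorization theorem}}
\;+\; \underbrace{\int_{\tau_N > \tau^{\rm cut}} d\sigma}_{\text{calculation with one extra jet}},
\qquad
\int_{\tau_N < \tau^{\rm cut}} d\sigma \;=\; \sigma_{\rm fact}(\tau^{\rm cut})
\;+\; \mathcal{O}\!\left(\tau^{\rm cut}\ln^m\tau^{\rm cut}\right).
```

Supplying the leading $\mathcal{O}(\tau^{\rm cut}\ln\tau^{\rm cut})$ terms analytically allows a larger $\tau^{\rm cut}$ to be used without bias, which is the source of the efficiency gain noted above.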
NASA Astrophysics Data System (ADS)
Doi, Hideo; Okuwaki, Koji; Mochizuki, Yuji; Ozawa, Taku; Yasuoka, Kenji
2017-09-01
In dissipative particle dynamics (DPD) simulations, it is necessary to use the so-called χ parameter set that expresses the effective interactions between particles. Recently, we have developed a new scheme to evaluate the χ parameters in a non-empirical way through a series of fragment molecular orbital (FMO) calculations. As a challenging test, we have performed DPD simulations using the FMO-based χ parameters for a mixture of 1-palmitoyl-2-oleoyl phosphatidylcholine (POPC) and water. The structures of both a membrane and a vesicle were formed successfully. The calculated structural parameters of the membrane were in good agreement with experimental results.
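One common way such χ values enter a DPD run is through the Groot-Warren linear mapping between χ and the conservative repulsion parameters, shown here at reduced density ρ = 3 with $a_{ii} = 25$ (the χ values below are placeholders, not the FMO results of the paper):

```python
# Hypothetical FMO-derived chi parameters for bead pairs.
CHI = {("lipid_head", "water"): -0.3,
       ("lipid_tail", "water"): 2.8,
       ("lipid_head", "lipid_tail"): 1.5}
A_II = 25.0        # like-like repulsion at reduced density rho = 3

def a_ij(pair):
    """Unlike-pair DPD repulsion via the rho = 3 Groot-Warren relation
    chi ~ 0.286 * (a_ij - a_ii), i.e. a_ij = a_ii + chi / 0.286."""
    return A_II + CHI[pair] / 0.286

for pair in CHI:
    print(pair, "->", round(a_ij(pair), 2))
```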
A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons
NASA Astrophysics Data System (ADS)
Bao, J.; Lin, Z.; Lu, Z. X.
2018-02-01
A conservative scheme has been formulated and verified for gyrokinetic particle simulations of electromagnetic waves and instabilities in magnetized plasmas. An electron continuity equation derived from the drift kinetic equation is used to time advance the electron density perturbation by using the perturbed mechanical flow calculated from the parallel vector potential, and the parallel vector potential is solved by using the perturbed canonical flow from the perturbed distribution function. In gyrokinetic particle simulations using this new scheme, the shear Alfvén wave dispersion relation in the shearless slab and continuum damping in the sheared cylinder have been recovered. The new scheme overcomes the stringent requirement in the conventional perturbative simulation method that perpendicular grid size needs to be as small as electron collisionless skin depth even for the long wavelength Alfvén waves. The new scheme also avoids the problem in the conventional method that an unphysically large parallel electric field arises due to the inconsistency between electrostatic potential calculated from the perturbed density and vector potential calculated from the perturbed canonical flow. Finally, the gyrokinetic particle simulations of the Alfvén waves in sheared cylinder have superior numerical properties compared with the fluid simulations, which suffer from numerical difficulties associated with singular mode structures.
A Cross-Layer, Anomaly-Based IDS for WSN and MANET.
Amouri, Amar; Morgera, Salvatore D; Bencherif, Mohamed A; Manthena, Raju
2018-02-22
Intrusion detection system (IDS) design for mobile ad hoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) at every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept used in this work is that the variance of the smaller population, which represents the malicious nodes in the network, is greater than the variance of the larger population, which represents the normal nodes. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks.
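A compact sketch of the second-level logic, with synthetic CCI streams standing in for the sniffer reports (the exact AMoF definition is paraphrased here, and all numbers are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-report CCIs: a normal node (high, steady) and a malicious one (lower,
# more variable); values are synthetic placeholders.
streams = {"normal": rng.normal(0.92, 0.01, 50),
           "malicious": rng.normal(0.70, 0.08, 50)}

def amof(cci):
    """Accumulated measure of fluctuation: running sum of absolute
    deviations from the running mean (one plausible reading of AMoF)."""
    running_mean = np.cumsum(cci) / np.arange(1, cci.size + 1)
    return np.cumsum(np.abs(cci - running_mean))

for name, cci in streams.items():
    slope = np.polyfit(np.arange(cci.size), amof(cci), 1)[0]  # linear fit
    print(f"{name}: fitted AMoF slope = {slope:.3f}")
# Thresholding the fitted slope separates malicious from normal nodes.
```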
An efficient flexible-order model for 3D nonlinear water waves
NASA Astrophysics Data System (ADS)
Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.
2009-04-01
The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented, together with demonstrations of grid-independent iteration count and optimal scaling of the solution effort. Calculations are made for steep 3D nonlinear waves and a shoaling problem, which show good agreement with experimental measurements and other calculations from the literature.
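The solver structure, a Krylov iteration for the discrete Laplace problem wrapped in a preconditioner, can be sketched as follows; an incomplete-LU factorization stands in for the paper's multigrid preconditioner, which would need its own implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Discrete 2-D Laplacian as a stand-in for the transformed Laplace problem.
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(n * n)

ilu = spla.spilu(A, drop_tol=1e-4)               # preconditioner surrogate
M = spla.LinearOperator(A.shape, ilu.solve)
x, info = spla.gmres(A, b, M=M)                  # preconditioned GMRES
print("info =", info, "residual =", np.linalg.norm(b - A @ x))
```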
Dual domain material point method for multiphase flows
NASA Astrophysics Data System (ADS)
Zhang, Duan
2017-11-01
Although the particle-in-cell method was first invented in the 1960s for fluid computations, one of its later versions, the material point method, is mostly used for solid calculations. Recent development of multi-velocity formulations for multiphase flows and fluid-structure interactions requires that the Lagrangian capability of the method be combined with Eulerian calculations for fluids. Because of different numerical representations of the materials, additional numerical schemes are needed to ensure continuity of the materials. New applications of the method to compute fluid motions have revealed numerical difficulties in various versions of the method. To resolve these difficulties, the dual domain material point method is introduced and improved. Unlike other particle-based methods, the material point method uses both Lagrangian particles and an Eulerian mesh; therefore it avoids direct communication between particles. With this unique property and the Lagrangian capability of the method, it is shown that a multiscale numerical scheme can be efficiently built based on the dual domain material point method. In this talk, the theoretical foundation of the method will be introduced. Numerical examples will be shown. Work sponsored by the next generation code project of LANL.
Bond Order Conservation Strategies in Catalysis Applied to the NH3 Decomposition Reaction
Yu, Liang; Abild-Pedersen, Frank
2016-12-14
On the basis of an extensive set of density functional theory calculations, it is shown that a simple scheme provides a fundamental understanding of variations in the transition state energies and structures of reaction intermediates on transition metal surfaces across the periodic table. The scheme is built on the bond order conservation principle and requires a limited set of input data, still achieving transition state energies as a function of simple descriptors with an error smaller than those of approaches based on linear fits to a set of calculated transition state energies. Here, we have applied this approach together with linear scaling of adsorption energies to obtain the energetics of the NH3 decomposition reaction on a series of stepped fcc(211) transition metal surfaces. Moreover, this information is used to establish a microkinetic model for the formation of N2 and H2, thus providing insight into the components of the reaction that determine the activity.
CAD scheme for detection of hemorrhages and exudates in ocular fundus images
NASA Astrophysics Data System (ADS)
Hatanaka, Yuji; Nakagawa, Toshiaki; Hayashi, Yoshinori; Mizukusa, Yutaka; Fujita, Akihiro; Kakogawa, Masakatsu; Kawase, Kazuhide; Hara, Takeshi; Fujita, Hiroshi
2007-03-01
This paper describes a method for detecting hemorrhages and exudates in ocular fundus images. The detection of hemorrhages and exudates is important in order to diagnose diabetic retinopathy. Diabetic retinopathy is one of the most significant factors contributing to blindness, and early detection and treatment are important. In this study, hemorrhages and exudates were automatically detected in fundus images without using fluorescein angiograms. Subsequently, the blood vessel regions incorrectly detected as hemorrhages were eliminated by first examining the structure of the blood vessels and then evaluating the length-to-width ratio. Finally, the false positives were eliminated by checking the following features extracted from candidate images: the number of pixels, contrast, 13 features calculated from the co-occurrence matrix, two features based on gray-level difference statistics, and two features calculated from the extrema method. The sensitivity of detecting hemorrhages in the fundus images was 85% and that of detecting exudates was 77%. Our fully automated scheme could accurately detect hemorrhages and exudates.
NASA Astrophysics Data System (ADS)
Makinistian, Leonardo; Albanesi, Eduardo A.
2013-06-01
We present ab initio calculations of magnetoelectronic and transport properties of the interface of hcp cobalt (001) and the intrinsic narrow-gap semiconductor germanium selenide (GeSe). Using a norm-conserving pseudopotential scheme within DFT, we first model the interface with a supercell approach and focus on the spin-resolved densities of states and the magnetic moment (spin and orbital components) at the different atomic layers that form the device. We also report a series of cuts (perpendicular to the plane of the heterojunction) of the electronic and spin densities showing a slight magnetization of the first layers of the semiconductor. Finally, we model the device with a different scheme: using semi-infinite electrodes connected to the heterojunction. These latter calculations are based upon a nonequilibrium Green's function approach that allows us to explore the spin-resolved electronic transport under a bias voltage (spin-resolved I-V curves), revealing features of potential applicability in spintronics.
XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer
NASA Astrophysics Data System (ADS)
Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar
2017-04-01
Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. 2005 is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes respectively. A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).
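A caricature of the scheme with a mocked-up "expensive" RT call (all quantities synthetic; real retrievals use layer optical properties and a full multiple-scattering solver):

```python
import numpy as np

rng = np.random.default_rng(2)
n_layers, n_wav = 20, 5000
tau = np.exp(rng.normal(-2.0, 0.5, (n_wav, n_layers)))   # per-wavelength optical depths

def expensive_rt(profile):
    """Stand-in for a full multiple-scattering RT calculation."""
    return np.exp(-profile.sum()) * (1.0 + 0.1 * np.tanh(profile[0]))

X = np.log(tau)                                  # PCA on log optical properties
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 2                                            # EOFs retained
scores = (X - mu) @ Vt[:k].T

# Representative RT calls: mean profile plus +/- perturbations along each EOF,
# mapped to all spectral points at first order in the PC scores.
I0 = expensive_rt(np.exp(mu))
grad = np.array([(expensive_rt(np.exp(mu + Vt[i])) -
                  expensive_rt(np.exp(mu - Vt[i]))) / 2.0 for i in range(k)])
I_fast = I0 + scores @ grad

I_true = np.array([expensive_rt(p) for p in tau])
print("RT calls: %d instead of %d; max rel. error = %.1e"
      % (2 * k + 1, n_wav, np.max(np.abs(I_fast - I_true) / I_true)))
```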
Matching the quasiparton distribution in a momentum subtraction scheme
NASA Astrophysics Data System (ADS)
Stewart, Iain W.; Zhao, Yong
2018-03-01
The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the $\overline{\rm MS}$ scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the $\overline{\rm MS}$ PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum $P_z \to \infty$, unlike other schemes. This direct approach therefore has the potential to improve the accuracy for converting quasidistribution lattice calculations to collinear distributions.
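Schematically, the matching takes the form of a convolution (a sketch of the structure only, with $\mu_R$ denoting the RI/MOM subtraction scale):

```latex
q(x,\mu) \;=\; \int \frac{dy}{|y|}\;
C\!\left(\frac{x}{y},\, \frac{\mu}{P_z},\, \frac{\mu_R}{P_z}\right)\,
\tilde q(y, P_z, \mu_R)
\;+\; \mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}^2}{P_z^2},\, \frac{M^2}{P_z^2}\right),
```

where $\tilde q$ is the RI/MOM-renormalized quasi-PDF computed on the lattice at proton momentum $P_z$, $C$ is the matching coefficient determined at next-to-leading order in the paper, and the power corrections vanish as $P_z$ grows.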
Research of thermionic converter collector properties in model experiments with surface control
NASA Astrophysics Data System (ADS)
Agafonov, Valerii R.; Vizgalov, Anatolii V.; Iarygin, Valerii I.
Consideration was given to a possible scheme of phenomena on electrodes leading to changes in emission properties (EP) of a thermionic converter (TEC) collector. It was based on technology and materials typical of the TOPAZ-type reactor-converter (TRC). The element composition (EC), near-surface layer (NSL) structure, and work function (WF) of a collector made from niobium-based polycrystal alloy were studied within this scheme experimentally. The influence of any media except for the interelectrode gap (IEG) medium was excluded when investigating the effect of thermovacuum treatment (TVT) as well as the influence of carbon monoxide, hydrogen, and methane on the NSL characteristics. Experimental data and analytical estimates of the impact of fission products of the nuclear fuel on collector EP are presented. The calculation of possible TRC electrical power decrease was also carried out.
Laser-phased-array beam steering based on crystal fiber
NASA Astrophysics Data System (ADS)
Yang, Deng-cai; Zhao, Si-si; Wang, Da-yong; Wang, Zhi-yong; Zhang, Xiao-fei
2011-06-01
Laser-phased-array systems provide an elegant means for achieving inertia-free, high-resolution, rapid and random beam steering. In a laser-phased-array system, phase control is the most important factor affecting system performance. A novel scheme is proposed in this paper: beam steering is accomplished using a crystal fiber array in which the length difference between adjacent fibers is fixed. The phase difference between adjacent fibers determines the direction of the output beam. When the wavelength of the input fiber laser is tuned, the phase difference between adjacent elements changes; therefore, the laser beam direction changes and beam steering is accomplished. In this article, based on the proposed scheme, the steering angle of the laser beam is calculated and analyzed theoretically. Moreover, the far-field steering beam quality is discussed.
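A minimal numerical sketch of the steering relation: a fixed fiber length increment ΔL sets the inter-element phase $\Delta\varphi = 2\pi n_{\rm eff}\Delta L/\lambda$, and the far-field grating condition $\sin\theta = \Delta\varphi\,\lambda/(2\pi d)$ gives the beam angle (the $n_{\rm eff}$, ΔL and element pitch $d$ below are illustrative, not the paper's values):

```python
import numpy as np

def steering_angle_deg(lam, dL, d, n_eff=1.45):
    """Beam angle for a phased array fed through fibers with a fixed
    length increment dL; the phase is wrapped to [-pi, pi]."""
    dphi = 2 * np.pi * n_eff * dL / lam
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
    return np.degrees(np.arcsin(dphi * lam / (2 * np.pi * d)))

# Tuning the source wavelength sweeps the beam:
for lam in (1549e-9, 1550e-9, 1551e-9):
    print(f"{lam * 1e9:.0f} nm -> {steering_angle_deg(lam, dL=2e-3, d=8e-6):+.2f} deg")
```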
Structure-Based Predictions of Activity Cliffs
Husby, Jarmila; Bottegoni, Giovanni; Kufareva, Irina; Abagyan, Ruben; Cavalli, Andrea
2015-01-01
In drug discovery, it is generally accepted that neighboring molecules in a given descriptors' space display similar activities. However, even in regions that provide strong predictability, structurally similar molecules can occasionally display large differences in potency. In QSAR jargon, these discontinuities in the activity landscape are known as ‘activity cliffs’. In this study, we assessed the reliability of ligand docking and virtual ligand screening schemes in predicting activity cliffs. We performed our calculations on a diverse, independently collected database of cliff-forming co-crystals. Starting from ideal situations, which allowed us to establish our baseline, we progressively moved toward simulating more realistic scenarios. Ensemble- and template-docking achieved a significant level of accuracy, suggesting that, despite the well-known limitations of empirical scoring schemes, activity cliffs can be accurately predicted by advanced structure-based methods. PMID:25918827
Theoretical investigation of the laser cooling of a LiBe molecule
NASA Astrophysics Data System (ADS)
You, Yang; Yang, Chuan-Lu; Wang, Mei-Shan; Ma, Xiao-Guang; Liu, Wen-Wang
2015-09-01
An optical scheme to create the simplest heteronuclear metal ultracold LiBe molecule is proposed based on ab initio quantum chemistry calculations. The potential energy curves, dipole moments, and transition dipole moments of the $1\,^2\Sigma^+$, $2\,^2\Sigma^+$, $1\,^2\Pi$, and $2\,^2\Pi$ states are calculated using the multireference configuration interaction method and large basis sets. The analytical functions deduced from the obtained curves are used to determine the rovibrational energy levels, the Franck-Condon factors, and the Einstein coefficients of the states through solving the Schrödinger equation of nuclear motion. The spectroscopic parameters are deduced from the obtained rovibrational energy levels. The Franck-Condon factors ($f_{00}=0.998$, $f_{11}=0.986$, $f_{22}=0.920$) for the $2\,^2\Sigma^+(v=0) \leftrightarrow 1\,^2\Sigma^+(v'=0)$ transition are highly diagonally distributed, and the calculated radiative lifetime (74.87 ns) of the $2\,^2\Sigma^+$ state is found to be short enough for rapid laser cooling. The results demonstrate that LiBe could be a very promising candidate for laser cooling, and a three-cycle laser cooling scheme for the molecule has been proposed.
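A quick closure check based on the quoted Franck-Condon factors: if each spontaneous decay returns to the $v=0$ level with probability $f_{00}$, the mean number of photons scattered before leaking into $v>0$ is $1/(1-f_{00})$:

```python
f00 = 0.998
print("photons scattered per vibrational leak ~", round(1 / (1 - f00)))   # ~500
# Combined with the 74.87 ns lifetime this supports rapid cycling, provided
# the v = 1, 2 leaks are repumped as in the proposed three-cycle scheme.
```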
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of these methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. Performance comparison was established on the speedup for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup of up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
Calculation of the recirculating compressible flow downstream a sudden axisymmetric expansion
NASA Technical Reports Server (NTRS)
Vandromme, D.; Haminh, H.; Brunet, H.
1988-01-01
Significant progress has been made during the last five years to adapt conventional Navier-Stokes solvers to handle nonconservative equations. A primary type of application is the use of transport-equation turbulence models, but the extension is also possible for describing the transport of nonpassive scalars, such as in reactive media. Among others, combustion and gas dissociation phenomena are topics needing a considerable research effort. An implicit two-step scheme based on the well-known MacCormack scheme has been modified to treat compressible turbulent flows on complex geometries. Implicit treatment of nonconservative equations (in the present case, a two-equation turbulence model) opens the way to the coupled solution of thermochemical transport equations.
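The explicit two-step predictor-corrector structure that the implicit variant builds on can be sketched for linear advection (illustrative parameters; the turbulence coupling and implicit treatment are beyond this sketch):

```python
import numpy as np

def maccormack_step(u, c, dx, dt):
    """Two-step MacCormack update for u_t + c u_x = 0 on a periodic grid:
    forward-difference predictor, backward-difference corrector."""
    pred = u - c * dt / dx * (np.roll(u, -1) - u)
    corr = pred - c * dt / dx * (pred - np.roll(pred, 1))
    return 0.5 * (u + corr)

n, dx, c = 200, 1.0, 1.0
dt = 0.5 * dx / c                                  # CFL = 0.5
u = np.exp(-0.5 * ((np.arange(n) - 50.0) / 8.0) ** 2)
for _ in range(100):
    u = maccormack_step(u, c, dx, dt)
print("peak after 100 steps:", round(float(u.max()), 3))
```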
Correlated electron-nuclear dynamics with conditional wave functions.
Albareda, Guillermo; Appel, Heiko; Franco, Ignacio; Abedi, Ali; Rubio, Angel
2014-08-22
The molecular Schrödinger equation is rewritten in terms of nonunitary equations of motion for the nuclei (or electrons) that depend parametrically on the configuration of an ensemble of generally defined electronic (or nuclear) trajectories. This scheme is exact and does not rely on the tracing out of degrees of freedom. Hence, the use of trajectory-based statistical techniques can be exploited to circumvent the calculation of the computationally demanding Born-Oppenheimer potential-energy surfaces and nonadiabatic coupling elements. The concept of the potential-energy surface is restored by establishing a formal connection with the exact factorization of the full wave function. This connection is used to gain insight from a simplified form of the exact propagation scheme.
ExoCross: Spectra from molecular line lists
NASA Astrophysics Data System (ADS)
Yurchenko, Sergei N.; Al-Refaie, Ahmed; Tennyson, Jonathan
2018-03-01
ExoCross generates spectra and thermodynamic properties from molecular line lists in ExoMol, HITRAN, or several other formats. The code is parallelized and also shows a high degree of vectorization; it works with line profiles such as Doppler, Lorentzian and Voigt and supports several broadening schemes. ExoCross is also capable of working with the recently proposed method of super-lines. It supports calculations of lifetimes, cooling functions, specific heats and other properties. ExoCross converts between different formats, such as HITRAN, ExoMol and Phoenix, and simulates non-LTE spectra using a simple two-temperature approach. Different electronic, vibronic or vibrational bands can be simulated separately using an efficient filtering scheme based on the quantum numbers.
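The basic operation, turning a line list into cross sections on a wavenumber grid, can be sketched with Doppler profiles alone (hypothetical line positions and intensities; Voigt/Lorentzian profiles, partition functions and super-lines are omitted):

```python
import numpy as np

KB, AMU, C = 1.380649e-23, 1.66053906660e-27, 2.99792458e8  # SI constants

lines_nu = np.array([2000.0, 2050.0, 2100.0])   # line centers, cm^-1 (hypothetical)
lines_S = np.array([1e-20, 5e-21, 2e-20])       # intensities, cm/molecule (hypothetical)

def doppler_hwhm(nu0, T, mass_amu):
    """Doppler half width at half maximum in cm^-1."""
    return nu0 * np.sqrt(2 * np.log(2) * KB * T / (mass_amu * AMU)) / C

grid = np.linspace(1950.0, 2150.0, 4000)
xsec = np.zeros_like(grid)
for nu0, S in zip(lines_nu, lines_S):
    a = doppler_hwhm(nu0, T=296.0, mass_amu=28.0)
    # Area-normalized Gaussian in HWHM form, so each line integrates to S.
    xsec += S * np.sqrt(np.log(2) / np.pi) / a * np.exp(-np.log(2) * ((grid - nu0) / a) ** 2)

print("peak cross section: %.3e cm^2/molecule" % xsec.max())
```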
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., $H(\{\psi\})\psi = E\psi$. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
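The baseline being compared against, a plain SCF loop with linear mixing for $H(\{\psi\})\psi = E\psi$, looks like this on a toy density-dependent Hamiltonian (the FEAST-based solver itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_occ, mix = 40, 4, 0.5
H0 = rng.normal(size=(n, n)); H0 = (H0 + H0.T) / 2       # fixed part

def hamiltonian(density):
    return H0 + np.diag(0.8 * density)                   # toy non-linearity

density = np.full(n, n_occ / n)
for it in range(200):
    E, psi = np.linalg.eigh(hamiltonian(density))
    new_density = (psi[:, :n_occ] ** 2).sum(axis=1)
    if np.linalg.norm(new_density - density) < 1e-10:
        break
    density = (1 - mix) * density + mix * new_density    # SCF mixing step
print(f"converged after {it} iterations; lowest E = {E[0]:.6f}")
```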
Effect of film slicks on near-surface wind
NASA Astrophysics Data System (ADS)
Charnotskii, Mikhail; Ermakov, Stanislav; Ostrovsky, Lev; Shomina, Olga
2016-09-01
The transient effects of horizontal variation of sea-surface wave roughness due to surfactant films on near-surface turbulent wind are studied theoretically and experimentally. Here we suggest two practical schemes for calculating variations of wind velocity profiles near the water surface, the average short-wave roughness of which is varying in space and time when a film slick is present. The schemes are based on a generalized two-layer model of turbulent air flow over a rough surface and on the solution of the continuous model involving the equation for turbulent kinetic energy of the air flow. Wave tank studies of wind flow over wind waves in the presence of film slicks are described and compared with theory.
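In the spirit of the two-layer model, one can hold the outer wind fixed at a blending height and re-match a neutral logarithmic profile to the reduced roughness inside the slick (all numbers below are illustrative, not the paper's):

```python
import numpy as np

def log_wind(z, u_star, z0, kappa=0.4):
    """Neutral logarithmic wind profile over roughness length z0."""
    return u_star / kappa * np.log(z / z0)

z0_rough, z0_slick, h = 1e-4, 2e-5, 10.0         # roughness lengths (m), blend height
u_star_rough = 0.30                               # friction velocity outside slick, m/s
U_h = log_wind(h, u_star_rough, z0_rough)         # outer wind held fixed at z = h
u_star_slick = 0.4 * U_h / np.log(h / z0_slick)   # re-match the log law at h

for z in (0.5, 1.0, 2.0, 5.0, 10.0):
    du = log_wind(z, u_star_slick, z0_slick) - log_wind(z, u_star_rough, z0_rough)
    print(f"z = {z:4.1f} m: wind speed change over slick = {du:+.2f} m/s")
```

The reduced surface drag lowers the friction velocity and slightly accelerates the flow close to the surface, the kind of adjustment the paper quantifies.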
Yan, Ming; Li, Wenxue; Yang, Kangwen; Zhou, Hui; Shen, Xuling; Zhou, Qian; Ru, Qitian; Bai, Dongbi; Zeng, Heping
2012-05-01
We report on a simple scheme to precisely control carrier-envelope phase of a nonlinear-polarization-rotation mode-locked self-started Yb-fiber laser system with an average output power of ∼7 W and a pulse width of 130 fs. The offset frequency was locked to the repetition rate of ∼64.5 MHz with a relative linewidth of ∼1.4 MHz by using a self-referenced feed-forward scheme based on an acousto-optic frequency shifter. The phase noise and timing jitter were calculated to be 370 mrad and 120 as, respectively.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far field solution. The boundary improves convergence to steady state in single-grid temporal integration schemes using both regular-time-stepping and local-time-stepping. The far-field boundary may be near the trailing edge of the body which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition the solution produced is smoother in the far-field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques, where self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics/self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
NASA Astrophysics Data System (ADS)
Jin, G.
2012-12-01
Multiphase flow modeling is an important numerical tool for a better understanding of transport processes in fields including, but not limited to, petroleum reservoir engineering, remediation of groundwater contamination, and risk evaluation of greenhouse gases such as CO2 injected into deep saline reservoirs. However, accurate numerical modeling of multiphase flow still faces many challenges that arise from the inherent tight coupling and strongly non-linear nature of the governing equations and the highly heterogeneous media. The existence of countercurrent flow, caused by adverse relative mobility contrast and by gravitational and capillary forces, introduces additional numerical instability. Recently, multipoint flux approximation (MPFA) has become a subject of extensive research and has been demonstrated with great success to reduce considerable grid orientation effects compared to the conventional single-point upstream (SPU) weighting scheme, especially in higher dimensions. However, the presently available MPFA schemes are mathematically targeted at certain types of grids in two dimensions; a more general form of MPFA scheme is needed for both 2-D and 3-D problems. In this work a new upstream weighting scheme based on multipoint directional incoming fluxes is proposed, which incorporates the full permeability tensor to account for the heterogeneity of the porous media. First, the multiphase governing equations are decoupled into an elliptic pressure equation and a hyperbolic or parabolic saturation equation, depending on whether gravitational and capillary pressures are present. Next, a dual secondary grid (called the finite volume grid) is formulated from a primary grid (called the finite element grid) to create interaction regions for each grid cell over the entire simulation domain. Such a discretization must ensure the conservation of mass and maintain the continuity of the Darcy velocity across the boundaries between neighboring interaction regions. The pressure field is then implicitly calculated from the pressure equation, which in turn yields the velocity field for directional flux calculation at each grid node. The directional flux at the center of each interaction surface is also calculated by interpolation from the element nodal fluxes using shape functions. The MPFA scheme is performed by a specific linear combination of all incoming fluxes into the upstream cell, represented by either nodal fluxes or interpolated surface boundary fluxes, to produce an upwind directional-flux-weighted relative mobility at the center of the interaction region boundary. This upwind-weighted relative mobility is then used to calculate the saturations of each fluid phase explicitly. The proposed upwind weighting scheme has been implemented in a mixed finite element-finite volume (FE-FV) method, which allows for handling complex reservoir geometry with second-order accuracy in approximating the primary variables. The numerical solver has been tested with several benchmark test problems. The application of the proposed scheme to migration path analysis of CO2 injected into deep saline reservoirs in 3-D has demonstrated its ability and robustness in handling multiphase flow with adverse mobility contrast in highly heterogeneous porous media.
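For orientation, the conventional single-point upstream (SPU) baseline that the proposed multipoint scheme generalizes can be sketched in one dimension (illustrative parameters; no gravity or capillarity, scalar permeability):

```python
import numpy as np

n, dx, dt, u_t = 100, 1.0, 0.2, 1.0    # cells, spacing, time step, total Darcy flux
mu_w, mu_o = 1.0, 5.0                   # viscosities: adverse mobility contrast
S = np.zeros(n)                         # water saturation, injected at the left

def frac_flow(S):
    """Fractional flow of water with quadratic relative permeabilities."""
    lam_w, lam_o = S**2 / mu_w, (1.0 - S)**2 / mu_o
    return lam_w / (lam_w + lam_o)

for _ in range(200):
    f = u_t * frac_flow(S)                      # flux at cell centers
    flux_in = np.concatenate(([u_t], f[:-1]))   # SPU: take the upstream value
    S += dt / dx * (flux_in - f)                # explicit saturation update
    S[0] = 1.0                                  # injection boundary

print("water front near cell", int(np.argmin(np.abs(S - 0.5))))
```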
Atmospheric parameterization schemes for satellite cloud property retrieval during FIRE IFO 2
NASA Technical Reports Server (NTRS)
Titlow, James; Baum, Bryan A.
1993-01-01
Satellite cloud retrieval algorithms generally require atmospheric temperature and humidity profiles to determine such cloud properties as pressure and height. For instance, the CO2 slicing technique called the ratio method requires the calculation of theoretical upwelling radiances both at the surface and at a prescribed number (40) of atmospheric levels. This technique has been applied to data from, for example, the High Resolution Infrared Radiometer Sounder (HIRS/2, henceforth HIRS) flown aboard the NOAA series of polar orbiting satellites and the High Resolution Interferometer Sounder (HIS). In this particular study, four NOAA-11 HIRS channels in the 15-micron region are used. The ratio method may be applied to various channel combinations to estimate cloud top heights using channels in the 15-micron region. Presently, the multispectral, multiresolution (MSMR) scheme uses 4 HIRS channel combination estimates for mid- to high-level cloud pressure retrieval and Advanced Very High Resolution Radiometer (AVHRR) data for low-level (>700 mb) cloud retrieval. In order to determine theoretical upwelling radiances, atmospheric temperature and water vapor profiles must be provided, as well as profiles of other radiatively important gas absorber constituents such as CO2, O3, and CH4. The assumed temperature and humidity profiles have a large effect on transmittance and radiance profiles, which in turn are used with HIRS data to calculate cloud pressure, and thus cloud height and temperature. For large-spatial-scale satellite data analysis, atmospheric parameterization schemes for cloud retrieval algorithms are usually based on a gridded product such as that provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) or the National Meteorological Center (NMC). These global, gridded products prescribe temperature and humidity profiles for a limited number of pressure levels (up to 14) in a vertical atmospheric column. The FIRE IFO 2 experiment provides an opportunity to investigate current atmospheric profile parameterization schemes, compare satellite cloud height results using both gridded products (ECMWF) and high vertical resolution sonde data from the National Weather Service (NWS) and Cross Chain Loran Atmospheric Sounding System (CLASS), and suggest modifications in atmospheric parameterization schemes based on these results.
A Godunov-like point-centered essentially Lagrangian hydrodynamic approach
Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.; ...
2014-10-28
We present an essentially Lagrangian hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedron meshes. The scheme reduces to a purely Lagrangian approach when the flow is linear or if the mesh size is equal to zero; as a result, we use the term essentially Lagrangian for the proposed approach. The motivation for developing a hydrodynamic method for tetrahedron meshes is because tetrahedron meshes have some advantages over other mesh topologies. Notable advantages include reduced complexity in generating conformal meshes, reduced complexity in mesh reconnection, and preserving tetrahedron cells with automatic mesh refinement. A challenge, however, is tetrahedron meshes do not correctly deform with a lower order (i.e. piecewise constant) staggered-grid hydrodynamic scheme (SGH) or with a cell-centered hydrodynamic (CCH) scheme. The SGH and CCH approaches calculate the strain via the tetrahedron, which can cause artificial stiffness on large deformation problems. To resolve the stiffness problem, we adopt the point-centered hydrodynamic approach (PCH) and calculate the evolution of the flow via an integration path around the node. The PCH approach stores the conserved variables (mass, momentum, and total energy) at the node. The evolution equations for momentum and total energy are discretized using an edge-based finite element (FE) approach with linear basis functions. A multidirectional Riemann-like problem is introduced at the center of the tetrahedron to account for discontinuities in the flow such as a shock. Conservation is enforced at each tetrahedron center. The multidimensional Riemann-like problem used here is based on Lagrangian CCH work [8, 19, 37, 38, 44] and recent Lagrangian SGH work [33-35, 39, 45]. In addition, an approximate 1D Riemann problem is solved on each face of the nodal control volume to advect mass, momentum, and total energy. The 1D Riemann problem produces fluxes [18] that remove a volume error in the PCH discretization. A 2-stage Runge–Kutta method is used to evolve the solution in time. The details of the new hydrodynamic scheme are discussed; likewise, results from numerical test problems are presented.
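The flavor of the per-face Riemann solve can be shown with the linearized (acoustic) problem, which has a closed-form interface state (a sketch only; the paper's multidirectional solver and full equation of state are beyond it):

```python
def acoustic_riemann(z, uL, pL, uR, pR):
    """Interface velocity and pressure from the linearized Riemann problem
    with equal acoustic impedance z = rho*c on both sides."""
    u_star = 0.5 * (uL + uR) + (pL - pR) / (2.0 * z)
    p_star = 0.5 * (pL + pR) + 0.5 * z * (uL - uR)
    return u_star, p_star

# Example face: faster, higher-pressure state on the left.
print(acoustic_riemann(z=1.5, uL=0.2, pL=1.2, uR=0.0, pR=1.0))
```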
Lin, Hai; Zhao, Yan; Tishchenko, Oksana; Truhlar, Donald G
2006-09-01
The multiconfiguration molecular mechanics (MCMM) method is a general algorithm for generating potential energy surfaces for chemical reactions by fitting high-level electronic structure data with the help of molecular mechanical (MM) potentials. It was previously developed as an extension of standard MM to reactive systems by inclusion of multidimensional resonance interactions between MM configurations corresponding to specific valence bonding patterns, with the resonance matrix element obtained from quantum mechanical (QM) electronic structure calculations. In particular, the resonance matrix element is obtained by multidimensional interpolation employing a finite number of geometries at which electronic-structure calculations of the energy, gradient, and Hessian are carried out. In this paper, we present a strategy for combining MCMM with hybrid quantum mechanical molecular mechanical (QM/MM) methods. In the new scheme, electronic-structure information for obtaining the resonance integral is obtained by means of hybrid QM/MM calculations instead of fully QM calculations. As such, the new strategy can be applied to the studies of very large reactive systems. The new MCMM scheme is tested for two hydrogen-transfer reactions. Very encouraging convergence is obtained for rate constants including tunneling, suggesting that the new MCMM method, called QM/MM-MCMM, is a very general, stable, and efficient procedure for generating potential energy surfaces for large reactive systems. The results are found to converge well with respect to the number of Hessians. The results are also compared to calculations in which the resonance integral data are obtained by pure QM, and this illustrates the sensitivity of reaction rate calculations to the treatment of the QM-MM border. For the smaller of the two systems, comparison is also made to direct dynamics calculations in which the potential energies are computed quantum mechanically on the fly.
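The heart of MCMM is a two-state resonance between the MM bonding patterns: the potential surface is the lowest eigenvalue of a 2x2 diabatic matrix whose off-diagonal element is interpolated from QM (here QM/MM) energies, gradients, and Hessians. A toy sketch with hypothetical potentials and coupling:

```python
import numpy as np

def mcmm_energy(V11, V22, V12):
    """Lowest eigenvalue of [[V11, V12], [V12, V22]]."""
    avg, gap = 0.5 * (V11 + V22), 0.5 * (V11 - V22)
    return avg - np.sqrt(gap**2 + V12**2)

s = np.linspace(-2.0, 2.0, 9)                # toy reaction coordinate
V11, V22 = (s + 1.0) ** 2, (s - 1.0) ** 2    # reactant / product MM surfaces
V12 = 0.6 * np.exp(-s**2)                    # resonance coupling (illustrative)
print(np.round(mcmm_energy(V11, V22, V12), 3))  # smooth single-barrier profile
```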
Analysis of reaction schemes using maximum rates of constituent steps
Motagamwala, Ali Hussain; Dumesic, James A.
2016-01-01
We show that the steady-state kinetics of a chemical reaction can be analyzed analytically in terms of proposed reaction schemes composed of series of steps with stoichiometric numbers equal to unity by calculating the maximum rates of the constituent steps, $r_{\max,i}$, assuming that all of the remaining steps are quasi-equilibrated. Analytical expressions can be derived in terms of $r_{\max,i}$ to calculate degrees of rate control for each step to determine the extent to which each step controls the rate of the overall stoichiometric reaction. The values of $r_{\max,i}$ can be used to predict the rate of the overall stoichiometric reaction, making it possible to estimate the observed reaction kinetics. This approach can be used for catalytic reactions to identify transition states and adsorbed species that are important in controlling catalyst performance, such that detailed calculations using electronic structure calculations (e.g., density functional theory) can be carried out for these species, whereas more approximate methods (e.g., scaling relations) are used for the remaining species. This approach to assess the feasibility of proposed reaction schemes is exact for reaction schemes where the stoichiometric coefficients of the constituent steps are equal to unity and the most abundant adsorbed species are in quasi-equilibrium with the gas phase and can be used in an approximate manner to probe the performance of more general reaction schemes, followed by more detailed analyses using full microkinetic models to determine the surface coverages by adsorbed species and the degrees of rate control of the elementary steps. PMID:27162366
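For a series of steps with unit stoichiometric numbers, one simple combination consistent with this picture (a hedged sketch; the paper derives the exact expressions) is

```latex
r_{\mathrm{overall}} \;\approx\; \left(\sum_i \frac{1}{r_{\max,i}}\right)^{-1},
\qquad
X_{\mathrm{RC},i} \;=\; \frac{r_{\mathrm{overall}}}{r_{\max,i}},
```

so that the degrees of rate control sum to unity and the step with the smallest $r_{\max,i}$ dominates the observed kinetics.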
Boson expansion theory in the seniority scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamura, T.; Li, C.; Pedrocchi, V.G.
1985-12-01
A boson expansion formalism in the seniority scheme is presented and its relation with number-conserving quasiparticle calculations is elucidated. Accuracy and convergence are demonstrated numerically. A comparative discussion with other related approaches is given.
Approximate treatment of semicore states in GW calculations with application to Au clusters.
Xian, Jiawei; Baroni, Stefano; Umari, P
2014-03-28
We address the treatment of transition metal atoms in GW electronic-structure calculations within the plane-wave pseudo-potential formalism. The contributions of s and p semi-core electrons to the self-energy, which are essential to grant an acceptable accuracy, are dealt with using a recently proposed scheme whereby the exchange components are treated exactly at the G0W0 level, whereas a suitable approximation to the correlation components is devised. This scheme is benchmarked for small gold nano-clusters, resulting in ionization potentials, electron affinities, and density of states in very good agreement with those obtained from calculations where s and p semicore states are treated as valence orbitals, and allowing us to apply this same scheme to clusters of intermediate size, Au20 and Au32, that would be otherwise very difficult to deal with.
Quantum Monte Carlo calculations of NiO
NASA Astrophysics Data System (ADS)
Maezono, Ryo; Towler, Mike D.; Needs, Richard. J.
2008-03-01
We describe variational and diffusion quantum Monte Carlo (VMC and DMC) calculations [1] of NiO using a 1024-electron simulation cell. We have used a smooth, norm-conserving, Dirac-Fock pseudopotential [2] in our work. Our trial wave functions were of Slater-Jastrow form, containing orbitals generated in Gaussian-basis UHF periodic calculations [3]. The Jastrow factor was optimized using variance minimization with optimized cutoff lengths, following the same scheme as our previous work [4]. We apply the lattice regularized scheme [5] to evaluate non-local pseudopotentials in DMC and find that the scheme improves the smoothness of the energy-volume curve. [1] CASINO ver. 2.1 User Manual, University of Cambridge (2007). [2] J. R. Trail et al., J. Chem. Phys. 122, 014112 (2005). [3] CRYSTAL98 User's Manual, University of Torino (1998). [4] Ryo Maezono et al., Phys. Rev. Lett. 98, 025701 (2007). [5] Michele Casula, Phys. Rev. B 74, 161102(R) (2006).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Lan, E-mail: chenglanster@gmail.com; Stopkowicz, Stella, E-mail: stella.stopkowicz@kjemi.uio.no; Gauss, Jürgen, E-mail: gauss@uni-mainz.de
A perturbative approach to compute second-order spin-orbit (SO) corrections to a spin-free Dirac-Coulomb Hartree-Fock (SFDC-HF) calculation is suggested. The proposed scheme treats the difference between the DC and SFDC Hamiltonian as perturbation and exploits analytic second-derivative techniques. In addition, a cost-effective scheme for incorporating relativistic effects in high-accuracy calculations is suggested consisting of a SFDC coupled-cluster treatment augmented by perturbative SO corrections obtained at the HF level. Benchmark calculations for the hydrogen halides HX, X = F-At as well as the coinage-metal fluorides CuF, AgF, and AuF demonstrate the accuracy of the proposed perturbative treatment of SO effects on energies and electrical properties in comparison with the more rigorous full DC treatment. Furthermore, we present, as an application of our scheme, results for the electrical properties of AuF and XeAuF.
Multi-objective optimization of radiotherapy: distributed Q-learning and agent-based simulation
NASA Astrophysics Data System (ADS)
Jalalimanesh, Ammar; Haghighi, Hamidreza Shahabi; Ahmadi, Abbas; Hejazian, Hossein; Soltani, Madjid
2017-09-01
Radiotherapy (RT) is among the standard techniques for the treatment of cancerous tumours, and many cancer patients are treated in this manner. Treatment planning is the most important phase in RT and plays a key role in achieving therapy quality. As the goal of RT is to irradiate the tumour with adequately high levels of radiation while sparing neighbouring healthy tissues as much as possible, it is naturally a multi-objective problem. In this study, we propose an agent-based model of vascular tumour growth and of the effects of RT. Next, we use a multi-objective distributed Q-learning algorithm to find Pareto-optimal solutions for calculating the RT dynamic dose. We consider multiple objectives, and each group of optimizer agents attempts to optimise one of them iteratively. At the end of each iteration, agents compromise over the solutions to shape the Pareto front of the multi-objective problem. We propose a new approach by defining three schemes of treatment planning based on different combinations of our objectives, namely invasive, conservative and moderate. In the invasive scheme, we emphasize killing cancer cells and pay less attention to irradiation effects on normal cells. In the conservative scheme, we take more care of normal cells and try to destroy cancer cells in a less aggressive manner. The moderate scheme stands in between. For implementation, each of these schemes is handled by one agent in the MDQ-learning algorithm, and the Pareto-optimal solutions are discovered by the collaboration of agents. By applying this methodology, we could reach Pareto treatment plans by building different scenarios of tumour growth and RT. The proposed multi-objective optimisation algorithm generates robust solutions and finds the best treatment plan for different conditions.
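The selection step underlying such a Pareto front is dominance filtering: a plan survives if no other plan is at least as good on every objective and strictly better on one. A minimal sketch with hypothetical plan scores (tumour-cell kill to maximize, normal-tissue damage to minimize) standing in for the agents' learned solutions:

```python
# Pareto-dominance filter over candidate treatment plans scored on two
# objectives (hypothetical values): tumour-cell kill (maximize) and
# normal-tissue damage (minimize).
plans = {
    "invasive":     (0.95, 0.40),
    "moderate":     (0.85, 0.22),
    "conservative": (0.70, 0.10),
    "poor":         (0.65, 0.30),   # dominated by "conservative"
}

def dominates(a, b):
    """True if plan a is at least as good as b on both objectives and
    strictly better on at least one (higher kill, lower damage)."""
    (ka, da), (kb, db) = a, b
    return ka >= kb and da <= db and (ka > kb or da < db)

pareto = [name for name, score in plans.items()
          if not any(dominates(other, score)
                     for o, other in plans.items() if o != name)]
print("Pareto-optimal plans:", pareto)   # invasive, moderate, conservative
```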
NASA Astrophysics Data System (ADS)
Yankovskii, A. P.
2017-12-01
Based on a stepwise algorithm involving central finite differences for the approximation in time, a mathematical model is developed for elastoplastic deformation of cross-reinforced plates with isotropically hardening materials of components of the composition. The model allows obtaining the solution of elastoplastic problems at discrete points in time by an explicit scheme. The initial boundary value problem of the dynamic behavior of flexible plates reinforced in their own plane is formulated in the von Kármán approximation with allowance for their weakened resistance to the transverse shear. With a common approach, the resolving equations corresponding to two variants of the Timoshenko theory are obtained. An explicit "cross" scheme for numerical integration of the posed initial boundary value problem has been constructed. The scheme is consistent with the incremental algorithm used for simulating the elastoplastic behavior of a reinforced medium. Calculations of the dynamic behavior have been performed for elastoplastic cylindrical bending of differently reinforced fiberglass rectangular elongated plates. It is shown that the reinforcement structure significantly affects their elastoplastic dynamic behavior. It has been found that the classical theory of plates is as a rule unacceptable for carrying out the required calculations (except for very thin plates), and the first version of the Timoshenko theory yields reasonable results only in cases of relatively thin constructions reinforced by lowmodulus fibers. Proceeding from the results of the work, it is recommended to use the second variant of the Timoshenko theory (as a more accurate one) for calculations of the elastoplastic behavior of reinforced plates.
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that they can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
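The LS idea can be illustrated on an ordinary staggered-grid first-derivative stencil: choose the coefficients to minimize the dispersion error over a target wavenumber band rather than matching Taylor terms at k = 0. A hedged sketch of that band optimization (a plain 1D stencil, not the rotated TTI operator of the paper):

```python
import numpy as np

# Least-squares optimized coefficients for a staggered-grid first
# derivative: minimize dispersion error over the band kh <= k_max
# instead of matching Taylor terms at k = 0.
M, k_max = 4, 2.0                               # half stencil length, band edge
kappa = np.linspace(1e-3, k_max, 400)           # sampled nondimensional wavenumbers
A = 2.0 * np.sin(np.outer(kappa, np.arange(M) + 0.5))
c, *_ = np.linalg.lstsq(A, kappa, rcond=None)   # minimize |A c - kappa|^2

err = np.max(np.abs(A @ c - kappa))             # worst dispersion error in band
print("LS coefficients:", np.round(c, 6), " max band error:", err)
```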
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
NASA Astrophysics Data System (ADS)
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needed to be determined by using a combined Fourier analysis and gradient-based search algorithm.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2015-10-01
The Thompson cloud microphysics scheme is a sophisticated cloud microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. Compared to the earlier microphysics schemes, the Thompson scheme incorporates a large number of improvements. Thus, we have optimized the speed of this important part of WRF. Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the Thompson microphysics scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques. New optimizations for an updated Thompson scheme are discussed in this paper. The optimizations improved the performance of the original Thompson code on the Xeon Phi 7120P by a factor of 1.8x. Furthermore, the same optimizations improved the performance of the Thompson scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 1.8x compared to the original Thompson code.
Friberg, Leif; Gasparini, Alessandro; Carrero, Juan Jesus
2018-04-01
Information about renal function is important for drug safety studies using administrative health databases. However, serum creatinine values are seldom available in these registries. Our aim was to develop and test a simple scheme for stratification of renal function without access to laboratory test results. Our scheme uses registry data about diagnoses, contacts, dialysis and drug use. We validated the scheme in the Stockholm CREAtinine Measurements (SCREAM) project using information on approximately 1.1 million individuals residing in the Stockholm County who underwent calibrated creatinine testing during 2006-11, linked with data about health care contacts and filled drug prescriptions. Estimated glomerular filtration rate (eGFR) was calculated with the CKD-EPI formula and used as the gold standard for validation of the scheme. When the scheme classified patients as having eGFR <30 mL/min/1.73 m², it was correct in 93.5% of cases. The specificity of the scheme was close to 100% in all age groups. The sensitivity was poor, ranging from 68.2% in the youngest age quartile, down to 10.7% in the oldest age quartile. Age-related decline in renal function makes a large proportion of elderly patients fall into the chronic kidney disease (CKD) range without receiving CKD diagnoses, as this often is seen as part of normal ageing. In the absence of renal function tests, our scheme may be of value for identifying patients with moderate and severe CKD on the basis of diagnostic and prescription data for use in studies of large healthcare databases.
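The validation quantities quoted above follow from a standard 2 × 2 comparison of the registry-based flag against the eGFR gold standard. A small sketch with made-up (flag, eGFR) pairs; the 93.5% figure in the abstract corresponds to a positive-predictive-value-type measure:

```python
# 2x2 validation of a registry-based CKD flag against the eGFR gold
# standard, with made-up (flag, egfr) pairs.
patients = [(True, 24.0), (True, 41.0), (False, 27.0),
            (False, 88.0), (True, 19.0), (False, 55.0)]

truth = [egfr < 30.0 for _, egfr in patients]   # gold standard: eGFR < 30
flag = [f for f, _ in patients]                 # scheme's classification

tp = sum(f and t for f, t in zip(flag, truth))
fp = sum(f and not t for f, t in zip(flag, truth))
fn = sum(not f and t for f, t in zip(flag, truth))
tn = sum(not f and not t for f, t in zip(flag, truth))

print("sensitivity:", tp / (tp + fn))   # true CKD cases that get flagged
print("specificity:", tn / (tn + fp))   # non-CKD correctly left unflagged
print("PPV:", tp / (tp + fp))           # flagged patients who truly have CKD
```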
Hagiwara, Yohsuke; Tateno, Masaru
2010-10-20
We review the recent research on the functional mechanisms of biological macromolecules using theoretical methodologies coupled to ab initio quantum mechanical (QM) treatments of reaction centers in proteins and nucleic acids. Since in most cases such biological molecules are large, the computational costs of performing ab initio calculations for the entire structures are prohibitive. Instead, simulations that are jointed with molecular mechanics (MM) calculations are crucial to evaluate the long-range electrostatic interactions, which significantly affect the electronic structures of biological macromolecules. Thus, we focus our attention on the methodologies/schemes and applications of jointed QM/MM calculations, and discuss the critical issues to be elucidated in biological macromolecular systems. © 2010 IOP Publishing Ltd
Outgassing rate analysis of a velvet cathode and a carbon fiber cathode
NASA Astrophysics Data System (ADS)
Li, An-Kun; Fan, Yu-Wei; Qian, Bao-Liang; Zhang, Zi-cheng; Xun, Tao
2017-11-01
In this paper, the outgassing rates of a carbon fiber array cathode and a polymer velvet cathode are tested and discussed. Two different methods of measurement are used in the experiments. In one scheme, a method based on dynamic equilibrium of pressure is used: the cathode works in the repetitive mode in a vacuum diode, a dynamic equilibrium pressure is reached when the outgassing capacity in the chamber equals the pumping capacity of the pump, and the outgassing rate can be determined from this equilibrium pressure. In the other scheme, a method based on static equilibrium of pressure is used: the cathode works in a closed vacuum chamber (a hard tube), and the outgassing rate can be calculated from the difference between the pressures in the chamber before and after cathode operation. The outgassing rate is analyzed from the real-time pressure evolution data, which are measured using a magnetron gauge in both schemes. The outgassing rates of the carbon fiber array cathode and the velvet cathode are 7.3 ± 0.4 neutrals/electron and 85 ± 5 neutrals/electron in the first scheme, and 9 ± 0.5 neutrals/electron and 98 ± 7 neutrals/electron in the second scheme. The results of both schemes show that the outgassing rate of the carbon fiber array cathode is an order of magnitude smaller than that of the velvet cathode under similar conditions, which indicates that this carbon fiber array cathode is a promising replacement for the velvet cathode in the application of magnetically insulated transmission line oscillators and relativistic magnetrons.
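For the static (closed-chamber) method, the neutrals-per-electron figure follows from the ideal gas law applied to the pressure rise and from the total emitted charge. A sketch with hypothetical numbers standing in for the experiment's values:

```python
# Static (closed-chamber) estimate: neutrals per emitted electron from
# the pressure rise in a sealed volume. All values are hypothetical
# stand-ins, not the experiment's numbers.
k_B = 1.380649e-23            # Boltzmann constant, J/K
e = 1.602176634e-19           # elementary charge, C

V, T = 5.0e-3, 300.0          # chamber volume (m^3), gas temperature (K)
dP = 1.3                      # pressure rise after the shot series, Pa
I, tau, shots = 5.0e3, 50e-9, 100   # current (A), pulse width (s), pulses

n_neutrals = dP * V / (k_B * T)     # ideal-gas count of released molecules
n_electrons = I * tau * shots / e   # total emitted electrons
print(f"{n_neutrals / n_electrons:.1f} neutrals/electron")
```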
Wang, Jiajun; Li, Xiaoting; You, Ya; Xintong, Yang; Wang, Ying; Li, Qunxiang
2018-06-21
Mimicking natural photosynthesis in green plants, artificial Z-scheme photocatalysis enables more efficient utilization of solar energy for photocatalytic water splitting. Most currently designed g-C3N4-based Z-scheme heterojunctions are based on metal-containing semiconductor photocatalysts, so exploiting metal-free photocatalysts for Z-scheme water splitting is of great interest. Herein, we propose two metal-free C3N/g-C3N4 heterojunctions with a C3N monolayer covering a g-C3N4 sheet (monolayer or bilayer) and systematically explore their electronic structures, charge distributions and photocatalytic properties by performing extensive hybrid density functional calculations. We clearly reveal that the relatively strong built-in electric fields around their respective interface regions, caused by charge transfer from the C3N monolayer to the g-C3N4 monolayer or bilayer, result in band bending and render the transfer of photogenerated carriers in these two heterojunctions Z-scheme-like instead of following the type-II pathway. Moreover, the photogenerated electrons and holes in these two C3N/g-C3N4 heterojunctions not only can be efficiently separated, but also have strong redox abilities for water oxidation and reduction. Compared with the isolated g-C3N4 sheets, light absorption in the visible to near-infrared region is significantly enhanced in these proposed heterojunctions. These theoretical findings suggest that the proposed metal-free C3N/g-C3N4 heterojunctions are promising direct Z-scheme photocatalysts for solar water splitting. © 2018 IOP Publishing Ltd.
Patched-grid calculations with the Euler and Navier-Stokes equations: Theory and applications
NASA Technical Reports Server (NTRS)
Rai, M. M.
1986-01-01
A patched-grid approach is one in which the flow region of interest is divided into subregions which are then discretized independently using existing grid generators. The equations of motion are integrated in each subregion in conjunction with patch-boundary schemes which allow proper information transfer across interfaces that separate subregions. The patched-grid approach greatly simplifies the treatment of complex geometries and also the addition of grid points to selected regions of the flow. A conservative patch-boundary condition that can be used with explicit, implicit factored, and implicit relaxation schemes is described. Several example calculations that demonstrate the capabilities of the patched-grid scheme are also included.
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the “Database as Service” model, where confidentiality and privacy are important issues for the client. In fact, existing encryption approaches are vulnerable to statistical attack because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued Order-Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to multiple different values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values to allow comparison operations to be applied directly on encrypted data. Using a calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to the database server as much as possible, thereby making better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems as it is designed to work with existing indexing structures. It is robust against statistical attacks and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
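The core idea, one plaintext mapping to many order-consistent ciphertexts, can be sketched with secret, strictly increasing bucket boundaries: each value encrypts to a fresh random point inside its bucket, so equal plaintexts rarely produce equal ciphertexts while order comparisons still work. A toy illustration (not the published construction, and not secure for real use):

```python
import random

# Toy one-to-many order-preserving encryption in the spirit of MV-OPES
# (illustration only; not the published construction, not secure).
SECRET_SEED = 42          # stands in for the scheme's secret key

def boundary(v, stretch=1000):
    """Secret, strictly increasing bucket boundary for plaintext v."""
    rnd = random.Random(SECRET_SEED * 1_000_003 + v)
    return v * stretch + rnd.randint(0, stretch // 4)

def encrypt(v):
    lo, hi = boundary(v), boundary(v + 1)
    return random.randint(lo, hi - 1)   # fresh randomness on every call

a1, a2, b = encrypt(5), encrypt(5), encrypt(9)
assert a1 < b and a2 < b     # order survives encryption
print(a1, a2, b)             # a1 != a2 with high probability
```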
NASA Astrophysics Data System (ADS)
Zhou, C.; Zhang, X.; Gong, S.; Wang, Y.; Xue, M.
2016-01-01
A comprehensive aerosol-cloud-precipitation interaction (ACI) scheme has been developed under a China Meteorological Administration (CMA) chemical weather modeling system, GRAPES/CUACE (Global/Regional Assimilation and PrEdiction System, CMA Unified Atmospheric Chemistry Environment). Calculated by a sectional aerosol activation scheme based on size and mass information from CUACE and the thermodynamic and moisture states from the weather model GRAPES at each time step, the cloud condensation nuclei (CCN) are interactively fed online into a two-moment cloud scheme (WRF Double-Moment 6-class scheme - WDM6) and a convective parameterization to drive cloud physics and precipitation formation processes. The modeling system has been applied to study the ACI for January 2013, when several persistent haze-fog events and eight precipitation events occurred.
The results show that aerosols interacting with the WDM6 in GRAPES/CUACE obviously increase the total cloud water, liquid water content, and cloud droplet number concentrations, while decreasing the mean diameters of cloud droplets, with varying magnitudes of the changes in each case and region. These interactive microphysical properties of clouds improve the calculation of their collection growth rates in some regions and hence the precipitation rate and distributions in the model, showing 24 to 48 % enhancements of the threat score for 6 h precipitation in almost all regions. The aerosols interacting with the WDM6 also reduce the regional mean bias of temperature by 3 °C during certain precipitation events, but the monthly mean bias is only reduced by about 0.3 °C.
Bi, Huan-Yu; Wu, Xing-Gang; Ma, Yang; ...
2015-06-26
The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R(e+e−) and the Higgs partial width Γ(H → bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {β_i}-terms in the pQCD expansion are taken into account. In addition, we show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
NASA Technical Reports Server (NTRS)
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
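The TVD-MUSCL ingredient that suppresses spurious oscillations is a limited slope reconstruction. A minimal sketch of minmod-limited interface states on a periodic 1D grid (illustrative of the idea, not the benchmark solver itself):

```python
import numpy as np

# Minmod-limited MUSCL reconstruction on a periodic 1D grid, the core
# TVD ingredient.
def minmod(a, b):
    """Zero at extrema or opposing slopes, else the smaller-magnitude slope."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def interface_states(u):
    du_minus = u - np.roll(u, 1)          # backward difference
    du_plus = np.roll(u, -1) - u          # forward difference
    slope = minmod(du_minus, du_plus)     # TVD-limited cell slope
    uL = u + 0.5 * slope                  # left state at face i+1/2
    uR = np.roll(u - 0.5 * slope, -1)     # right state at face i+1/2
    return uL, uR

u = np.array([0.0, 0.0, 1.0, 1.0, 0.2, 0.0])
uL, uR = interface_states(u)
print(np.round(uL, 3), np.round(uR, 3))
```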
About the preliminary design of the suspension spring and shock absorber
NASA Astrophysics Data System (ADS)
Preda, I.
2016-08-01
The aim of this paper is to give some recommendations for the design of the main spring and shock absorber of motor vehicle suspensions. Starting from a 2DoF model, the suspension parameters are transferred to the real vehicle on the basis of planar schemes of the linkage. For the coil spring, the equations that must be fulfilled simultaneously make it possible to calculate three geometrical parameters. The indications presented for the shock absorber make it possible to obtain the damping coefficients in the compression and rebound strokes and to calculate the power dissipated during the vehicle's oscillatory movement.
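A back-of-envelope version of the 2DoF starting point: picking a sprung-mass ride frequency fixes the spring rate, and a target damping ratio fixes a mean damping coefficient that is then split between compression and rebound. All numbers below, including the asymmetric split ratio, are hypothetical illustrations rather than the paper's recommendations:

```python
import math

# Quarter-car back-of-envelope: a target ride frequency fixes the spring
# rate, a target damping ratio fixes the mean damper coefficient.
m_sprung = 350.0      # sprung mass per corner, kg (hypothetical)
f_ride = 1.2          # target sprung natural frequency, Hz
zeta = 0.3            # target damping ratio

k = m_sprung * (2.0 * math.pi * f_ride) ** 2       # spring rate, N/m
c_mean = 2.0 * zeta * math.sqrt(k * m_sprung)      # mean damping, N*s/m
c_rebound, c_comp = 1.5 * c_mean, 0.5 * c_mean     # assumed asymmetric split

print(f"k = {k:.0f} N/m, c = {c_comp:.0f}/{c_rebound:.0f} N*s/m (comp/reb)")
```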
Brueckner-AMD Study of Light Nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Kiyoshi; Yamamoto, Yuhei; Togashi, Tomoaki
2011-06-28
We applied the Brueckner theory to Antisymmetrized Molecular Dynamics (AMD) and examined the reliability of the AMD calculations based on realistic nuclear interactions. In this method, the Bethe-Goldstone equation in the Brueckner theory is solved for every nucleon pair described by wave packets of AMD, and the G-matrix is calculated with single-particle orbits in AMD self-consistently. We apply this framework not only to α-nuclei but also to N ≠ Z nuclei with A ≈ 10. It is confirmed that these results provide a reasonable description of cluster structures and energy-level schemes comparable with the experimental ones in light nuclei.
The coupled three-dimensional wave packet approach to reactive scattering
NASA Astrophysics Data System (ADS)
Marković, Nikola; Billing, Gert D.
1994-01-01
A recently developed scheme for time-dependent reactive scattering calculations using three-dimensional wave packets is applied to the D+H2 system. The present method is an extension of a previously published semiclassical formulation of the scattering problem and is based on the use of hyperspherical coordinates. The convergence requirements are investigated by detailed calculations for total angular momentum J equal to zero and the general applicability of the method is demonstrated by solving the J=1 problem. The inclusion of the geometric phase is also discussed and its effect on the reaction probability is demonstrated.
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-03-24
Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified the issue that the resultant polynomial value is either storage-intensive or infeasible to compute when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in terms of computation, communication, and storage. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
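The symmetric-polynomial primitive such schemes build on can be shown in a few lines: a trusted node samples a symmetric bivariate polynomial f(x, y) = f(y, x) over a prime field, each member u receives the univariate share f(u, y), and any pair (u, v) computes the same key f(u, v) without further interaction. A sketch under these assumptions (illustrative parameters, not EKM's full protocol):

```python
import random

# Blundo-style symmetric-polynomial key establishment, the primitive
# behind polynomial key schemes (illustrative parameters only).
q, t = 2**61 - 1, 3                 # prime modulus, polynomial degree
rng = random.SystemRandom()

# Group head samples symmetric coefficients: A[i][j] == A[j][i].
A = [[0] * (t + 1) for _ in range(t + 1)]
for i in range(t + 1):
    for j in range(i, t + 1):
        A[i][j] = A[j][i] = rng.randrange(q)

def share(u):
    """Coefficients (in y) of the univariate share f(u, y) mod q."""
    return [sum(A[i][j] * pow(u, i, q) for i in range(t + 1)) % q
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the share at the peer's id: f(u, v) == f(v, u)."""
    return sum(c * pow(peer_id, j, q) for j, c in enumerate(my_share)) % q

u, v = 17, 42
assert pairwise_key(share(u), v) == pairwise_key(share(v), u)
```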
Contribution of the Recent AUSM Schemes to the Overflow Code: Implementation and Validation
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Buning, Pieter G.
2000-01-01
We shall present results of a recent collaborative effort between the authors to implement the numerical flux scheme AUSM+ and its new developments into a widely used NASA code, OVERFLOW. This paper is intended to give a thorough and systematic documentation of the solutions of default test cases using the AUSM+ scheme. Hence we address various aspects of the numerical solutions, such as accuracy, convergence rate, and effects of turbulence models, over a variety of geometries and speed regimes. We briefly describe the numerical schemes employed in the calculations, including the capability of solving for low-speed flows and multiphase flows by employing the concept of a numerical speed of sound. As a bonus, this low-Mach-number formulation also enhances convergence to steady solutions for flows even at transonic speed. Calculations for complex 3D turbulent flows were performed with several turbulence models and the results display excellent agreement with measured data.
Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt
NASA Astrophysics Data System (ADS)
Wong, M. L. Dennis; Goh, Antionette W.-T.; Chua, Hong Siang
Secure authentication of digital medical image content provides great value to the e-Health community and medical insurance industries. Fragile watermarking has been proposed to provide the mechanism to authenticate digital medical images securely. Transform-domain watermarking is typically slower than spatial-domain watermarking owing to the overhead in the calculation of coefficients. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show authentication capability. Possible improvements on the proposed scheme are also presented before conclusions.
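A Pascal-type kernel is attractive for fragile watermarking because a signed binomial matrix is its own inverse, so the same integer transform both embeds and recovers exactly. A sketch of that property (the published DPT may differ in sign and normalization conventions):

```python
import numpy as np
from math import comb

# Signed lower-triangular binomial (Pascal-type) kernel: it is its own
# inverse, so one integer transform both embeds and recovers exactly.
def pascal_matrix(n):
    return np.array([[(-1) ** j * comb(i, j) for j in range(n)]
                     for i in range(n)], dtype=np.int64)

P = pascal_matrix(8)
assert np.array_equal(P @ P, np.eye(8, dtype=np.int64))   # involution

pixels = np.arange(8, dtype=np.int64)      # one row of image samples
coeffs = P @ pixels                        # forward transform
assert np.array_equal(P @ coeffs, pixels)  # exact integer round trip
```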
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results of calculations of the interaction of shock waves with a low-density bubble and of the problem of gas flow under gravity forces are presented. The algorithm combines the ability to capture shock waves with high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations, it is necessary to choose a sufficiently small grid (with a small cell size), which has the drawback of substantially increasing the computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR). That is, grid refinement is performed only in the areas of interest of the structure where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, the execution of the application on the resulting sequence of nested, decreasing grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in areas of high interest and, on the other hand, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are realized for calculations on graphics processors using CUDA technology [1].
A non-axisymmetric linearized supersonic wave drag analysis: Mathematical theory
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1996-01-01
A mathematical theory is developed to perform the calculations necessary to determine the wave drag for slender bodies of non-circular cross section. The derivations presented in this report are based on extensions to supersonic linearized small-perturbation theory. A numerical scheme is presented utilizing Fourier decomposition to compute the pressure coefficient on and about a slender body of arbitrary cross section.
Ion Channel Conductance Measurements on a Silicon-Based Platform
2006-01-01
...calculated using the molecular dynamics code GROMACS. Reasonable agreement is obtained in the simulated versus measured conductance over the range of... Measurements of the lipid giga-seal characteristics have been performed, including AC conductance measurements and statistical analysis in order to... Dynamics kernel self-consistently coupled to Poisson equations using a P3M force field scheme and the GROMACS description of protein structure and...
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations and further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented, which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
TTLEM - an implicit-explicit (IMEX) scheme for modelling landscape evolution in MATLAB
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Schwanghart, Wolfgang
2016-04-01
Landscape evolution models (LEMs) are essential to unravel interdependent earth surface processes. They have proven very useful for bridging several temporal and spatial scales and have been successfully used to integrate existing empirical datasets. There is a growing consensus that landscapes evolve at least as much in the horizontal as in the vertical direction, calling for an efficient implementation of dynamic drainage networks. Here we present a spatially explicit LEM, which is based on the object-oriented function library TopoToolbox 2 (Schwanghart and Scherler, 2014). Similar to other LEMs, rivers are considered to be the main drivers of the simulated landscape evolution, as they transmit pulses of tectonic perturbation and set the base level of the surrounding hillslopes. Highly performant graph algorithms facilitate efficient updates of the flow directions to account for planform changes in the river network, and the calculation of flow-related terrain attributes. We implement the model using an implicit-explicit (IMEX) scheme, i.e., different integrators are used for different terms in the diffusion-incision equation: while linear diffusion is solved using an implicit scheme, we calculate incision explicitly. Contrary to previously published LEMs, however, river incision is solved using a total-volume method which is total variation diminishing, in order to prevent numerical diffusion when solving the stream power law (Campforts and Govers, 2015). We show that the use of this updated numerical scheme alters both landscape topography and catchment-wide erosion rates at a geological time scale. Finally, the availability of a graphical user interface facilitates user interaction, making the tool very useful both for research and didactical purposes. References: Campforts, B., Govers, G., 2015. Keeping the edge: A numerical method that avoids knickpoint smearing when solving the stream power law. J. Geophys. Res. Earth Surf. 120, 1189-1205. doi:10.1002/2014JF003376. Schwanghart, W., Scherler, D., 2014. TopoToolbox 2 - MATLAB-based software for topographic analysis and modeling in Earth surface sciences. Earth Surf. Dyn. 2, 1-7. doi:10.5194/esurf-2-1-2014.
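The IMEX split can be sketched on a 1D river profile: diffusion advances through a linear implicit solve (unconditionally stable), while stream-power incision is applied explicitly at each step. A minimal Python illustration with made-up parameters and simple upwind slopes, rather than TTLEM's MATLAB implementation and total-volume TVD incision:

```python
import numpy as np

# One IMEX step on a 1D river profile: explicit stream-power incision,
# implicit linear diffusion. All parameters are illustrative.
n, dx, dt = 50, 100.0, 500.0
D, K, m = 0.05, 1e-5, 0.5
z = np.linspace(0.0, 200.0, n)                 # elevation profile (m)
area = np.arange(n, 0, -1) * 1e5               # drainage area proxy (m^2)

# Explicit incision E = K A^m S with simple upwind (downstream) slopes.
S = np.maximum(0.0, (z - np.roll(z, 1)) / dx)
S[0] = 0.0                                     # outlet is base level
z -= dt * K * area**m * S

# Implicit diffusion: solve (I - dt D L) z_new = z with a 1D Laplacian.
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
M_ = np.eye(n) - dt * D * L
M_[0, :], M_[0, 0] = 0.0, 1.0                  # Dirichlet outlet
z = np.linalg.solve(M_, z)
```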
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
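For orientation, the baseline such schemes refine is the standard two-point inverse-cube extrapolation of the correlation energy; the scheme above differs in reassigning the hierarchical numbers, but the algebra is the same. A sketch with hypothetical (d, t) energies:

```python
# Standard two-point inverse-cube extrapolation of the correlation
# energy. The input energies are hypothetical.
def cbs_two_point(e_lo, x_lo, e_hi, x_hi):
    """Solve E_X = E_inf + A / X^3 for E_inf from two basis levels."""
    return (x_hi**3 * e_hi - x_lo**3 * e_lo) / (x_hi**3 - x_lo**3)

e_d, e_t = -0.3512, -0.3718      # made-up MP2 correlation energies (Eh)
print(f"CBS estimate: {cbs_two_point(e_d, 2, e_t, 3):.4f} Eh")
```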
Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil
2013-08-15
Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in a water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a speedup of several fold. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating its complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.
Treecode-based generalized Born method
NASA Astrophysics Data System (ADS)
Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao
2011-02-01
We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW-surface-based Poisson solvation energy with an average relative error of less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way of performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
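The pair sum the treecode accelerates is the standard GB energy with Still's effective distance f_GB built from the effective Born radii. A direct O(N²) reference evaluation, in reduced units with made-up charges and radii (tGB replaces the double loop with a tree for distant pairs):

```python
import numpy as np

# Direct O(N^2) reference for the GB pair sum with Still's effective
# distance f_GB. Charges, radii, and units are made up for illustration.
def gb_energy(q, xyz, R, eps_in=1.0, eps_out=80.0):
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    E = 0.0
    for i in range(len(q)):
        for j in range(len(q)):          # i == j gives the self term
            r2 = float(np.sum((xyz[i] - xyz[j]) ** 2))
            f_gb = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
            E += pref * q[i] * q[j] / f_gb
    return E

q = np.array([0.4, -0.4, 0.2])
xyz = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
R = np.array([1.5, 1.7, 1.4])            # effective Born radii
print(gb_energy(q, xyz, R))
```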
Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton
2013-08-15
Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
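The one-step MM-to-QM correction described here is a Zwanzig free energy perturbation: average the exponentiated QM-MM energy gap over MM-sampled configurations. A sketch with synthetic gaps standing in for the ONETEP and MM evaluations:

```python
import numpy as np

# One-step MM -> QM Zwanzig perturbation: the free energy correction is
# the exponential average of the QM-MM energy gap over MM snapshots.
# Gaps below are synthetic stand-ins for actual QM/MM evaluations.
kT = 0.593                                   # kcal/mol near 298 K
rng = np.random.default_rng(0)
dE = rng.normal(1.2, 0.8, size=200)          # E_QM - E_MM per snapshot

dA = -kT * np.log(np.mean(np.exp(-dE / kT)))
print(f"MM->QM correction: {dA:.2f} kcal/mol")
# In practice the MM-to-QM phase-space overlap must be checked before
# trusting this one-step estimate.
```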
An efficient method for quantum transport simulations in the time domain
NASA Astrophysics Data System (ADS)
Wang, Y.; Yam, C.-Y.; Frauenheim, Th.; Chen, G. H.; Niehaus, T. A.
2011-11-01
An approximate method based on adiabatic time dependent density functional theory (TDDFT) is presented, that allows for the description of the electron dynamics in nanoscale junctions under arbitrary time dependent external potentials. The density matrix of the device region is propagated according to the Liouville-von Neumann equation. The semi-infinite leads give rise to dissipative terms in the equation of motion which are calculated from first principles in the wide band limit. In contrast to earlier ab initio implementations of this formalism, the Hamiltonian is here approximated in the spirit of the density functional based tight-binding (DFTB) method. Results are presented for two prototypical molecular devices and compared to full TDDFT calculations. The temporal profile of the current traces is qualitatively well captured by the DFTB scheme. Steady state currents show considerable variations, both in comparison of approximate and full TDDFT, but also among TDDFT calculations with different basis sets.
Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Marinica, Mihai-Cosmin
2018-03-01
The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when this is possible, the requirement of descriptors for each mechanism under study prevents the implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method is demonstrated on a complex transformation of C15 interstitial defects in iron and on double-kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120 000 atoms.
Automatic extraction of blocks from 3D point clouds of fractured rock
NASA Astrophysics Data System (ADS)
Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen
2017-12-01
This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved RANSAC shape detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved flood-fill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
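Step 1 rests on RANSAC plane detection: repeatedly fit a plane to three random points and keep the model with the most inliers. A bare-bones sketch on synthetic points (the paper's improved variant adds constraints this toy version omits):

```python
import numpy as np

# Bare-bones RANSAC plane detection on synthetic points.
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
            rng.normal(0.0, 0.02, 300)]        # noisy z ~ 0 plane

best_count, best_normal = 0, None
for _ in range(200):
    p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
    nvec = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(nvec)
    if norm < 1e-9:
        continue                               # degenerate sample
    nvec /= norm
    dist = np.abs((pts - p0) @ nvec)           # point-to-plane distances
    count = int(np.sum(dist < 0.05))
    if count > best_count:
        best_count, best_normal = count, nvec

print(best_count, "inliers; plane normal ~", np.round(best_normal, 3))
```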
Cost accounting and public reimbursement schemes in Spanish hospitals.
Sánchez-Martínez, Fernando; Abellán-Perpiñán, José-María; Martínez-Pérez, Jorge-Eduardo; Puig-Junoy, Jaume
2006-08-01
The objective of this paper is to provide a description and analysis of the main costing and pricing (reimbursement) systems employed by hospitals in the Spanish National Health System (NHS). Hospital cost calculations are mostly based on a full costing approach, as opposed to other systems like direct costing or activity-based costing. Regional and hospital differences arise in the method used to allocate indirect costs to cost centres and also in the approach used to measure resource consumption. Costs are typically calculated by disaggregating expenditure and allocating it to cost centres, and then to patients and DRGs. Regarding public reimbursement systems, the impression is that unit costs are ignored, except for certain types of high-technology processes and treatments.
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
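The detection logic reduces to comparing measured outputs against a nominal model and thresholding the residuals; isolation then follows from the estimated fault parameters. A minimal sketch of the thresholding step, with illustrative channel names and made-up numbers (not the T700 sensor suite or its thresholds):

```python
import numpy as np

# Residual-threshold step of model-based sensor fault detection:
# compare measured outputs with the nominal model's expected outputs.
# Channel names and all numbers are illustrative.
sensors = ["speed_1", "speed_2", "temperature", "torque", "fuel_flow"]
expected = np.array([0.98, 1.01, 0.97, 1.00, 0.99])   # model outputs
measured = np.array([0.97, 1.02, 1.25, 1.00, 0.98])   # plant outputs
threshold = 3.0 * np.array([0.01, 0.01, 0.02, 0.01, 0.01])  # ~3 sigma

residuals = np.abs(measured - expected)
for name, r, thr in zip(sensors, residuals, threshold):
    if r > thr:
        print(f"fault flagged on {name}: residual {r:.2f} > {thr:.2f}")
```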
Analysis of Three-dimension Viscous Flow in the Model Axial Compressor Stage K1002L
NASA Astrophysics Data System (ADS)
Tribunskaia, K.; Kozhukhov, Y. V.
2017-08-01
The main investigation subject considered in this paper is the axial compressor model stage K1002L. Three simulation models were designed: Scheme 1 - inlet stage model consisting of IGV (Inlet Guide Vane), rotor and diffuser; Scheme 2 - two-stage model: IGV, first-stage rotor, first-stage diffuser, second-stage rotor, EGV (Exit Guide Vane); Scheme 3 - full-round model: IGV, rotor, diffuser. Numerical investigation of the model stage was carried out for four circumferential velocities at the outer diameter (Uout = 125, 160, 180, 210 m/s) within the flow coefficient range ϕ = 0.4-0.6. The computational domain was created with ANSYS CFX Workbench. From the simulation results, aerodynamic characteristic curves of adiabatic efficiency were constructed, and the adiabatic head coefficients calculated for total parameters were compared with data from the full-scale test performed at the Central Boiler and Turbine Institution (CBTI); thus, the calculated data were verified. Moreover, the following studies were conducted: comparison of the aerodynamic characteristics of Schemes 1 and 2, and comparison of the sector and full-round models. The analysis and conclusions are supplemented by a gas-dynamic method calculation for axial compressor stages.
Sodt, Alexander J; Mei, Ye; König, Gerhard; Tao, Peng; Steele, Ryan P; Brooks, Bernard R; Shao, Yihan
2015-03-05
In combined quantum mechanical/molecular mechanical (QM/MM) free energy calculations, it is often advantageous to have a frozen geometry for the quantum mechanical (QM) region. For such multiple-environment single-system (MESS) cases, two schemes are proposed here for estimating the polarization energy: the first scheme, termed MESS-E, involves a Roothaan step extrapolation of the self-consistent field (SCF) energy; whereas the other scheme, termed MESS-H, employs a Newton-Raphson correction using an approximate inverse electronic Hessian of the QM region (which is constructed only once). Both schemes are extremely efficient, because the expensive Fock updates and SCF iterations in standard QM/MM calculations are completely avoided at each configuration. They produce reasonably accurate QM/MM polarization energies: MESS-E can predict the polarization energy within 0.25 kcal/mol in terms of the mean signed error for two of our test cases, solvated methanol and solvated β-alanine, using the M06-2X or ωB97X-D functionals; MESS-H can reproduce the polarization energy within 0.2 kcal/mol for these two cases and for the oxyluciferin-luciferase complex, if the approximate inverse electronic Hessians are constructed with sufficient accuracy.
Simulating Freshwater Availability under Future Climate Conditions
NASA Astrophysics Data System (ADS)
Zhao, F.; Zeng, N.; Motesharrei, S.; Gustafson, K. C.; Rivas, J.; Miralles-Wilhelm, F.; Kalnay, E.
2013-12-01
Freshwater availability is a key factor for regional development. Precipitation, evaporation, river inflow and outflow are the major terms in the estimate of regional water supply. In this study, we aim to obtain a realistic estimate for these variables from 1901 to 2100. First, we calculated the ensemble mean precipitation using the 2011-2100 RCP4.5 output (re-sampled to half-degree spatial resolution) from 16 General Circulation Models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The projections are then combined with the half-degree 1901-2010 Climate Research Unit (CRU) TS3.2 dataset after bias correction. We then used the combined data to drive our UMD Earth System Model (ESM) in order to generate evaporation and runoff. We also developed a River-Routing Scheme based on the idea of Taikan Oki, as part of the ESM. It is capable of calculating river inflow and outflow for any region, driven by the gridded runoff output. River direction and slope information from the Global Dominant River Tracing (DRT) dataset are included in our scheme. The effects of reservoirs/dams are parameterized based on a few simple factors such as soil moisture, population density and geographic region. Simulated river flow is validated against river gauge measurements for the world's major rivers. We have applied our river flow calculation to two data-rich watersheds in the United States: the Phoenix AMA watershed and the Potomac River Basin. The results are used in our SImple WAter model (SIWA) to explore water management options.
NASA Astrophysics Data System (ADS)
Fradi, Aniss
The ability to allocate the active power (MW) loading on transmission lines and transformers is the basis of the "flow based" transmission allocation system developed by the North American Electric Reliability Council. In such a system, the active power flows must be allocated to each line or transformer in proportion to the active power being transmitted by each transaction imposed on the system. Currently, this is accomplished through the use of the linear Power Transfer Distribution Factors (PTDFs). Unfortunately, no linear allocation models exist for other energy transmission quantities, such as MW and MVAR losses, MVAR and MVA flows, etc. Early allocation schemes were developed to allocate MW losses due to transactions to branches in a transmission system; however, they exhibited diminished accuracy, since most of them were based on linear power flow modeling of the transmission system. This thesis presents a new methodology to calculate Energy Transaction Allocation factors (ETA factors, or eta factors), using the well-known process of integration of a first derivative function, as well as consistent and well-established mathematical and AC power flow models. The factors give a highly accurate allocation of any non-linear system quantity to transactions placed on the transmission system. The thesis also extends the new ETA factor calculation procedure to formulate a new economic dispatch scheme in which multiple sets of generators are economically dispatched to meet their corresponding load and their share of the losses.
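For background, the linear PTDFs that the thesis contrasts against follow from the DC power-flow approximation; a minimal sketch for an invented 3-bus network (not the ETA-factor method itself):

```python
import numpy as np

# Lines: (from_bus, to_bus, reactance) for a 3-bus example; bus 0 is the slack
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n_bus = 3

# Build the bus susceptance matrix B and reduce it by removing the slack row/col
B = np.zeros((n_bus, n_bus))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b
B_red = B[1:, 1:]

def ptdf(source, sink):
    """Line-flow sensitivities to a 1 MW transaction from source to sink."""
    inj = np.zeros(n_bus)
    inj[source] += 1.0
    inj[sink] -= 1.0
    theta = np.zeros(n_bus)
    theta[1:] = np.linalg.solve(B_red, inj[1:])   # DC power-flow bus angles
    return [(theta[i] - theta[j]) / x for i, j, x in lines]

print(ptdf(source=1, sink=2))  # share of each line in a bus-1 -> bus-2 transfer
```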
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on convergence, robustness, and efficiency of the following subpixel level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guess for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel level accuracy in cases with small or large translation, deformation, or rotation.
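The ORB extraction and matching stage can be prototyped with OpenCV; a minimal sketch of a feature-based initial guess (file names are hypothetical, and the paper's FG-GMM outlier rejection is replaced here by a simple ratio test):

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
cur = cv2.imread("deformed.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(cur, None)

# Hamming-distance kNN matching with a Lowe-style ratio test for outlier removal
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]

# Displacements of matched features provide initial guesses near each POI
src = np.float32([kp1[m.queryIdx].pt for m in good])
dst = np.float32([kp2[m.trainIdx].pt for m in good])
disp = dst - src
print("mean initial displacement:", disp.mean(axis=0))
```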
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
NASA Astrophysics Data System (ADS)
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using the designed recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and the fast Fourier transform, this scheme requires no multiplications and only half as many addition operations, by using the DFT together with a combination of the Rife algorithm and Fourier coefficient interpolation. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements of intermediate frequency, narrowband signals have a measurement mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and reduced calculation time.
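The Rife-style refinement locates the coarse DFT peak and corrects it with the ratio of the two largest bins; a minimal sketch of that interpolation (a standard textbook form, not necessarily the exact variant implemented in the paper):

```python
import numpy as np

def rife_frequency(x, fs):
    """Estimate a single tone's frequency from the DFT peak and its larger neighbor."""
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X[1:-1])) + 1            # coarse peak bin, away from edges
    # Pick the larger neighbor and interpolate the fractional bin offset
    side = 1 if X[k + 1] > X[k - 1] else -1
    r = X[k + side] / X[k]
    delta = side * r / (1.0 + r)
    return (k + delta) * fs / len(x)

fs, n, f0 = 10e6, 4096, 1.2345e6
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)
print(rife_frequency(x, fs))                   # close to 1.2345 MHz
```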
Communication: Charge-population based dispersion interactions for molecules and materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stöhr, Martin; Department Chemie, Technische Universität München, Lichtenbergstr. 4, D-85748 Garching; Michelitsch, Georg S.
2016-04-21
We introduce a system-independent method to derive effective atomic C{sub 6} coefficients and polarizabilities in molecules and materials purely from charge population analysis. This enables the use of dispersion-correction schemes in electronic structure calculations without recourse to electron-density partitioning schemes and expands their applicability to semi-empirical methods and tight-binding Hamiltonians. We show that the accuracy of our method is on par with established electron-density partitioning based approaches in describing intermolecular C{sub 6} coefficients as well as dispersion energies of weakly bound molecular dimers, organic crystals, and supramolecular complexes. We showcase the utility of our approach by incorporating the recently developed many-body dispersion method [Tkatchenko et al., Phys. Rev. Lett. 108, 236402 (2012)] into the semi-empirical density functional tight-binding method and propose the latter as a viable technique to study hybrid organic-inorganic interfaces.
nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning
Alharbi, Bader A.; Alshammari, Thamir H.; Felton, Nathan L.; Zhurkin, Victor B.; Cui, Feng
2014-01-01
Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for threading calculation and provides multiple layout formats. The nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. PMID:25220945
Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S
2014-01-01
In biomechanics and biorobotics, muscles are often associated with reduced movement control effort and simplified control compared to technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It is, however, open how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated allowing the comparison of models of biological and technical control systems. Exemplarily applied to (bio-)mechanical models of hopping, the method reveals that the required information for generating hopping with a muscle driven by a simple reflex control scheme is only I=32 bits versus I=660 bits with a DC motor and a proportional differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
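The information cost function rests on the Shannon entropy of quantized sensor signals; a minimal sketch of that calculation (bin count and example signal are illustrative):

```python
import numpy as np

def signal_entropy_bits(signal, n_bins=64):
    """Shannon entropy (bits) of a sensor signal after amplitude quantization."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# Example: a noisy muscle-length-like sensor trace during hopping
t = np.linspace(0.0, 2.0, 2000)
sensor = (np.sin(2 * np.pi * 2.0 * t)
          + 0.05 * np.random.default_rng(0).normal(size=t.size))

# The total control-information cost sums such entropies over all sensor channels
print(f"I = {signal_entropy_bits(sensor):.1f} bits")
```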
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
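The combined-innovation update can be viewed as a standard Kalman correction applied to a weighted sum of candidate innovations; a minimal constant-velocity sketch (the association weights here are invented, whereas the paper computes them from an image likelihood):

```python
import numpy as np

# Constant-velocity model for one particle: state = [x, y, vx, vy], dt = 1
F = np.block([[np.eye(2), np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])        # position is observed
Q, R = 0.01 * np.eye(4), 0.5 * np.eye(2)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update_combined(x, P, measurements, weights):
    """Probabilistic data association: one update with a weighted innovation."""
    z_pred = H @ x
    combined = sum(w * (z - z_pred) for z, w in zip(measurements, weights))
    S = H @ P @ H.T + R                              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return x + K @ combined, (np.eye(4) - K @ H) @ P

x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
x, P = predict(x, P)
x, P = update_combined(x, P,
                       measurements=[np.array([1.1, 0.4]), np.array([0.8, 0.7])],
                       weights=[0.7, 0.3])           # illustrative weights
print(x)
```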
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
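The 2D area routines in such geometrical toolboxes typically reduce to the shoelace formula; an illustrative Python version (VOFTools itself is supplied in FORTRAN and C):

```python
def polygon_area(vertices):
    """Signed area of a simple 2D polygon via the shoelace formula.

    vertices: list of (x, y) in order; positive for counter-clockwise ordering.
    """
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return 0.5 * area

# Unit square, counter-clockwise
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```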
A Minimal Three-Dimensional Tropical Cyclone Model.
NASA Astrophysics Data System (ADS)
Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang
2001-07-01
A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated in σ coordinates on an f or β plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate-strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes. The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.
Calculations of turbulent separated flows
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T. H.
1993-01-01
A numerical study of incompressible turbulent separated flows is carried out by using two-equation turbulence models of the K-epsilon type. On the basis of realizability analysis, a new formulation of the eddy-viscosity is proposed which ensures the positiveness of turbulent normal stresses - a realizability condition that most existing two-equation turbulence models are unable to satisfy. The present model is applied to calculate two backward-facing step flows. Calculations with the standard K-epsilon model and a recently developed RNG-based K-epsilon model are also made for comparison. The calculations are performed with a finite-volume method. A second-order accurate differencing scheme and sufficiently fine grids are used to ensure the numerical accuracy of solutions. The calculated results are compared with the experimental data for both mean and turbulent quantities. The comparison shows that the present model performs quite well for separated flows.
Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.
Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna
2018-04-28
We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In the MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with more realistic carrier gas. In this study, not only a primarily velocity scaling thermostat is considered, but also Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods are covered. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference to the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Kohlhase, Naja; Näppi, Janne J.; Hironaka, Toru; Ota, Junko; Ishida, Takayuki; Regge, Daniele; Yoshida, Hiroyuki
2016-03-01
Accurate electronic cleansing (EC) for CT colonography (CTC) enables the visualization of the entire colonic surface without residual materials. In this study, we evaluated the accuracy of a novel multi-material electronic cleansing (MUMA-EC) scheme for non-cathartic ultra-low-dose dual-energy CTC (DE-CTC). The MUMA-EC performs a water-iodine material decomposition of the DE-CTC images and calculates virtual monochromatic images at multiple energies, after which a random forest classifier is used to label the images into the regions of lumen air, soft tissue, fecal tagging, and two types of partial-volume boundaries based on image-based features. After the labeling, materials other than soft tissue are subtracted from the CTC images. For pilot evaluation, 384 volumes of interest (VOIs), which represented sources of subtraction artifacts observed in current EC schemes, were sampled from 32 ultra-low-dose DE-CTC scans. The voxels in the VOIs were labeled manually to serve as a reference standard. The metric for EC accuracy was the mean overlap ratio between the labels of the reference standard and the labels generated by the MUMA-EC, a dual-energy EC (DE-EC), and a single-energy EC (SE-EC) scheme. Statistically significant differences were observed between the performance of the MUMA/DE-EC and the SE-EC methods (p<0.001). Visual assessment confirmed that the MUMA-EC generated fewer subtraction artifacts than did the DE-EC and SE-EC. Our MUMA-EC scheme yielded superior performance over the conventional SE-EC scheme in identifying and minimizing subtraction artifacts on non-cathartic ultra-low-dose DE-CTC images.
NASA Astrophysics Data System (ADS)
Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping
2017-01-01
In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.
Gyrokinetic Magnetohydrodynamics and the Associated Equilibrium
NASA Astrophysics Data System (ADS)
Lee, W. W.; Hudson, S. R.; Ma, C. H.
2017-10-01
A proposed scheme for the calculation of gyrokinetic MHD and its associated equilibrium is discussed in relation to a recent paper on the subject. The scheme is based on the time-dependent gyrokinetic vorticity equation and parallel Ohm's law, as well as the associated gyrokinetic Ampere's law. This set of equations, in terms of the electrostatic potential, ϕ, and the vector potential, A, supports both spatially varying perpendicular and parallel pressure gradients and their associated currents. The MHD equilibrium can be reached when ϕ → 0 and A becomes constant in time, which, in turn, gives ∇·(J∥ + J⊥) = 0 and the associated magnetic islands. Examples in simple cylindrical geometry will be given. The present work is partially supported by US DoE Grant DE-AC02-09CH11466.
Numeric and fluid dynamic representation of tornadic double vortex thunderstorms
NASA Technical Reports Server (NTRS)
Connell, J. R.; Marquart, E. J.; Frost, W.; Boaz, W.
1980-01-01
Current understanding of a double vortex thunderstorm involves a pair of contra-rotating vortices that exists in the dynamic updraft. The pair is believed to be a result of a blocking effect which occurs when a cylindrical thermal updraft of a thunderstorm protrudes into the upper level air and there is a large amount of vertical wind shear between the low level and upper level air layers. A numerical tornado prediction scheme based on the double vortex thunderstorm was developed. The Energy-Shear Index (ESI) is part of the scheme and is calculated from radiosonde measurements. The ESI incorporates parameters representative of thermal instability and blocking effect, and indicates appropriate environments for which the development of double vortex thunderstorms is likely.
Pure-type superconducting permanent-magnet undulator.
Tanaka, Takashi; Tsuru, Rieko; Kitamura, Hideo
2005-07-01
A novel synchrotron radiation source is proposed that utilizes bulk-type high-temperature superconductors (HTSCs) as permanent magnets (PMs) by in situ magnetization. Arrays of HTSC blocks magnetized by external magnetic fields are placed below and above the electron path instead of conventional PMs, generating a periodic magnetic field with an offset. Two methods are presented to magnetize the HTSCs and eliminate the field offset, enabling the HTSC arrays to work as a synchrotron radiation source. An analytical formula to calculate the peak field achieved in a device based on this scheme is derived in a two-dimensional form for comparison with synchrotron radiation sources using conventional PMs. Experiments were performed to demonstrate the principle of the proposed scheme and the results have been found to be very promising.
Electronic properties of Laves phase ZrFe{sub 2} using Compton spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatt, Samir, E-mail: sameerbhatto11@gmail.com; Kumar, Kishor; Ahuja, B. L.
The first-ever experimental Compton profile of Laves phase ZrFe{sub 2}, measured using an indigenous 20 Ci {sup 137}Cs Compton spectrometer, is presented. To analyze the experimental electron momentum density, we have deduced the theoretical Compton profiles using density functional theory (DFT) and a hybridization of DFT and the Hartree-Fock scheme within the linear combination of atomic orbitals (LCAO) method. The energy bands and density of states are also calculated using the LCAO prescription. The theoretical profile based on the local density approximation gives better agreement with the experimental profile than the other reported schemes. The present investigations validate the inclusion of the Perdew-Zunger correlation potential in predicting the electronic properties of ZrFe{sub 2}.
Hybrid and Constrained Resolution-of-Identity Techniques for Coulomb Integrals.
Duchemin, Ivan; Li, Jing; Blase, Xavier
2017-03-14
The introduction of auxiliary bases to approximate molecular orbital products has paved the way to significant savings in the evaluation of four-center two-electron Coulomb integrals. We present a generalized dual-space strategy that sheds new light on variants of the standard density- and Coulomb-fitting schemes, including the possibility of introducing minimization constraints. In particular, we improve the charge- and multipole-preserving strategies introduced by Baerends and Van Alsenoy, respectively, which we compare to a simple scheme where the Coulomb metric is used for the lowest angular momentum auxiliary orbitals only. We explore the merits of these approaches on the basis of extensive Hartree-Fock and MP2 calculations over a standard set of medium-size molecules.
Thermodynamics of surface defects at the aspirin/water interface
NASA Astrophysics Data System (ADS)
Schneider, Julian; Zheng, Chen; Reuter, Karsten
2014-09-01
We present a simulation scheme to calculate defect formation free energies at a molecular crystal/water interface based on force-field molecular dynamics simulations. To this end, we adopt and modify existing approaches to calculate binding free energies of biological ligand/receptor complexes to be applicable to common surface defects, such as step edges and kink sites. We obtain statistically accurate and reliable free energy values for the aspirin/water interface, which can be applied to estimate the distribution of defects using well-established thermodynamic relations. As a show case we calculate the free energy upon dissolving molecules from kink sites at the interface. This free energy can be related to the solubility concentration and we obtain solubility values in excellent agreement with experimental results.
Fast and efficient indexing approach for object recognition
NASA Astrophysics Data System (ADS)
Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi
1999-08-01
This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme relies on a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments, calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then presented by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Astrophysics Data System (ADS)
Lin, Kyaw Kyaw; Soe, Aung Kyaw; Thu, Theint Theint
2008-10-01
This research work investigates a Self-Tuning Proportional Derivative (PD) type Fuzzy Logic Controller (STPDFLC) for a two-link robot system. The proposed scheme adjusts the output Scaling Factor (SF) on-line by fuzzy rules according to the current trend of the robot. The rule base for tuning the output scaling factor is defined on the error (e) and the change in error (de). The scheme is also based on the fact that the controller always tries to manipulate the process input. The rules are in the familiar if-then format. All membership functions for the controller inputs (e and de) and the controller output (UN) are defined on the common interval [-1,1], whereas the membership functions for the gain updating factor (α) are defined on [0,1]. There are various methods to calculate the crisp output of the system; the Center of Gravity (COG) method is used in this application due to the better results it gives. The performance of the proposed STPDFLC is compared with that of the corresponding conventional PD-type Fuzzy Logic Controller (PDFLC). The proposed scheme shows a remarkably improved performance over its conventional counterpart, especially under parameter variations (payload). The results of the two-link analysis are simulated, and the simulation results are illustrated using MATLAB® programming.
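The COG defuzzification mentioned above takes the crisp output as the area-weighted center of the aggregated output membership function; a minimal discrete sketch (membership shapes are illustrative):

```python
import numpy as np

def cog_defuzzify(universe, membership):
    """Center-of-gravity defuzzification on a discretized universe."""
    return np.sum(universe * membership) / np.sum(membership)

u = np.linspace(-1.0, 1.0, 201)   # common interval [-1, 1] for the output UN
# Aggregated output membership: mostly "positive small" with some "zero" activation
mu = np.maximum(np.clip(1 - np.abs((u - 0.4) / 0.3), 0, 1),
                0.3 * np.clip(1 - np.abs(u / 0.2), 0, 1))
print(cog_defuzzify(u, mu))       # crisp normalized controller output
```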
NASA Astrophysics Data System (ADS)
Zhou, C.; Zhang, X.; Gong, S.
2015-12-01
A comprehensive aerosol-cloud-precipitation interaction (ACI) scheme has been developed under the CMA chemical weather modeling system GRAPES/CUACE. Calculated by a sectional aerosol activation scheme based on the size and mass information from CUACE and the thermal-dynamic and humidity states from the weather model GRAPES at each time step, the cloud condensation nuclei (CCN) are fed online and interactively into a two-moment cloud scheme (WDM6) and a convective parameterization to drive the cloud physics and precipitation formation processes. The modeling system has been applied to study the ACI for January 2013, when several persistent haze-fog events and eight precipitation events occurred. The results show that interactive aerosols with the WDM6 scheme in GRAPES/CUACE clearly increase the total cloud water, liquid water content, and cloud droplet number concentrations while decreasing the mean diameter of cloud droplets, with varying magnitudes of change in each case and region. These interactive microphysical properties of clouds improve the calculation of their collection growth rates in some regions and hence the precipitation rate and distributions in the model, showing 24% to 48% enhancements of the TS score for 6-h precipitation in almost all regions. The interactive aerosols with the WDM6 scheme also reduce the regional mean temperature bias by 3 °C during certain precipitation events, but the monthly mean bias is reduced by only about 0.3 °C.
NASA Technical Reports Server (NTRS)
Nowottnick, E.
2007-01-01
During August 2006, the NASA African Multidisciplinary Analyses Mission (NAMMA) field experiment was conducted to characterize the structure of African Easterly Waves and their evolution into tropical storms. Mineral dust aerosols affect tropical storm development, although their exact role remains to be understood. To better understand the role of dust in tropical cyclogenesis, we have implemented a dust source, transport, and optical model in the NASA Goddard Earth Observing System (GEOS) atmospheric general circulation model and data assimilation system. Our dust source scheme is more physically based than previous incarnations of the model, and we introduce improved dust optical and microphysical processes through the inclusion of a detailed microphysical scheme. Here we use A-Train observations from MODIS, OMI, and CALIPSO together with NAMMA DC-8 flight data to evaluate the simulated dust distributions and microphysical properties. Our goal is to synthesize the multi-spectral observations from the A-Train sensors to arrive at a consistent set of optical properties for the dust aerosols suitable for direct forcing calculations.
Rodrigues, Joel J. P. C.
2014-01-01
This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location-predictive and time-adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets in a timely manner toward the mobile sink by multihop relay. Considering that data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less data transmission delay and balances energy consumption among nodes. PMID:25302327
NASA Astrophysics Data System (ADS)
Bilionis, I.; Koutsourelakis, P. S.
2012-05-01
The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
The scheme and research of TV series multidimensional comprehensive evaluation on cross-platform
NASA Astrophysics Data System (ADS)
Chai, Jianping; Bai, Xuesong; Zhou, Hongjun; Yin, Fulian
2016-10-01
Given the shortcomings of the comprehensive evaluation system for traditional TV programs, such as a single data source, the neglect of new media, and the high time cost and difficulty of conducting surveys, a new evaluation of TV series is proposed in this paper, taking the perspective of cross-platform multidimensional evaluation after broadcasting. This scheme considers data directly collected from cable television and the Internet as research objects. It is based on the TOPSIS principle: after preprocessing and calculation, the data become primary indicators that reflect different profiles of the viewing of TV series. Then, after reasonable weighting and summation by six methods (PCA, AHP, etc.), the primary indicators form composite indices for different channels or websites. The scheme avoids the inefficiency and difficulty of surveys and manual scoring; at the same time, it not only reflects different dimensions of viewing but also combines TV media and new media, completing the multidimensional comprehensive evaluation of TV series across platforms.
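A minimal sketch of the TOPSIS aggregation step (indicator values and weights are invented; the paper derives the weights with six different methods such as PCA and AHP):

```python
import numpy as np

def topsis(X, w):
    """Rank alternatives (rows) on benefit criteria (columns) via TOPSIS."""
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
    V = R * w                                   # apply criterion weights
    ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal solutions
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)         # closeness coefficient in [0, 1]

# Rows: TV series; columns: e.g., ratings share, web plays, comment counts
X = np.array([[0.8, 1.2e6, 3.0e4],
              [0.5, 2.0e6, 1.0e4],
              [0.9, 0.6e6, 5.0e4]])
w = np.array([0.5, 0.3, 0.2])
print(topsis(X, w))                             # higher score = better composite index
```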
NASA Technical Reports Server (NTRS)
Dobrzynski, W.
1984-01-01
Amiet's correction scheme for sound wave transmission through shear layers is extended to incorporate the additional effects of different temperatures in the flow field and in the surrounding medium at rest. Within a parameter regime typical of acoustic measurements in wind tunnels, amplitude and angle corrections are calculated and plotted systematically to provide a data base for the test engineer.
Classifying stages of cirrus life-cycle evolution
NASA Astrophysics Data System (ADS)
Urbanek, Benedikt; Groß, Silke; Schäfler, Andreas; Wirth, Martin
2018-04-01
Airborne lidar backscatter data is used to determine in- and out-of-cloud regions. Lidar measurements of water vapor together with model temperature fields are used to calculate relative humidity over ice (RHi). Based on temperature and RHi we identify different stages of cirrus evolution: homogeneous and heterogeneous freezing, depositional growth, ice sublimation and sedimentation. We will present our classification scheme and first applications on mid-latitude cirrus clouds.
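A minimal sketch of the RHi calculation, assuming the Murphy and Koop (2005) saturation vapor pressure over ice; the paper obtains the vapor field from lidar and the temperature from model fields:

```python
import numpy as np

def p_sat_ice(T):
    """Saturation vapor pressure over ice (Pa), Murphy & Koop (2005); T in K."""
    return np.exp(9.550426 - 5723.265 / T + 3.53068 * np.log(T) - 0.00728332 * T)

def rhi(e_vapor, T):
    """Relative humidity over ice (%) from the vapor partial pressure (Pa)."""
    return 100.0 * e_vapor / p_sat_ice(T)

# Example: upper-tropospheric conditions typical of cirrus formation
T = 220.0   # K
e = 3.0     # Pa water vapor partial pressure (illustrative)
print(f"RHi = {rhi(e, T):.0f}%")   # > 100% indicates ice supersaturation
```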
NASA Astrophysics Data System (ADS)
Xu, Jianhui; Shu, Hong
2014-09-01
This study assesses the analysis performance of assimilating the Moderate Resolution Imaging Spectroradiometer (MODIS)-based albedo and snow cover fraction (SCF) separately or jointly into the physically based Common Land Model (CoLM). A direct insertion method (DI) is proposed to assimilate the black-sky and white-sky albedos into the CoLM. The MODIS-based albedo is calculated with the MODIS bidirectional reflectance distribution function (BRDF) model parameters product (MCD43B1) and the solar zenith angle as estimated in the CoLM for each time step. Meanwhile, the MODIS SCF (MOD10A1) is assimilated into the CoLM using the deterministic ensemble Kalman filter (DEnKF) method. A new DEnKF-albedo assimilation scheme integrating the DI and DEnKF assimilation schemes is proposed. Our assimilation results are validated against in situ snow depth observations from November 2008 to March 2009 at five sites in the Altay region of China. The experimental results show that all three data assimilation schemes can improve snow depth simulations. Overall, the DEnKF-albedo assimilation shows the best analysis performance, as it significantly reduces the bias and root-mean-square error (RMSE) during the snow accumulation and ablation periods at all sites except for the Fuyun site. The SCF assimilation via DEnKF produces better results than the albedo assimilation via DI, implying that the albedo assimilation, which indirectly updates the snow depth state variable, is less efficient than the direct SCF assimilation. For the Fuyun site, the DEnKF-albedo scheme tends to overestimate the snow depth accumulation, with the maximum bias and RMSE values, because of the large positive innovation (observation minus forecast).
NASA Astrophysics Data System (ADS)
Séférian, Roland; Baek, Sunghye; Boucher, Olivier; Dufresne, Jean-Louis; Decharme, Bertrand; Saint-Martin, David; Roehrig, Romain
2018-01-01
Ocean surface represents roughly 70 % of the Earth's surface, playing a large role in the partitioning of the energy flow within the climate system. The ocean surface albedo (OSA) is an important parameter in this partitioning because it governs the amount of energy penetrating into the ocean or reflected towards space. The old OSA schemes in the ARPEGE-Climat and LMDZ models only resolve the latitudinal dependence in an ad hoc way without an accurate representation of the solar zenith angle dependence. Here, we propose a new interactive OSA scheme suited for Earth system models, which enables coupling between Earth system model components like surface ocean waves and marine biogeochemistry. This scheme resolves spectrally the various contributions of the surface for direct and diffuse solar radiation. The implementation of this scheme in two Earth system models leads to substantial improvements in simulated OSA. At the local scale, models using the interactive OSA scheme better replicate the day-to-day distribution of OSA derived from ground-based observations in contrast to old schemes. At global scale, the improved representation of OSA for diffuse radiation reduces model biases by up to 80 % over the tropical oceans, reducing annual-mean model-data error in surface upwelling shortwave radiation by up to 7 W m-2 over this domain. The spatial correlation coefficient between modeled and observed OSA at monthly resolution has been increased from 0.1 to 0.8. Despite its complexity, this interactive OSA scheme is computationally efficient for enabling precise OSA calculation without penalizing the elapsed model time.
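For contrast with latitude-only schemes, a widely used broadband fit for the direct-beam OSA as a function of solar zenith angle is sketched below (attributed to Taylor et al. 1996; this is not the spectral, wave- and chlorophyll-dependent scheme of the paper):

```python
import numpy as np

def osa_direct_taylor(cos_sza):
    """Direct-beam ocean surface albedo vs. cosine of the solar zenith angle.

    Broadband fit commonly attributed to Taylor et al. (1996); shown only to
    illustrate the zenith-angle dependence that latitude-only schemes miss.
    """
    mu = np.asarray(cos_sza, dtype=float)
    return 0.037 / (1.1 * mu**1.4 + 0.15)

for mu in (1.0, 0.5, 0.1):
    print(f"cos(SZA) = {mu:.1f} -> albedo = {osa_direct_taylor(mu):.3f}")
```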
An active monitoring method for flood events
NASA Astrophysics Data System (ADS)
Chen, Zeqiang; Chen, Nengcheng; Du, Wenying; Gong, Jianya
2018-07-01
Timely and active detection and monitoring of a flood event are critical for quick response, effective decision-making and disaster reduction. To this end, this paper proposes an active service framework for flood monitoring based on Sensor Web services, together with an active model for the concrete implementation of the framework. The framework consists of two core components: active warning and active planning. The active warning component is based on a publish-subscribe mechanism implemented by the Sensor Event Service. The active planning component employs the Sensor Planning Service to control the execution of the schemes and models and to plan the model input data. The active model, called SMDSA, defines the quantitative calculation method for five elements - scheme, model, data, sensor, and auxiliary information - as well as their associations. Experimental monitoring of the Liangzi Lake flood in the summer of 2010 was conducted to test the proposed framework and model. The results show that 1) the proposed active service framework is efficient for timely and automated flood monitoring; 2) the active model, SMDSA, provides a quantitative calculation method that moves flood monitoring from manual intervention to automatic computation; and 3) as much preliminary work as possible should be done to take full advantage of the active service framework and the active model.
NASA Astrophysics Data System (ADS)
Saniz, R.; Xu, Y.; Matsubara, M.; Amini, M. N.; Dixit, H.; Lamoen, D.; Partoens, B.
2013-01-01
The calculation of defect levels in semiconductors within a density functional theory approach suffers greatly from the band gap problem. We propose a band gap correction scheme that is based on the separation of energy differences in electron addition and relaxation energies. We show that it can predict defect levels with a reasonable accuracy, particularly in the case of defects with conduction band character, and yet is simple and computationally economical. We apply this method to ZnO doped with group III elements (Al, Ga, In). As expected from experiment, the results indicate that Zn substitutional doping is preferred over interstitial doping in Al, Ga, and In-doped ZnO, under both zinc-rich and oxygen-rich conditions. Further, all three dopants act as shallow donors, with the +1 charge state having the most advantageous formation energy. Also, doping effects on the electronic structure of ZnO are sufficiently mild so as to affect little the fundamental band gap and lowest conduction bands dispersion, which secures their n-type transparent conducting behavior. A comparison with the extrapolation method based on LDA+U calculations and with the Heyd-Scuseria-Ernzerhof hybrid functional (HSE) shows the reliability of the proposed scheme in predicting the thermodynamic transition levels in shallow donor systems.
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr
2015-12-28
We introduce an efficient method to construct optimal and system-adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, those containing a large number of electrons and atoms. We present applications to the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.
NASA Astrophysics Data System (ADS)
Brandt, C.; Thakur, S. C.; Tynan, G. R.
2016-04-01
Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of the introduced method is its capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics, and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understanding the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving multiple simultaneously present instabilities.
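The delay-to-velocity step underlying such velocimetry can be illustrated with a simple cross-correlation between two probe signals (the paper's scheme actually uses cross-phase analysis of three non-collinear points; signal parameters below are invented):

```python
import numpy as np

def delay_seconds(sig_a, sig_b, fs):
    """Time delay of sig_b relative to sig_a via the cross-correlation peak."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)
    return lag / fs

fs, f_wave = 1e6, 5e3              # sampling rate (Hz), wave frequency (Hz)
t = np.arange(4096) / fs
true_delay = 20e-6                 # 20 us propagation delay between positions
a = np.sin(2 * np.pi * f_wave * t)
b = np.sin(2 * np.pi * f_wave * (t - true_delay))

d = delay_seconds(a, b, fs)
separation = 5e-3                  # probe separation in meters (illustrative)
print(f"delay = {d * 1e6:.1f} us, velocity = {separation / d:.0f} m/s")
```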
WinClastour—a Visual Basic program for tourmaline formula calculation and classification
NASA Astrophysics Data System (ADS)
Yavuz, Fuat; Yavuz, Vural; Sasmaz, Ahmet
2006-10-01
WinClastour is a Microsoft® Visual Basic 6.0 program that enables the user to enter and calculate structural formulae of tourmaline analyses obtained by either electron-microprobe or wet-chemical analyses. It is developed to predict cation site allocations at the different structural positions, as well as to estimate mole percent of the end-members of the calcic-, alkali-, and X-site-vacant group tourmalines. Using different normalization schemes, such as 24.5 oxygens, 31 anions, 15 cations (T + Z + Y), and 6 silicons, the present program classifies tourmaline data based on the classification scheme proposed by Hawthorne and Henry [1999. Classification of the minerals of the tourmaline group. European Journal of Mineralogy 11, 201-215]. The present program also enables the user to evaluate Al-Mg disorder between the Y and Z sites. WinClastour stores all the calculated results in a comma-delimited ASCII file format. Hence, the output of the program can be displayed and processed by any other software for general data manipulation and graphing purposes. The compiled program code together with a test data file and related graphic files, which are designed to produce a high-quality printout from the Grapher program of Golden Software, is approximately 3 Mb as a self-extracting setup file.
HARM: A Numerical Scheme for General Relativistic Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Gammie, Charles, F.; McKinney, Jonathan C.; Tóth, Gábor
2012-09-01
HARM uses a conservative, shock-capturing scheme for evolving the equations of general relativistic magnetohydrodynamics. The fluxes are calculated using the Harten, Lax, & van Leer scheme. A variant of constrained transport, proposed earlier by Tóth, is used to maintain a divergence-free magnetic field. Only the covariant form of the metric in a coordinate basis is required to specify the geometry. On smooth flows HARM converges at second order.
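For reference, the Harten, Lax, & van Leer flux has the standard two-wave form, with $S_L$ and $S_R$ estimates of the fastest left- and right-going signal speeds and $U$, $F$ the conserved state and flux (a textbook statement, not HARM-specific):

```latex
F^{\mathrm{HLL}} =
\begin{cases}
F_L, & 0 \le S_L,\\[2pt]
\dfrac{S_R F_L - S_L F_R + S_L S_R \,(U_R - U_L)}{S_R - S_L}, & S_L < 0 < S_R,\\[2pt]
F_R, & 0 \ge S_R.
\end{cases}
```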
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Jinmei; Arritt, R.W.
The importance of land-atmosphere interactions and the biosphere in climate change studies has long been recognized, and several land-atmosphere interaction schemes have been developed. Among these, the Simple Biosphere scheme (SiB) of Sellers et al. and the Biosphere Atmosphere Transfer Scheme (BATS) of Dickinson et al. are two of the most widely known. The effects of GCM subgrid-scale inhomogeneities of surface properties in general circulation models have also received increasing attention in recent years. However, due to the complexity of land surface processes and the difficulty of prescribing the large number of parameters that determine atmospheric and soil interactions with vegetation, many previous studies and results seem to be contradictory. A GCM grid element typically represents an area of 10{sup 4}-10{sup 6} km{sup 2}. Within such an area, there exist variations of soil type, soil wetness, vegetation type, vegetation density and topography, as well as urban areas and water bodies. In this paper, we incorporate both the BATS and SiB2 land surface process schemes into a nonhydrostatic, compressible version of the AMBLE model (Atmospheric Model -- Boundary-Layer Emphasis), and compare the surface heat fluxes and mesoscale circulations calculated using the two schemes. 8 refs., 5 figs.
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure waves analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism (gradual passage opening, wall friction and leakage) for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which had been validated against experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to wave rotor design.
NASA Astrophysics Data System (ADS)
Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques
2013-03-01
In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, and especially on the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation. Theoretical results are provided. Then, we show, regardless of the interpolation method, the need to use a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
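A minimal sketch of the PPH correction, written as it usually appears in the subdivision literature (our reading, not code from the paper): the linear four-point midpoint rule is expressed through second differences, and their arithmetic mean is replaced by a harmonic mean, which vanishes near sign changes and so suppresses the Gibbs overshoot at jumps.

```python
import numpy as np

def pph_midpoint(f0, f1, f2, f3):
    """Midpoint value between f1 and f2 from a 4-point stencil.

    The linear 4-point rule equals (f1+f2)/2 - (d1+d2)/16, with d the
    second differences; PPH swaps the arithmetic mean (d1+d2)/2 for a
    harmonic mean, which is set to zero when the signs disagree.
    """
    d1 = f0 - 2.0 * f1 + f2
    d2 = f1 - 2.0 * f2 + f3
    h = 2.0 * d1 * d2 / (d1 + d2) if d1 * d2 > 0.0 else 0.0
    return 0.5 * (f1 + f2) - h / 8.0

# Near a step, the linear rule undershoots while PPH stays monotone:
f = np.array([0.0, 0.0, 0.0, 1.0])
linear = (-f[0] + 9.0 * f[1] + 9.0 * f[2] - f[3]) / 16.0
print(linear, pph_midpoint(*f))   # -0.0625 (undershoot) vs 0.0
```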
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods on amino acids and with G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
Future changes in precipitation of the baiu season under RCP scenarios
NASA Astrophysics Data System (ADS)
Okada, Y.; Takemi, T.; Ishikawa, H.
2014-12-01
Recently, the relationship between global warming and rainfall during the rainy season, which is called the baiu in Japan, has been attracting attention in association with heavy rainfall in this period. In the Innovative Program of Climate Change Projection for the 21st Century, many studies showed a delay in the northward march of the baiu front and a significant increase of daily precipitation amounts around western Japan during the late baiu season (e.g., Kusunoki et al. 2011, Kanada et al. 2012). The future climate experiments in these studies were performed under the IPCC SRES A1B scenario for global warming conditions. In this study, we discuss the future changes in precipitation using a 60-km-mesh model (MRI-AGCM3.2H) under the Representative Concentration Pathways (RCP) scenarios. This dataset is provided by the Meteorological Research Institute (MRI) and was calculated mainly with the Yoshimura (YS) cumulus scheme. The seasonal progression of future precipitation generally indicates a northward shift around western Japan in the RCP2.6 and RCP4.5 scenarios. In the RCP6.0 scenario, precipitation intensity is weak compared to the other scenarios. The RCP8.5 scenario is calculated with three different cumulus schemes (YS, Arakawa-Schubert (AS), and Kain-Fritsch (KF)). In the YS scheme, the RCP8.5 run shows that the rainband associated with the baiu front is not clear, and a remarkable precipitation peak appears in late June. In the AS scheme, the precipitation area stagnates around 30 N until August, while in the KF scheme it shows a gradual northward migration. This work was conducted under the Program for Risk Information on Climate Change supported by the Ministry of Education, Culture, Sports, Science, and Technology-Japan (MEXT).
Triaxiality and Exotic Rotations at High Spins in 134Ce
Petrache, C. M.; Guo, S.; Ayangeakaa, A. D.; ...
2016-06-06
High-spin states in Ce-134 have been investigated using the Cd-116(Ne-22,4n) reaction and the Gammasphere array. The level scheme has been extended to an excitation energy of ~30 MeV and spin ~54ℏ. Two new dipole bands and four new sequences of quadrupole transitions were identified. Several new transitions have been added to a number of known bands. One of the strongly populated dipole bands was revised and placed differently in the level scheme, resolving a discrepancy between experiment and model calculations reported previously. Configurations are assigned to the observed bands based on cranked Nilsson-Strutinsky calculations. A coherent understanding of the various excitations, both at low and high spins, is thus obtained, supporting an interpretation in terms of coexistence of stable triaxial, highly deformed, and superdeformed shapes up to very high spins. Rotations around different axes of the triaxial nucleus, and sudden changes of the rotation axis in specific configurations, are identified, further elucidating the nature of high-spin collective excitations in the A = 130 mass region.
West Antarctic Balance Fluxes: Impact of Smoothing, Algorithm and Topography.
NASA Astrophysics Data System (ADS)
Le Brocq, A.; Payne, A. J.; Siegert, M. J.; Bamber, J. L.
2004-12-01
Grid-based calculations of balance flux and velocity have been widely used to understand the large-scale dynamics of ice masses and as indicators of their state of balance. This research investigates a number of issues relating to their calculation for the West Antarctic Ice Sheet (see below for further details): 1) different topography smoothing techniques; 2) different grid based flow-apportioning algorithms; 3) the source of the flow direction, whether from smoothed topography, or smoothed gravitational driving stress; 4) different flux routing techniques and 5) the impact of different topographic datasets. The different algorithms described below lead to significant differences in both ice stream margins and values of fluxes within them. This encourages caution in the use of grid-based balance flux/velocity distributions and values, especially when considering the state of balance of individual ice streams. 1) Most previous calculations have used the same numerical scheme (Budd and Warner, 1996) applied to a smoothed topography in order to incorporate the longitudinal stresses that smooth ice flow. There are two options to consider when smoothing the topography, the size of the averaging filter and the shape of the averaging function. However, this is not a physically-based approach to incorporating smoothed ice flow and also introduces significant flow artefacts when using a variable weighting function. 2) Different algorithms to apportion flow are investigated; using 4 or 8 neighbours, and apportioning flow to all down-slope cells or only 2 (based on derived flow direction). 3) A theoretically more acceptable approach of incorporating smoothed ice flow is to use the smoothed gravitational driving stress in x and y components to derive a flow direction. The flux can then be apportioned using the flow direction approach used above. 4) The original scheme (Budd and Warner, 1996) uses an elevation sort technique to calculate the balance flux contribution from all cells to each individual cell. However, elevation sort is only successful when ice cannot flow uphill. Other possible techniques include using a recursive call for each neighbour or using a sparse matrix solution. 5) Two digital elevation models are used as input data, which have significant differences in coastal and mountainous areas and therefore lead to different calculations. Of particular interest is the difference in the Rutford Ice Stream/Carlson Inlet and Kamb Ice Stream (Ice Stream C) fluxes.
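To make the grid-based idea concrete, here is a hedged Python sketch of one elevation-sorted routing variant of the kind compared above: local mass balance is accumulated cell by cell and passed to downslope 4-neighbours in proportion to slope. It illustrates the Budd-and-Warner-style family of algorithms, not any of the paper's exact implementations.

```python
import numpy as np

def balance_flux(z, mass_balance, dx=1.0):
    """Grid balance flux: process cells from highest to lowest elevation,
    passing each cell's accumulated flux to its downslope 4-neighbours in
    proportion to slope. Valid only if ice cannot flow uphill, which is
    the limitation of elevation sorting noted in the abstract."""
    ny, nx = z.shape
    flux = mass_balance.astype(float).copy()        # local accumulation term
    order = np.argsort(z, axis=None)[::-1]          # elevation sort, high first
    for k in order:
        j, i = divmod(k, nx)
        nbrs = [(j + dj, i + di) for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= j + dj < ny and 0 <= i + di < nx]
        slopes = {n: (z[j, i] - z[n]) / dx for n in nbrs if z[n] < z[j, i]}
        total = sum(slopes.values())
        if total > 0.0:
            for n, s in slopes.items():             # slope-weighted apportioning
                flux[n] += flux[j, i] * s / total
    return flux

z = np.array([[3.0, 2.0, 1.0], [3.0, 2.5, 1.0], [3.0, 2.0, 0.5]])
print(balance_flux(z, np.ones_like(z)))             # flux concentrates downslope
```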
NASA Astrophysics Data System (ADS)
Bower, Keith; Choularton, Tom; Latham, John; Sahraei, Jalil; Salter, Stephen
2006-11-01
A simplified version of the model of marine stratocumulus clouds developed by Bower, Jones and Choularton [Bower, K.N., Jones, A., and Choularton, T.W., 1999. A modeling study of aerosol processing by stratocumulus clouds and its impact on GCM parameterisations of cloud and aerosol. Atmospheric Research, Vol. 50, Nos. 3-4, The Great Dun Fell Experiment, 1995-special issue, 317-344.] was used to examine the sensitivity of the albedo-enhancement global warming mitigation scheme proposed by Latham [Latham, J., 1990. Control of global warming? Nature 347, 339-340; Latham, J., 2002. Amelioration of global warming by controlled enhancement of the albedo and longevity of low-level maritime clouds. Atmos. Sci. Letters (doi:10.1006/Asle.2002.0048).] to the cloud and environmental aerosol characteristics, as well as those of the seawater aerosol of salt mass m_s and number concentration ΔN which, under the scheme, are deliberately introduced into the clouds. Values of albedo change ΔA and droplet number concentration N_d were calculated for a wide range of values of m_s, ΔN, updraught speed W, cloud thickness ΔZ and cloud-base temperature T_B, for three measured aerosol spectra corresponding to ambient air of negligible, moderate and high levels of pollution. Our choices of parameter value ranges were determined by the extent of their applicability to the mitigation scheme, whose current formulation is still somewhat preliminary, thus rendering unwarranted in this study the utilisation of refinements incorporated into other stratocumulus models. In agreement with earlier studies: (1) ΔA was found to be very sensitive to ΔN and (within certain constraints) insensitive to changes in m_s, W, ΔZ and T_B; (2) ΔA was greatest for clouds formed in pure air and least for highly polluted air. In many situations considered to be within the ambit of the mitigation scheme, the calculated ΔA values exceeded those estimated by earlier workers as being necessary to produce a cooling sufficient to compensate, globally, for the warming resulting from a doubling of the atmospheric carbon dioxide concentration. Our calculations provide quantitative support for the physical viability of the mitigation scheme and offer new insights into its technological requirements.
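The leading-order sensitivity of ΔA to ΔN can be sketched with the standard Twomey susceptibility argument, a textbook back-of-envelope estimate and not the Bower et al. model: at fixed liquid water content τ scales as N^(1/3), and with the two-stream albedo A = τ/(τ + 7.7) one gets dA/dlnN = A(1 - A)/3.

```python
import numpy as np

def albedo_change(A0, N0, N1):
    """Back-of-envelope Twomey estimate of the albedo change when the
    droplet number concentration rises from N0 to N1 at fixed liquid
    water content: dA/dlnN = A(1-A)/3 (standard susceptibility formula,
    used here only to illustrate the sensitivity discussed above)."""
    return A0 * (1.0 - A0) / 3.0 * np.log(N1 / N0)

# Seeding a clean marine cloud (A0 = 0.5) from 60 to 200 drops per cm^3:
print(albedo_change(0.5, 60.0, 200.0))   # about 0.10
```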
Li, Zhengqiang; Li, Kaitao; Li, Li; Xu, Hua; Xie, Yisong; Ma, Yan; Li, Donghui; Goloub, Philippe; Yuan, Yinlin; Zheng, Xiaobing
2018-02-10
Polarization observation of sky radiation is a frontier approach to improving the remote sensing of atmospheric components, e.g., aerosol and clouds. The polarization calibration of the ground-based Sun-sky radiometer is the basis for obtaining accurate degree of linear polarization (DOLP) measurements. In this paper, a DOLP calibration method based on a laboratory polarized light source (POLBOX) is introduced in detail. Combined with the CE318-DP Sun-sky polarized radiometer, a calibration scheme for DOLP measurement is established for the spectral range of 440-1640 nm. Based on the calibration results of the Sun-sky radiometer observation network, the polarization calibration coefficient and the DOLP calibration residual are analyzed statistically. The results show that the DOLP residual of the calibration scheme is about 0.0012, and thus it can be estimated that the final DOLP calibration accuracy of this method is about 0.005. Finally, it is verified that the accuracy of the calibration results is in accordance with the expected results by comparing the measured DOLP with vector radiative transfer calculations.
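The retrieval underlying a DOLP measurement can be sketched with a generic three-polarizer Stokes inversion; the 0/60/120-degree analyzer orientations below are our assumption for illustration, not the documented CE318-DP configuration.

```python
import numpy as np

def stokes_from_three(I_meas, angles_deg=(0.0, 60.0, 120.0)):
    """Recover (I, Q, U) from intensities behind three linear polarizers.
    Each measurement is (I + Q*cos(2t) + U*sin(2t))/2; with three distinct
    angles the 3x3 system is solved exactly."""
    t = np.radians(angles_deg)
    M = 0.5 * np.column_stack([np.ones(3), np.cos(2 * t), np.sin(2 * t)])
    return np.linalg.solve(M, np.asarray(I_meas))

def dolp(I, Q, U):
    return np.hypot(Q, U) / I   # degree of linear polarization

# Synthetic sky pixel with DOLP = 0.3 and a 20-degree polarization angle:
I0, P, chi = 1.0, 0.3, np.radians(20.0)
Q, U = I0 * P * np.cos(2 * chi), I0 * P * np.sin(2 * chi)
meas = [(I0 + Q * np.cos(2 * t) + U * np.sin(2 * t)) / 2
        for t in np.radians([0.0, 60.0, 120.0])]
print(dolp(*stokes_from_three(meas)))   # recovers 0.3
```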
NASA Astrophysics Data System (ADS)
Wang, Wenkai; Li, Husheng; Sun, Yan(Lindsay); Han, Zhu
2009-12-01
Cognitive radio is a revolutionary paradigm to mitigate the spectrum scarcity problem in wireless networks. In cognitive radio networks, collaborative spectrum sensing is considered an effective method to improve the performance of primary user detection. In current collaborative spectrum sensing schemes, secondary users are usually assumed to report their sensing information honestly. However, compromised nodes can send false sensing information to mislead the system. In this paper, we study the detection of untrustworthy secondary users in cognitive radio networks. We first analyze the case where there is only one compromised node in the collaborative spectrum sensing scheme, and then investigate the scenario with multiple compromised nodes. Defense schemes are proposed to detect malicious nodes according to their reporting histories. We calculate the suspicious level of all nodes based on their reports, and reports from nodes with high suspicious levels are excluded from decision-making. Compared with existing defense methods, the proposed scheme can effectively differentiate malicious nodes from honest nodes. As a result, it can significantly improve the performance of collaborative sensing. For example, when there are 10 secondary users, with the primary user detection rate equal to 0.99, one malicious user can make the false alarm rate increase to 72%; the proposed scheme can reduce it to 5%. Two malicious users can make the false alarm rate increase to 85%, and the proposed scheme reduces it to 8%.
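A minimal reputation-style sketch of the idea follows: track reporting histories, score a suspicious level per node, and exclude suspicious reports before fusing. The majority-vote fusion and the fixed threshold are illustrative simplifications, not the paper's estimator.

```python
import numpy as np

def fuse_with_suspicion(histories, threshold=0.35):
    """histories: array (n_nodes, n_rounds) of binary sensing reports
    ('primary user present' = 1). A provisional majority vote defines a
    fused decision per round; a node's suspicious level is its historical
    rate of disagreement with that decision, and highly suspicious nodes
    are excluded before the final vote."""
    reports = np.asarray(histories)
    fused = (reports.mean(axis=0) > 0.5).astype(int)          # provisional vote
    suspicion = (reports != fused).mean(axis=1)               # per-node level
    trusted = suspicion < threshold
    final = (reports[trusted].mean(axis=0) > 0.5).astype(int) # re-vote, trusted only
    return final, suspicion

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, 50)
honest = [np.where(rng.random(50) < 0.9, truth, 1 - truth) for _ in range(9)]
liar = 1 - truth                                              # always lies
final, s = fuse_with_suspicion(np.vstack(honest + [liar]))
print(np.round(s, 2))   # the 10th node's suspicious level stands out
```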
Predicting pKa values from EEM atomic charges
2013-01-01
The acid dissociation constant pKa is a very important molecular property, and there is a strong interest in the development of reliable and fast methods for pKa prediction. We have evaluated the pKa prediction capabilities of QSPR models based on empirical atomic charges calculated by the Electronegativity Equalization Method (EEM). Specifically, we collected 18 EEM parameter sets created for 8 different quantum mechanical (QM) charge calculation schemes. Afterwards, we prepared a training set of 74 substituted phenols. Additionally, for each molecule we generated its dissociated form by removing the phenolic hydrogen. For all the molecules in the training set, we then calculated EEM charges using the 18 parameter sets, and the QM charges using the 8 above-mentioned charge calculation schemes. For each type of QM and EEM charges, we created one QSPR model employing charges from the non-dissociated molecules (three-descriptor QSPR models), and one QSPR model based on charges from both dissociated and non-dissociated molecules (QSPR models with five descriptors). Afterwards, we calculated the quality criteria and evaluated all the QSPR models obtained. We found that QSPR models employing the EEM charges proved to be a good approach for the prediction of pKa (63% of these models had R2 > 0.9, while the best had R2 = 0.924). As expected, QM QSPR models provided more accurate pKa predictions than the EEM QSPR models, but the differences were not significant. Furthermore, a big advantage of the EEM QSPR models is that their descriptors (i.e., EEM atomic charges) can be calculated markedly faster than the QM charge descriptors. Moreover, we found that the EEM QSPR models are not as strongly influenced by the selection of the charge calculation approach as the QM QSPR models. The robustness of the EEM QSPR models was subsequently confirmed by cross-validation. The applicability of EEM QSPR models to other chemical classes was illustrated by a case study focused on carboxylic acids. In summary, EEM QSPR models constitute a fast and accurate pKa prediction approach that can be used in virtual screening. PMID:23574978
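The three-descriptor QSPR models described here are ordinary linear regressions on atomic charges; the sketch below fits one such model by least squares. The charge descriptors and pKa targets are synthetic placeholders, not the paper's phenol data.

```python
import numpy as np

# Three-descriptor QSPR of the kind evaluated above:
# pKa ~ c0 + c1*q(O) + c2*q(H) + c3*q(C1), with q the (EEM-style) charges
# of the phenolic oxygen, its hydrogen, and the attached carbon.
rng = np.random.default_rng(0)
n = 74                                                      # size of training set
q = rng.normal([-0.45, 0.30, 0.15], 0.05, size=(n, 3))     # placeholder charges
true_c = np.array([7.0, -12.0, 18.0, 5.0])
pka = true_c[0] + q @ true_c[1:] + rng.normal(0, 0.1, n)   # placeholder targets

X = np.column_stack([np.ones(n), q])
coef, *_ = np.linalg.lstsq(X, pka, rcond=None)             # fit the QSPR model
pred = X @ coef
r2 = 1.0 - np.sum((pka - pred) ** 2) / np.sum((pka - pka.mean()) ** 2)
print(coef.round(2), f"R^2 = {r2:.3f}")
```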
Prokudin, Alexei; Sun, Peng; Yuan, Feng
2015-10-01
Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.
NASA Astrophysics Data System (ADS)
Prokudin, Alexei; Sun, Peng; Yuan, Feng
2015-11-01
Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. We further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are consistent with each other and with that of the standard CSS formalism.
NASA Astrophysics Data System (ADS)
Moon, B.; Moon, C.-B.; Odahara, A.; Lozeva, R.; Söderström, P.-A.; Browne, F.; Yuan, C.; Yagi, A.; Hong, B.; Jung, H. S.; Lee, P.; Lee, C. S.; Nishimura, S.; Doornenbal, P.; Lorusso, G.; Sumikama, T.; Watanabe, H.; Kojouharov, I.; Isobe, T.; Baba, H.; Sakurai, H.; Daido, R.; Fang, Y.; Nishibata, H.; Patel, Z.; Rice, S.; Sinclair, L.; Wu, J.; Xu, Z. Y.; Yokoyama, R.; Kubo, T.; Inabe, N.; Suzuki, H.; Fukuda, N.; Kameda, D.; Takeda, H.; Ahn, D. S.; Shimizu, Y.; Murai, D.; Bello Garrote, F. L.; Daugas, J. M.; Didierjean, F.; Ideguchi, E.; Ishigaki, T.; Morimoto, S.; Niikura, M.; Nishizuka, I.; Komatsubara, T.; Kwon, Y. K.; Tshoo, K.
2017-07-01
We report for the first time the β-decay scheme of 140Te (Z = 52) to 140I (Z = 53), with a specific focus on the Gamow-Teller strength along the N = 87 isotones. These results were obtained in an experiment performed at the Radioactive Ion Beam Factory (RIBF), RIKEN, where the parent nuclide, 140Te, was produced through the in-flight fission of a 238U beam at 345 MeV per nucleon impinging on a 9Be target. Based on data from the high-efficiency γ-ray spectrometer, EUROBALL-RIKEN Cluster Array (EURICA), we constructed a decay scheme of 140I. The half-life of 140Te has been determined to be 350(5) ms. A level at 926 keV has been assigned as a (1+) state based on the log ft value of 4.89(6). This (1+) state, commonly observed in odd-odd nuclei, can be interpreted in terms of the πh11/2νh9/2 configuration formed by the Gamow-Teller transition between a neutron in the h9/2 orbital and a proton in the h11/2 orbital. We observe a sharp contrast in this type of β-decay branching to the lower-lying 1+ states between 140I and 136I, with a large reduction as the number of neutrons increases. This is in contrast to the prediction by large-scale shell model calculations. To investigate this suppression, results of Nilsson model calculations will be discussed. Along the N = 87 isotones, we discuss a characteristic feature of the Gamow-Teller distributions at 1+ states with respect to the isospin difference.
An adaptive interpolation scheme for molecular potential energy surfaces
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
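A hedged sketch of greedy adaptive refinement with a polyharmonic (cubic) spline, using SciPy's RBFInterpolator; the cheap error indicator here (disagreement between two spline kernels) merely stands in for the paper's partition-of-unity local error estimate, so only one expensive function evaluation is spent per refinement step.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def adaptive_fit(f, bounds, n0=12, n_add=40, n_cand=500, seed=0):
    """Greedy adaptive sampling of an expensive function f with a cubic
    polyharmonic RBF: repeatedly add the candidate point where two spline
    kernels disagree most (a cheap proxy for a local error estimate)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pts = rng.uniform(lo, hi, size=(n0, lo.size))
    vals = np.array([f(p) for p in pts])
    for _ in range(n_add):
        itp = RBFInterpolator(pts, vals, kernel="cubic")
        alt = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
        cand = rng.uniform(lo, hi, size=(n_cand, lo.size))
        worst = cand[np.argmax(np.abs(itp(cand) - alt(cand)))]
        pts = np.vstack([pts, worst])
        vals = np.append(vals, f(worst))      # single new sample per step
    return RBFInterpolator(pts, vals, kernel="cubic")

# Model 2-D "potential energy surface" with a narrow well:
f = lambda p: -np.exp(-8.0 * ((p[0] - 0.3) ** 2 + (p[1] - 0.6) ** 2))
itp = adaptive_fit(f, bounds=[(0.0, 1.0), (0.0, 1.0)])
print(abs(itp(np.array([[0.3, 0.6]]))[0] - f((0.3, 0.6))))   # small residual
```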
A strong shock tube problem calculated by different numerical schemes
NASA Astrophysics Data System (ADS)
Lee, Wen Ho; Clancy, Sean P.
1996-05-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) the MESA code, (2) the UNICORN code, (3) Schulz hydro, and (4) a modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. The MESA and UNICORN codes are both of second order and use different monotonic advection methods to avoid the Gibbs phenomenon. Code (3) uses typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10⁹ and a density ratio of 10³ in an ideal gas. For the non-mass-matching case, Schulz hydro is better than the TVD scheme; in the case of mass-matching, there is no difference between them. The MESA and UNICORN results are nearly the same. However, the computed positions, such as that of the contact discontinuity (i.e. the material interface), are not as accurate as in the Lagrangian methods.
NASA Astrophysics Data System (ADS)
Haines, P. E.; Esler, J. G.; Carver, G. D.
2014-06-01
A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.
NASA Astrophysics Data System (ADS)
Haines, P. E.; Esler, J. G.; Carver, G. D.
2014-01-01
A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments), to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time-symmetric suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux-limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.
2012-08-01
We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and the multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter and helps to determine the correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space, providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. All together, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with the information from an on-site borehole.
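The stretched-exponential kernel and the gating step can be sketched as follows; the amplitude (in nV), relaxation time and stretching exponent below are illustrative values, and the real scheme gates the full modelled signal rather than this bare kernel.

```python
import numpy as np

def gated_stretched_exp(V0, T2s, c, gates):
    """Gated stretched-exponential MRS decay: V(t) = V0 * exp(-(t/T2s)^c),
    averaged over time windows ('gates') to raise the late-time
    signal-to-noise ratio. gates: list of (t_start, t_end) in seconds."""
    out = []
    for t0, t1 in gates:
        t = np.linspace(t0, t1, 64)
        out.append(np.mean(V0 * np.exp(-(t / T2s) ** c)))   # gate average
    return np.array(out)

# Log-spaced gates widen toward late times, mirroring the gating described:
edges = np.geomspace(5e-3, 0.5, 12)
gates = list(zip(edges[:-1], edges[1:]))
print(gated_stretched_exp(V0=200.0, T2s=0.15, c=0.8, gates=gates).round(1))
```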
Anharmonic, dimensionality and size effects in phonon transport
NASA Astrophysics Data System (ADS)
Thomas, Iorwerth O.; Srivastava, G. P.
2017-12-01
We have developed and employed a numerically efficient semi-ab initio theory, based on density-functional and relaxation-time schemes, to examine anharmonic, dimensionality and size effects in phonon transport in three- and two-dimensional solids of different crystal symmetries. Our method uses third- and fourth-order terms in the crystal Hamiltonian expressed in terms of a temperature-dependent Grüneisen constant. All inputs to the numerical calculations are generated from phonon calculations based on density-functional perturbation theory. It is found that four-phonon processes make important and measurable contributions to lattice thermal resistivity above the Debye temperature. From our numerical results for bulk Si, bulk Ge, bulk MoS2 and monolayer MoS2 we find that the sample length dependence of phonon conductivity is significantly stronger in low-dimensional solids.
2015-01-01
In combined quantum mechanical/molecular mechanical (QM/MM) free energy calculations, it is often advantageous to have a frozen geometry for the quantum mechanical (QM) region. For such multiple-environment single-system (MESS) cases, two schemes are proposed here for estimating the polarization energy: the first scheme, termed MESS-E, involves a Roothaan step extrapolation of the self-consistent field (SCF) energy; whereas the other scheme, termed MESS-H, employs a Newton–Raphson correction using an approximate inverse electronic Hessian of the QM region (which is constructed only once). Both schemes are extremely efficient, because the expensive Fock updates and SCF iterations in standard QM/MM calculations are completely avoided at each configuration. They produce reasonably accurate QM/MM polarization energies: MESS-E can predict the polarization energy within 0.25 kcal/mol in terms of the mean signed error for two of our test cases, solvated methanol and solvated β-alanine, using the M06-2X or ωB97X-D functionals; MESS-H can reproduce the polarization energy within 0.2 kcal/mol for these two cases and for the oxyluciferin–luciferase complex, if the approximate inverse electronic Hessians are constructed with sufficient accuracy. PMID:25321186
NASA Astrophysics Data System (ADS)
Liu, Junzi; Cheng, Lan
2018-04-01
An atomic mean-field (AMF) spin-orbit (SO) approach within exact two-component theory (X2C) is reported, thereby exploiting the exact decoupling scheme of X2C, the one-electron approximation for the scalar-relativistic contributions, the mean-field approximation for the treatment of the two-electron SO contribution, and the local nature of the SO interactions. The Hamiltonian of the proposed SOX2CAMF scheme comprises the one-electron X2C Hamiltonian, the instantaneous two-electron Coulomb interaction, and an AMF SO term derived from spherically averaged Dirac-Coulomb Hartree-Fock calculations of atoms; no molecular relativistic two-electron integrals are required. Benchmark calculations for bond lengths, harmonic frequencies, dipole moments, and electric-field gradients for a set of diatomic molecules containing elements across the periodic table show that the SOX2CAMF scheme offers a balanced treatment for SO and scalar-relativistic effects and appears to be a promising candidate for applications to heavy-element containing systems. SOX2CAMF coupled-cluster calculations of molecular properties for bismuth compounds (BiN, BiP, BiF, BiCl, and BiI) are also presented and compared with experimental results to further demonstrate the accuracy and applicability of the SOX2CAMF scheme.
Thermodynamic evaluation of transonic compressor rotors using the finite volume approach
NASA Technical Reports Server (NTRS)
Moore, J.; Nicholson, S.; Moore, J. G.
1985-01-01
Research at NASA Lewis Research Center gave the opportunity to incorporate new control volumes in the Denton 3-D finite-volume time-marching code. For duct flows, the new control volumes require no transverse smoothing, and this allows calculations with large transverse gradients in properties without significant numerical total pressure losses. Possibilities for improving the Denton code to obtain better distributions of properties through shocks were demonstrated. Much better total pressure distributions through shocks are obtained when the interpolated effective pressure, needed to stabilize the solution procedure, is used to calculate the total pressure. This simple change largely eliminates the undershoot in total pressure downstream of a shock. Overshoots and undershoots in total pressure can then be further reduced by a factor of 10 by adopting the effective density method, rather than the effective pressure method. Use of a Mach number dependent interpolation scheme for pressure then removes the overshoot in static pressure downstream of a shock. The stability of interpolation schemes used for the calculation of effective density is analyzed, and a Mach number dependent scheme is developed, combining the advantages of the correct perfect gas equation for subsonic flow with the stability of 2-point and 3-point interpolation schemes for supersonic flow.
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
Two popular packet-combining error correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC but suffers from a higher packet error rate. Because the state of a wireless channel varies randomly with time, individual application of the SR ARQ, PC and APC schemes cannot deliver the desired throughput; better throughput can be achieved if the transmission scheme is matched to the current channel condition. Based on this approach, an adaptive packet combining scheme is proposed that adapts to the channel condition and carries out transmission using the PC, APC or SR ARQ scheme accordingly. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC and APC schemes used individually.
Pulsed excitation terahertz tomography - multiparametric approach
NASA Astrophysics Data System (ADS)
Lopato, Przemyslaw
2018-04-01
This article deals with pulsed-excitation terahertz computed tomography (THz CT). In contrast to x-ray CT, where just a single value (pixel) is obtained, in pulsed THz CT a time signal is acquired at each position. The recorded waveform can be parametrized: many features carrying various information about the examined structure can be calculated. Based on this, a multiparametric reconstruction algorithm is proposed: an inverse-Radon-transform-based reconstruction is applied for each parameter, and the results are then fused. The performance of the proposed imaging scheme was experimentally verified using dielectric phantoms.
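The pipeline can be sketched as follows, assuming scikit-image's filtered back-projection; the three waveform features and the average-based fusion are illustrative choices, not the paper's exact parametrization.

```python
import numpy as np
from skimage.transform import iradon

def multiparam_reconstruction(waveforms, theta):
    """waveforms: (n_positions, n_angles, n_time) THz time traces.
    Several scalar features are extracted per trace, each feature
    sinogram is inverted with filtered back-projection, and the
    normalized images are fused by averaging."""
    feats = {
        "peak":   waveforms.max(axis=-1),                    # pulse amplitude
        "tof":    waveforms.argmax(axis=-1).astype(float),   # time of flight
        "energy": (waveforms ** 2).sum(axis=-1),             # pulse energy
    }
    recons = []
    for sino in feats.values():
        img = iradon(sino, theta=theta, filter_name="ramp")
        span = img.max() - img.min()
        recons.append((img - img.min()) / span if span > 0 else img * 0.0)
    return np.mean(recons, axis=0)                           # simple fusion

theta = np.linspace(0.0, 180.0, 60, endpoint=False)
fake = np.random.default_rng(0).random((64, 60, 256))        # stand-in data
print(multiparam_reconstruction(fake, theta).shape)
```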
Zhang, Yequn; Djordjevic, Ivan B; Gao, Xin
2012-08-01
Inspired by recent demonstrations of orbital angular momentum (OAM)-based single-photon communications, we propose two quantum-channel models: (i) the multidimensional quantum-key distribution model and (ii) the quantum teleportation model. Both models employ the operator-sum representation, with Kraus operators derived from OAM eigenket transition probabilities. These models are highly important for the future development of quantum error correction schemes to extend the transmission distance and improve data rates of OAM quantum communications. By using these models, we calculate the corresponding quantum-channel capacities in the presence of atmospheric turbulence.
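The operator-sum construction can be illustrated for a toy three-mode OAM channel: from a row-stochastic crosstalk matrix P one obtains Kraus operators E_ij = sqrt(P[i,j]) |j><i|, which satisfy the completeness relation by construction. The matrix below is illustrative, not a computed turbulence model.

```python
import numpy as np

def kraus_from_crosstalk(P):
    """Build Kraus operators E_ij = sqrt(P[i,j]) |j><i| from a matrix of
    mode transition probabilities (rows: sent mode i, cols: received mode
    j). Completeness sum_k E_k^dag E_k = I holds whenever each row of P
    sums to one."""
    d = P.shape[0]
    ops = []
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d))
            E[j, i] = np.sqrt(P[i, j])
            ops.append(E)
    return ops

P = np.array([[0.90, 0.08, 0.02],
              [0.08, 0.84, 0.08],
              [0.02, 0.08, 0.90]])          # illustrative crosstalk matrix
ops = kraus_from_crosstalk(P)
print(np.allclose(sum(E.T @ E for E in ops), np.eye(3)))   # completeness
rho = np.diag([1.0, 0.0, 0.0])                             # send mode 0
rho_out = sum(E @ rho @ E.T for E in ops)                  # operator sum
print(np.round(np.diag(rho_out), 2))                       # [0.9 0.08 0.02]
```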
Application of a symmetric total variation diminishing scheme to aerodynamics of rotors
NASA Astrophysics Data System (ADS)
Usta, Ebru
2002-09-01
The aerodynamic characteristics of rotors in hover have been studied on stretched non-orthogonal grids using spatially high order symmetric total variation diminishing (STVD) schemes. Several companion numerical viscosity terms have been tested. The effects of higher order metrics, higher order load integrations and turbulence effects on the rotor performance have been studied. Where possible, calculations for 1-D and 2-D benchmark problems have been done on uniform grids, and comparisons with exact solutions have been made to understand the dispersion and dissipation characteristics of these algorithms. A baseline finite volume methodology termed TURNS (Transonic Unsteady Rotor Navier-Stokes) is the starting point for this effort. The original TURNS solver solves the 3-D compressible Navier-Stokes equations in integral form using a third order upwind scheme, and is first or second order accurate in time. In the modified solver, the inviscid flux at a cell face is decomposed into two parts. The first part of the flux is symmetric in space, while the second part consists of an upwind-biased numerical viscosity term. The symmetric part of the flux at the cell face is computed to fourth-, sixth- or eighth-order accuracy in space. The numerical viscosity portion of the flux is computed using either a third order accurate MUSCL scheme or a fifth order WENO scheme. A number of results are presented for the two-bladed Caradonna-Tung rotor and for a four-bladed UH-60A rotor in hover. Comparisons with the original TURNS code and with experiments are given. Results are also presented on the effects of metric calculations, load integration algorithms, and turbulence models on the solution accuracy. A total of 64 combinations were studied in this thesis work. For brevity, only a small subset of results highlighting the most important conclusions is presented. It should be noted that use of higher order formulations did not affect the temporal stability of the algorithm and did not require any reduction in the time step. The calculations show that the solution accuracy increases when the third order upwind scheme in the baseline algorithm is replaced with fourth and sixth order accurate symmetric flux calculations. A point of diminishing returns is reached as increasingly larger stencils are used on highly stretched grids. The numerical viscosity term, when computed with the third order MUSCL scheme, is very dissipative and does not resolve the tip vortex well. The WENO5 scheme, on the other hand, significantly improves the tip vortex capturing. The STVD6+WENO5 scheme, in particular, gave the best combination of solution accuracy and efficiency on stretched grids. Spatially fourth order accurate metric calculations were found to be beneficial, but should be used in conjunction with a limiter that drops the metric calculation to second order accuracy in the vicinity of grid discontinuities. High order integration of loads was found to have a beneficial, but small, effect on the computed loads. Replacing the Baldwin-Lomax turbulence model with the one-equation Spalart-Allmaras model resulted in higher than expected profile power contributions. Nevertheless, the one-equation model is recommended for its robustness, its ability to model separated flows at high thrust settings, and the natural manner in which turbulence in the rotor wake may be treated.
Towards a Definition of Basic Numeracy
ERIC Educational Resources Information Center
Girling, Michael
1977-01-01
The author redefines basic numeracy as the ability to use a four-function calculator sensibly. He then defines "sensibly" and considers the place of algorithms in the scheme of mathematical calculations. (MN)
NASA Astrophysics Data System (ADS)
Tsujimoto, Kumiko; Homma, Koki; Koike, Toshio; Ohta, Tetsu
2013-04-01
A coupled model of a distributed hydrological model and a rice growth model was developed in this study. The distributed hydrological model used is the Water and Energy Budget-based Distributed Hydrological Model (WEB-DHM) developed by Wang et al. (2009). This model includes a modified SiB2 (Simple Biosphere Model, Sellers et al., 1996) and the Geomorphology-Based Hydrological Model (GBHM), and thus it can physically calculate both water and energy fluxes. The rice growth model used is the Simulation Model for Rice-Weather relations (SIMRIW)-rainfed developed by Homma et al. (2009). This is an updated version of the original SIMRIW (Horie et al., 1987) and can calculate rice growth while accounting for the yield reduction due to water stress. The purpose of the coupling is the integration of hydrology and crop science to develop a tool to support decision making 1) for determining the necessary agricultural water resources and 2) for allocating limited water resources to various sectors. Efficient water use and optimal water allocation in the agricultural sector are necessary to balance supply and demand of limited water resources. In addition, variations in available soil moisture are the main reason for variations in rice yield. In our model, soil moisture and the Leaf Area Index (LAI) are calculated inside SIMRIW-rainfed, so that these variables can be simulated dynamically and more precisely for rice than with the more general calculations in the original WEB-DHM. At the same time, by coupling SIMRIW-rainfed with WEB-DHM, the lateral flow of soil water, the increase in soil moisture and reduction of river discharge due to irrigation, and their effects on rice growth can be calculated. Agricultural information such as planting date, rice cultivar and fertilization amount is given in a fully distributed manner. The coupled model was validated using LAI and soil moisture in a small basin in western Cambodia (Sangker River Basin). This basin is mostly rainfed paddy, so the irrigation scheme was first switched off. Several simulations with varying irrigation schemes were then performed to determine the optimal irrigation schedule in this basin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Z; Shi, F; Gu, X
2016-06-15
Purpose: This proof-of-concept study is to develop a real-time Monte Carlo (MC) based treatment-dose reconstruction and monitoring system for radiotherapy, especially for treatments with complicated delivery, to catch treatment delivery errors at the earliest possible opportunity and interrupt the treatment only when an unacceptable dosimetric deviation from expectation occurs. Methods: First, an offline scheme is launched to pre-calculate the expected dose from the treatment plan, used as ground truth for real-time monitoring later. Then an online scheme with three concurrent threads is launched during treatment delivery, to reconstruct and monitor the patient dose in a temporally resolved fashion in real time. Thread T1 acquires machine status every 20 ms to calculate and accumulate a fluence map (FM). Once our accumulation threshold is reached, T1 transfers the FM to T2 for dose reconstruction and starts to accumulate a new FM. A GPU-based MC dose calculation is performed on T2 when the MC dose engine is ready and a new FM is available. The reconstructed instantaneous dose is directed to T3 for dose accumulation and real-time visualization. Multiple dose metrics (e.g. maximum and mean dose for targets and organs) are calculated from the currently accumulated dose and compared with the pre-calculated expected values. Once the discrepancies go beyond our tolerance, an error message is sent to interrupt the treatment delivery. Results: A VMAT head-and-neck patient case was used to test the performance of our system; real-time machine status acquisition was simulated. The differences between the actual dose metrics and the expected ones were 0.06%-0.36%, indicating an accurate delivery. A dose reconstruction and monitoring frequency of ~10 Hz was achieved, with 287.94 s of online computation time compared to the 287.84 s treatment delivery time. Conclusion: Our study has demonstrated the feasibility of computing a dose distribution in a temporally resolved fashion in real time and quantitatively and dosimetrically monitoring the treatment delivery.
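A toy version of the three-thread scheme, with queues standing in for the shared buffers; the chunk sizes, dose "engine" and tolerance below are illustrative stand-ins (the real T2 is a GPU Monte Carlo engine).

```python
import queue, threading, time
import numpy as np

fm_q, dose_q = queue.Queue(), queue.Queue()
EXPECTED_MAX, TOL = 10.0, 0.05        # pre-calculated expectation, tolerance

def t1_acquire(n_chunks=10):
    """T1: sample machine status every 20 ms, accumulate a fluence map."""
    for _ in range(n_chunks):
        fm = np.random.rand(8, 8) * 0.1
        time.sleep(0.02)
        fm_q.put(fm)
    fm_q.put(None)                    # end of delivery

def t2_dose():
    """T2: turn each fluence map into a dose (stand-in for the MC engine)."""
    while (fm := fm_q.get()) is not None:
        dose_q.put(fm * 1.0)
    dose_q.put(None)

def t3_monitor():
    """T3: accumulate dose and compare metrics against expectation."""
    total = np.zeros((8, 8))
    while (d := dose_q.get()) is not None:
        total += d
        if total.max() > EXPECTED_MAX * (1 + TOL):
            print("interrupt: dose deviation")   # would halt the delivery
            return
    print("delivery OK, max dose", round(total.max(), 3))

threads = [threading.Thread(target=f) for f in (t1_acquire, t2_dose, t3_monitor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```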
Dynamic Restarting Schemes for Eigenvalue Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Simon, Horst D.
1999-03-10
In studies of the restarted Davidson method, a dynamic thick-restart scheme was found to be excellent in improving the overall effectiveness of the eigenvalue method. This paper extends the study of the dynamic thick-restart scheme to the Lanczos method for symmetric eigenvalue problems and systematically explores a range of heuristics and strategies. We conduct a series of numerical tests to determine their relative strengths and weaknesses on a class of electronic structure calculation problems.
Fuel quality/processing study. Volume 3: Fuel upgrading studies
NASA Technical Reports Server (NTRS)
Jones, G. E., Jr.; Bruggink, P.; Sinnett, C.
1981-01-01
The methods used to calculate the refinery selling prices for the turbine fuels of low quality are described. Detailed descriptions and economics of the upgrading schemes are included. These descriptions include flow diagrams showing the interconnection between processes and the stream flows involved. Each scheme is a complete, integrated, stand-alone facility. Except for the purchase of electricity and water, each scheme provides its own fuel and manufactures, when appropriate, its own hydrogen.
Cooling options for high-average-power laser mirrors
NASA Astrophysics Data System (ADS)
Vojna, D.; Slezak, O.; Lucianetti, A.; Mocek, T.
2015-01-01
Thermally-induced deformations of steering mirrors reflecting 100 J/10 Hz laser pulses in vacuum have been analyzed. This deformation is caused by the thermal stress arising from parasitic absorption of a 1 kW square-shaped flat-top laser beam in the dielectric multi-layer structure. The deformation depends on the amount of absorbed power and the geometry of the mirror, as well as on the heat removal scheme. In our calculations, the following percentages of absorption of the incident power have been used: 1%, 0.5% and 0.1%. The absorbed power has been considered to be much higher than that expected in reality, to assess the worst-case scenario. Rectangular and circular mirrors made of Zerodur (low thermal expansion glass) were considered for these simulations, and the effect of the coating layers on the induced deformations has been neglected. Induced deformation of the mirror surface can significantly degrade the quality of the laser beam in the beam delivery system; therefore, a proper design of the cooling scheme for the mirror is needed to minimize the deformations. Three possible cooling schemes have been investigated: the first takes advantage of radiative cooling of the mirror and a copper heatsink fixed to its rear face, the second is based on additional heat conduction provided by flexible copper wires connected to the mirror holder, and the third combines the two above-mentioned methods.
Control of parallel manipulators using force feedback
NASA Technical Reports Server (NTRS)
Nanua, Prabjot
1994-01-01
Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate-based scheme', uses only position and rate information for feedback. The second scheme, the 'force-based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme; it is a simple constant-gain control scheme better suited to parallel mechanisms, and it can be easily modified for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force-based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate-based scheme directly affects the other two directions.
A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry
NASA Astrophysics Data System (ADS)
Stamer, Torsten; Inutsuka, Shu-ichiro
2018-06-01
We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.
A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations
NASA Technical Reports Server (NTRS)
Ghosh, Amitabha
1997-01-01
This report discusses some analytical procedures to enhance the real-time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12-Foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner, necessitating further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems, then discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.
Nonperturbative renormalization of quark bilinear operators and B{sub K} using domain wall fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aoki, Y.; Dawson, C.; Brookhaven National Laboratory, Upton, New York 11973
2008-09-01
We present a calculation of the renormalization coefficients of the quark bilinear operators and the K-K̄ mixing parameter B_K. The coefficients relating the bare lattice operators to those in the RI/MOM scheme are computed nonperturbatively and then matched perturbatively to the MS scheme. The coefficients are calculated on the RBC/UKQCD 2+1 flavor dynamical lattice configurations. Specifically, we use a 16³×32 lattice volume, the Iwasaki gauge action at β = 2.13, and domain wall fermions with L_s = 16.
Pfeiffer, Florian; Rauhut, Guntram
2011-10-13
Accurate anharmonic frequencies are provided for molecules of current research, i.e., diazirines, diazomethane, the corresponding fluorinated and deuterated compounds, their dioxygen analogs, and others. Vibrational-state energies were obtained from state-specific vibrational multiconfiguration self-consistent field theory (VMCSCF) based on multilevel potential energy surfaces (PES) generated from explicitly correlated coupled cluster, CCSD(T)-F12a, and double-hybrid density functional calculations, B2PLYP. To accelerate the vibrational structure calculations, a configuration selection scheme as well as a polynomial representation of the PES have been exploited. Because experimental data are scarce for these systems, many calculated frequencies of this study are predictions and may guide experiments to come.
A biomechanical model for actively controlled snow ski bindings.
Hull, M L; Ramming, J E
1980-11-01
Active control of snow ski bindings is a new design concept which potentially offers improved protection from lower extremity injury. Implementation of this concept entails measuring physical variables and calculating loading and/or deformation in injury-prone musculoskeletal components. The subject of this paper is the definition of a biomechanical model for calculating tibia torsion based on measurements of torsion loading between the boot and ski. Previous control schemes have used leg displacement only to indicate tibia torsion. The contributions of both inertial and velocity-dependent torques to tibia loading are explored, and it is shown that both these moments must be included in addition to displacement-dependent moments. A new analog controller design which includes inertia, damping, and stiffness terms in the tibia load calculation is also presented.
NASA Astrophysics Data System (ADS)
Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei
2018-03-01
We report results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a one-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)%, respectively, in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.
Surface Modeling of Workpiece and Tool Trajectory Planning for Spray Painting Robot
Tang, Yang; Chen, Wei
2015-01-01
Automated tool trajectory planning for spray-painting robots is still a challenging problem, especially for a large free-form surface. A grid approximation of a free-form surface is adopted in CAD modeling in this paper. A free-form surface model is approximated by a set of flat patches. We describe here an efficient and flexible tool trajectory optimization scheme using T-Bézier curves calculated in a new way from trigonometrical bases. The distance between the spray gun and the free-form surface along the normal vector is varied. Automotive body parts, which are large free-form surfaces, are used to test the scheme. The experimental results show that the trajectory planning algorithm achieves satisfactory performance. This algorithm can also be extended to other applications. PMID:25993663
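For orientation, a curve built from trigonometric blending functions can be evaluated as in the sketch below; the quadratic sin/cos basis is a hypothetical stand-in chosen only to illustrate the T-Bézier idea, not the basis derived in the paper.

```python
import numpy as np

def trig_bezier(control_points, t):
    """Evaluate an illustrative quadratic trigonometric Bezier-type curve.

    The blending functions form a partition of unity and interpolate the
    end control points, mimicking how T-Bezier curves are assembled from
    trigonometric polynomials (hypothetical basis, for illustration only).
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in control_points)
    s, c = np.sin(0.5 * np.pi * t), np.cos(0.5 * np.pi * t)
    b0 = (1.0 - s) ** 2            # weight of the first control point
    b2 = (1.0 - c) ** 2            # weight of the last control point
    b1 = 1.0 - b0 - b2             # middle weight; basis sums to one
    return b0 * p0 + b1 * p1 + b2 * p2

# sample a planar tool path over t in [0, 1]
path = np.array([trig_bezier([(0, 0), (1, 2), (3, 0)], t)
                 for t in np.linspace(0.0, 1.0, 50)])
```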
A nonlinear estimator for reconstructing the angular velocity of a spacecraft without rate gyros
NASA Technical Reports Server (NTRS)
Polites, M. E.; Lightsey, W. D.
1991-01-01
A scheme for estimating the angular velocity of a spacecraft without rate gyros is presented. It is based upon a nonlinear estimator whose inputs are measured inertial vectors and their calculated time derivatives relative to vehicle axes. It works for all spacecraft attitudes and requires no knowledge of attitude. It can use measurements from a variety of onboard sensors like Sun sensors, star trackers, or magnetometers, alone or in concert. It can also use look angle measurements from onboard tracking antennas for tracking and data relay satellites or global positioning system satellites. In this paper, it is applied to a Sun point scheme on the Hubble Space Telescope assuming all or most of its onboard rate gyros have failed. Simulation results are presented for verification.
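The kinematic relation underlying such estimators is that an inertially fixed vector v observed in body axes obeys dv/dt = -ω × v, so two or more non-parallel measured vectors and their derivatives determine ω. The sketch below solves that relation by least squares; it is a simplified linear illustration of the idea, not the paper's nonlinear estimator.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    vx, vy, vz = v
    return np.array([[0.0, -vz,  vy],
                     [ vz, 0.0, -vx],
                     [-vy,  vx, 0.0]])

def estimate_body_rate(vectors, derivs):
    """Least-squares body rate from measured inertial vectors and their
    body-frame time derivatives: for an inertially fixed vector v,
    dv/dt|_body = -omega x v = skew(v) @ omega, so stacking two or more
    non-parallel vectors overdetermines omega."""
    A = np.vstack([skew(v) for v in vectors])
    b = np.hstack([np.asarray(d, dtype=float) for d in derivs])
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# example: true rate (0.01, -0.02, 0.005) rad/s seen by two sensor vectors
w = np.array([0.01, -0.02, 0.005])
vs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
ds = [-np.cross(w, v) for v in vs]
print(estimate_body_rate(vs, ds))   # recovers w
```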
Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.
Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A
2018-02-01
We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.
Numerical analysis of multicomponent responses of surface-hole transient electromagnetic method
NASA Astrophysics Data System (ADS)
Meng, Qing-Xin; Hu, Xiang-Yun; Pan, He-Ping; Zhou, Feng
2017-03-01
We calculate the multicomponent responses of the surface-hole transient electromagnetic method. Conventional methods and models are based on regular local targets and are therefore unsuitable as geoelectric models with conductive surrounding rocks. We propose a calculation and analysis scheme based on numerical simulations of the subsurface transient electromagnetic fields. In the modeling of the electromagnetic fields, the forward simulations are performed using the finite-difference time-domain method and the discrete image method, which combines the Gaver-Stehfest inverse Laplace transform with the Prony method, to solve the initial electromagnetic fields. The precision of the iterative computations is ensured by using transmission boundary conditions. For the response analysis, we customize geoelectric models consisting of near-borehole targets and conductive wall rocks and run forward simulations. The observed electric fields are converted into induced electromotive force responses using multicomponent observation devices. By comparing the transient electric fields and multicomponent responses under different conditions, we find that the multicomponent induced electromotive force responses are related to the horizontal and vertical gradient variations of the transient electric field at different times. The characteristics of the response are determined by the variation of the subsurface transient electromagnetic fields, i.e., their diffusion, attenuation, and distortion under different conditions, as well as by the electromagnetic fields at the observation positions. The proposed calculation and analysis scheme accounts for both the surrounding rocks and the anomalous field of the local targets, and can therefore explain the geological data better than conventional transient-field response analysis of local targets.
Expressing analytical performance from multi-sample evaluation in laboratory EQA.
Thelen, Marc H M; Jansen, Rob T P; Weykamp, Cas W; Steigstra, Herman; Meijer, Ron; Cobbaert, Christa M
2017-08-28
To provide its participants with an external quality assessment system (EQAS) that can be used to check trueness, the Dutch EQAS organizer, Organization for Quality Assessment of Laboratory Diagnostics (SKML), has innovated its general chemistry scheme over the last decade by introducing fresh frozen commutable samples whose values were assigned by Joint Committee for Traceability in Laboratory Medicine (JCTLM)-listed reference laboratories using reference methods where possible. Here we present some important innovations in our feedback reports that allow participants to judge whether their trueness and imprecision meet predefined analytical performance specifications. Sigma metrics are used to calculate performance indicators named 'sigma values'. Tolerance intervals are based on both Total Error allowable (TEa) according to biological variation data and state of the art (SA) in line with the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Milan consensus. The existing SKML feedback reports, which express trueness as the agreement between the regression line through the results of the last 12 months and the values obtained from reference laboratories and calculate imprecision from the residuals of the regression line, are now enriched with sigma values calculated from the degree to which the combination of trueness and imprecision is within tolerance limits. The information, and its reduction to a simple two-point scoring system, is also represented graphically in addition to the existing difference plot. By adding sigma metrics-based performance evaluation in relation to both TEa and SA tolerance intervals to its EQAS schemes, SKML provides its participants with a powerful and actionable check on accuracy.
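A sigma value of the kind described is commonly computed as (shown for orientation; SKML's exact formulation may differ in detail)

\[
\sigma \;=\; \frac{\mathrm{TEa} - |\mathrm{bias}|}{\mathrm{CV}},
\]

where the bias (trueness) comes from the regression against the reference values, the CV (imprecision) from the residuals about that regression, and TEa is the applicable tolerance limit; a larger σ means a larger margin between the observed analytical error and the specification.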
Energy-saving scheme based on downstream packet scheduling in ethernet passive optical networks
NASA Astrophysics Data System (ADS)
Zhang, Lincong; Liu, Yejun; Guo, Lei; Gong, Xiaoxue
2013-03-01
With increasing network sizes, the energy consumption of Passive Optical Networks (PONs) has grown significantly. Therefore, it is important to design effective energy-saving schemes in PONs. Generally, energy-saving schemes have focused on sleeping the low-loaded Optical Network Units (ONUs), which tends to bring large packet delays. Further, the traditional ONU sleep modes are not capable of sleeping the transmitter and receiver independently, even when one of them has no packets to handle. Clearly, this approach wastes energy. Thus, in this paper, we propose an Energy-Saving scheme based on downstream Packet Scheduling (ESPS) in Ethernet PON (EPON). First, we design both an algorithm and a rule for downstream packet scheduling at the inter- and intra-ONU levels, respectively, to reduce the downstream packet delay. After that, we propose a hybrid sleep mode that contains not only an ONU deep sleep mode but also independent sleep modes for the transmitter and the receiver. This ensures that the energy consumed by the ONUs is minimal. To realize the hybrid sleep mode, a modified GATE control message is designed that carries 10 time points for the sleep processes. In ESPS, the 10 time points are calculated according to the allocated bandwidths in both the upstream and the downstream. The simulation results show that ESPS outperforms the traditional Upstream Centric Scheduling (UCS) scheme in terms of energy consumption and the average delay for both real-time and non-real-time packets downstream. The simulation results also show that the average energy consumption of each ONU in larger-sized networks is less than that in smaller-sized networks; hence, our ESPS is better suited for larger-sized networks.
Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2011-11-14
We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave function based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional consistently yields J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics.
Soft evolution of multi-jet final states
Gerwick, Erik; Schumann, Steffen; Höche, Stefan; ...
2015-02-16
We present a new framework for computing resummed and matched distributions in processes with many hard QCD jets. The intricate color structure of soft gluon emission at large angles renders resummed calculations highly non-trivial in this case. We automate all ingredients necessary for the color evolution of the soft function at next-to-leading-logarithmic accuracy, namely the selection of the color bases and the projections of color operators and Born amplitudes onto those bases. Explicit results for all QCD processes with up to 2 → 5 partons are given. We also devise a new tree-level matching scheme for resummed calculations which exploits a quasi-local subtraction based on the Catani-Seymour dipole formalism. We implement both resummation and matching in the Sherpa event generator. As a proof of concept, we compute the resummed and matched transverse-thrust distribution for hadronic collisions.
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, HOG feature extraction is unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction on an FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be completed in one pixel period by these computing units.
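A standard way to remove the arctangent, consistent with the simplification described, is to decide a gradient's orientation bin with multiply-and-compare tests against fixed bin-boundary angles; the sine/cosine factors are precomputed constants in hardware. The sketch below illustrates the principle (it is not the paper's exact circuit).

```python
import numpy as np

def hog_orientation_bin(gx, gy, n_bins=9):
    """Unsigned-orientation binning without an arctangent.

    Fold (gx, gy) into the upper half-plane (unsigned 0..180 deg
    orientation); then, for boundary angles theta_k = k*pi/n_bins,
    orientation < theta_k  <=>  gy*cos(theta_k) < gx*sin(theta_k),
    which needs only multiplies and compares against constants."""
    if gy < 0:                      # same unsigned orientation
        gx, gy = -gx, -gy
    for k in range(1, n_bins):
        th = np.pi * k / n_bins     # cos(th), sin(th) are ROM constants
        if gy * np.cos(th) < gx * np.sin(th):
            return k - 1
    return n_bins - 1

print(hog_orientation_bin(1.0, 1.0))   # 45 deg -> bin 2 with 20-deg bins
```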
Zhang, R. L.; Damewood, L.; Zeng, Y. J.; ...
2017-07-07
To search for half-metallic materials for spintronic applications, instead of using an expensive trial-and-error experimental scheme, it is more efficient to use first-principles calculations to design materials first, and then grow them. In particular, using a priori information of the structural stability and the effect of the spin–orbit interaction (SOI) enables experimentalists to focus on favorable properties that make growing half-metals easier. We suggest that using acoustic phonon spectra is the best way to address the stability of promising half-metallic materials. Additionally, by carrying out accurate first-principles calculations, we propose two criteria for neglecting the SOI so the half-metallicity persists. As a result, based on the mechanical stability and the negligible SOI, we identified two half-metals, β-LiCrAs and β-LiMnSi, as promising half-Heusler alloys worth growing.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Warming, R. F.; Harten, A.
1985-01-01
First-order, second-order, and implicit total variation diminishing (TVD) schemes are reviewed using the modified flux approach. Some transient and steady-state calculations are then carried out to illustrate the applicability of these schemes to the Euler equations. It is shown that the second-order explicit TVD schemes generate good shock resolution for both transient and steady-state one-dimensional and two-dimensional problems. Numerical experiments for a quasi-one-dimensional nozzle problem show that the second-order implicit TVD scheme produces a fairly rapid convergence rate and remains stable even when running with a Courant number of 10^6.
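To make the TVD idea concrete, the sketch below advances linear advection with a generic MUSCL-type, minmod-limited second-order upwind flux; it illustrates the class of schemes reviewed, not Harten's specific modified-flux construction.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter used in many second-order TVD schemes."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step_advection(u, c):
    """One step for u_t + a u_x = 0 on a periodic grid, a > 0,
    with Courant number c = a*dt/dx in (0, 1]."""
    du_minus = u - np.roll(u, 1)          # backward differences
    du_plus = np.roll(u, -1) - u          # forward differences
    slope = minmod(du_minus, du_plus)     # limited slope per cell
    flux = u + 0.5 * (1.0 - c) * slope    # flux at each cell's right face
    return u - c * (flux - np.roll(flux, 1))

# advect a square pulse: no spurious oscillations at the discontinuities
u = np.where(np.abs(np.linspace(0, 1, 200) - 0.3) < 0.1, 1.0, 0.0)
for _ in range(100):
    u = tvd_step_advection(u, c=0.5)
```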
The constant displacement scheme for tracking particles in heterogeneous aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, X.H.; Gomez-Hernandez, J.J.
1996-01-01
Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme which adjusts automatically the time step for each particle according to the local pore velocity, so that each particle always travels a constant distance, is shown to be computationally faster for the same degree of accuracy than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model, with natural log-transmissivity variance of 4, can be 8.6 times faster than using the constant time step scheme.
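The core of the scheme fits in a few lines: each particle's time step is chosen so that its advective displacement is fixed. A minimal sketch (advection only; the random-walk dispersion step of such simulations is omitted):

```python
import numpy as np

def constant_displacement_step(x, velocity_field, ds):
    """Advance one particle by a fixed displacement ds instead of a
    fixed dt: the local step is dt = ds / |v(x)|, so particles in fast
    zones take many short steps and slow particles waste no work."""
    v = np.asarray(velocity_field(x), dtype=float)
    dt = ds / np.linalg.norm(v)      # per-particle, per-step increment
    return x + v * dt, dt            # new position and elapsed time

# example: shear flow v = (1 + y, 0); accumulate each particle's clock
x, t = np.array([0.0, 0.5]), 0.0
for _ in range(1000):
    x, dt = constant_displacement_step(
        x, lambda p: (1.0 + p[1], 0.0), ds=0.01)
    t += dt
```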
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A.; Maslanik, J.; Key, J.; Weaver, R.; Barry, R.
1990-01-01
The application of multi-spectral satellite data to estimate polar surface energy fluxes is addressed. To what accuracy and over which geographic areas large scale energy budgets can be estimated are investigated based upon a combination of available remote sensing and climatological data sets. The general approach was to: (1) formulate parameterization schemes for the appropriate sea ice energy budget terms based upon the remotely sensed and/or in-situ data sets; (2) conduct sensitivity analyses using as input both natural variability (observed data in regional case studies) and theoretical variability based upon energy flux model concepts; (3) assess the applicability of these parameterization schemes to both regional and basin wide energy balance estimates using remote sensing data sets; and (4) assemble multi-spectral, multi-sensor data sets for at least two regions of the Arctic Basin and possibly one region of the Antarctic. The type of data needed for a basin-wide assessment is described, and the temporal coverage of these data sets is determined by data availability and by the needs defined by the parameterization schemes. The topics addressed are as follows: (1) heat flux calculations from SSM/I and LANDSAT data in the Bering Sea; (2) energy flux estimation using passive microwave data; (3) fetch and stability sensitivity estimates of turbulent heat flux; and (4) a surface temperature algorithm.
Core excitations across the neutron shell gap in 207Tl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, E.; Podolyák, Zs.; Grawe, H.
2015-05-05
The single closed-neutron-shell, one proton–hole nucleus 207Tl was populated in deep-inelastic collisions of a 208Pb beam with a 208Pb target. The yrast and near-yrast level scheme has been established up to high excitation energy, comprising an octupole phonon state and a large number of core excited states. Based on shell-model calculations, all observed single core excitations were established to arise from the breaking of the N=126 neutron core. While the shell-model calculations correctly predict the ordering of these states, their energies are compressed at high spins. It is concluded that this compression is an intrinsic feature of shell-model calculations using two-body matrix elements developed for the description of two-body states, and that multiple core excitations need to be considered in order to accurately calculate the energy spacings of the predominantly three-quasiparticle states.
NASA Technical Reports Server (NTRS)
El-Hady, N. M.
1981-01-01
A computer program HADY-I for calculating the linear incompressible or compressible stability characteristics of the laminar boundary layer on swept and tapered wings is described. The eigenvalue problem and its adjoint arising from the linearized disturbance equations with the appropriate boundary conditions are solved numerically using a combination of a Newton-Raphson iterative scheme and a variable step size integrator based on the Runge-Kutta-Fehlberg fifth-order formulas. The integrator is used in conjunction with a modified Gram-Schmidt orthonormalization procedure. The computer program HADY-I calculates the growth rates of crossflow or streamwise Tollmien-Schlichting instabilities. It also calculates the group velocities of these disturbances. It is restricted to parallel stability calculations, where the boundary layer (meanflow) is assumed to be parallel. The meanflow solution is an input to the program.
Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.
Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong
2012-10-17
We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
One-loop corrections to light cone wave functions: The dipole picture DIS cross section
NASA Astrophysics Data System (ADS)
Hänninen, H.; Lappi, T.; Paatelainen, R.
2018-06-01
We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with d-dimensional tensors arising from loop integrals, in a way that can be fully automatized. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading order accuracy. Our results, obtained using the four dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.
A 3D model retrieval approach based on Bayesian networks lightfield descriptor
NASA Astrophysics Data System (ADS)
Xiao, Qinhan; Li, Yanjun
2009-12-01
A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of the existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. Firstly, the 3D model is placed in a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from these binary views, and the shape-feature sequence is learned into a BN model using a BN learning algorithm. Secondly, we propose a new 3D model retrieval method that calculates the Kullback-Leibler Divergence (KLD) between BNLDs. Benefiting from the statistical learning, our BNLD is robust to noise compared to the existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of our proposed methodology.
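The retrieval distance referred to is the standard Kullback-Leibler divergence between the distributions P and Q encoded by two descriptors,

\[
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_i P(i)\,\log\frac{P(i)}{Q(i)},
\]

which is asymmetric in P and Q; retrieval schemes of this kind often symmetrize it, e.g. by using D_KL(P||Q) + D_KL(Q||P).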
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skew Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the derivations of the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to these criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
Assessing and Upgrading Ocean Mixing for the Study of Climate Change
NASA Astrophysics Data System (ADS)
Howard, A. M.; Fells, J.; Lindo, F.; Tulsee, V.; Canuto, V.; Cheng, Y.; Dubovikov, M. S.; Leboissetier, A.
2016-12-01
Climate is critical. Climate variability affects us all; Climate Change is a burning issue. Droughts, floods, other extreme events, and Global Warming's effects on these and problems such as sea-level rise and ecosystem disruption threaten lives. Citizens must be informed to make decisions concerning climate such as "business as usual" vs. mitigating emissions to keep warming within bounds. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. To make useful predictions we must realistically model each component of the climate system, including the ocean, whose critical role includes transporting and storing heat and dissolved CO2. We need physically based parameterizations of key ocean processes that can't be put explicitly in a global climate model, e.g. vertical and lateral mixing. The NASA-GISS turbulence group uses theory to model mixing including: 1) a comprehensive scheme for small scale vertical mixing, including convection and shear, internal waves and double-diffusion, and bottom tides; 2) a new parameterization for the lateral and vertical mixing by mesoscale eddies. For better understanding we write our own programs. To assess the modelling, MATLAB programs visualize and calculate statistics, including means, standard deviations and correlations, on NASA-GISS OGCM output with different mixing schemes and help us study drift from observations. We also try to upgrade the schemes, e.g. the bottom tidal mixing parameterization's roughness, calculated from high resolution topographic data using Gaussian weighting functions with cut-offs. We study the effects of their parameters to improve them. A FORTRAN program extracts topography data subsets of manageable size for a MATLAB program, tested on idealized cases, to visualize and calculate roughness on. Students are introduced to modeling a complex system, gain a deeper appreciation of climate science, programming skills and familiarity with MATLAB, while furthering climate science by improving our mixing schemes. We are incorporating climate research into our college curriculum. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority serving institution in central Brooklyn. Supported by NSF Award AGS-1359293.
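As an illustration of the roughness calculation described (Gaussian-weighted statistics of topography with a cut-off), the sketch below computes a locally weighted standard deviation along a 1-D profile; the group's actual MATLAB/FORTRAN implementation, parameter values, and 2-D treatment will differ.

```python
import numpy as np

def gaussian_roughness(topo, dx, length_scale, cutoff=3.0):
    """Roughness of a 1-D topography profile: the Gaussian-weighted
    standard deviation of height about the local weighted mean,
    truncated at `cutoff` length scales (illustrative sketch)."""
    n = topo.size
    half = int(cutoff * length_scale / dx)            # cut-off in samples
    rough = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        d = (np.arange(lo, hi) - i) * dx              # distances to center
        w = np.exp(-0.5 * (d / length_scale) ** 2)    # Gaussian weights
        w /= w.sum()
        mean = np.sum(w * topo[lo:hi])
        rough[i] = np.sqrt(np.sum(w * (topo[lo:hi] - mean) ** 2))
    return rough

z = np.cumsum(np.random.default_rng(0).normal(size=500))  # synthetic profile
r = gaussian_roughness(z, dx=1.0, length_scale=10.0)
```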
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f
NASA Astrophysics Data System (ADS)
Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi
2018-03-01
We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the \\overline{MS} scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the \\overline{MS} renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h→4 f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and ready for application.
Equivalent-circuit models for electret-based vibration energy harvesters
NASA Astrophysics Data System (ADS)
Phu Le, Cuong; Halvorsen, Einar
2017-08-01
This paper presents a complete analysis to build a tool for modelling electret-based vibration energy harvesters. The calculational approach includes all possible effects of fringing fields that may have significant impact on output power. The transducer configuration consists of two sets of metal strip electrodes on a top substrate that faces electret strips deposited on a bottom movable substrate functioning as a proof mass. Charge distribution on each metal strip is expressed by series expansion using Chebyshev polynomials multiplied by a reciprocal square-root form. The Galerkin method is then applied to extract all charge induction coefficients. The approach is validated by finite element calculations. From the analytic tool, a variety of connection schemes for power extraction in slot-effect and cross-wafer configurations can be lumped to a standard equivalent circuit with inclusion of parasitic capacitance. Fast calculation of the coefficients is also obtained by a proposed closed-form solution based on leading terms of the series expansions. The achieved analytical result is an important step for further optimisation of the transducer geometry and maximising harvester performance.
Kussmann, Jörg; Ochsenfeld, Christian
2007-11-28
A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.
Comparison of different pairing fluctuation approaches to BCS-BEC crossover
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Kathryn; Chen Qijin; Zhejiang Institute of Modern Physics and Department of Physics, Zhejiang University, Hangzhou, Zhejiang 310027
2010-02-15
The subject of BCS-Bose-Einstein condensation (BEC) crossover is particularly exciting because of its realization in ultracold atomic Fermi gases and its possible relevance to high temperature superconductors. In this paper we review the body of theoretical work on this subject, which represents a natural extension of the seminal papers by Leggett and by Nozieres and Schmitt-Rink (NSR). The former addressed only the ground state, now known as the 'BCS-Leggett' wave-function, and the key contributions of the latter pertain to calculations of the superfluid transition temperature T{sub c}. These two papers have given rise to two main and, importantly, distinct theoretical schools in the BCS-BEC crossover literature. The first of these extends the BCS-Leggett ground state to finite temperature and the second extends the NSR scheme away from T{sub c} both in the superfluid and normal phases. It is now rather widely accepted that these extensions of NSR produce a different ground state than that first introduced by Leggett. This observation provides a central motivation for the present paper which seeks to clarify the distinctions in the two approaches. Our analysis shows how the NSR-based approach views the bosonic contributions more completely but treats the fermions as 'quasi-free'. By contrast, the BCS-Leggett based approach treats the fermionic contributions more completely but treats the bosons as 'quasi-free'. In a related fashion, the NSR-based schemes approach the crossover between BCS and BEC by starting from the BEC limit and the BCS-Leggett based scheme approaches this crossover by starting from the BCS limit. Ultimately, one would like to combine these two schemes. There are, however, many difficult problems to surmount in any attempt to bridge the gap in the two theory classes. In this paper we review the strengths and weaknesses of both approaches. The flexibility of the BCS-Leggett based approach and its ease of handling make it widely used in T=0 applications, although the NSR-based schemes tend to be widely used at T{ne}0. To reach a full understanding, it is important in the future to invest effort in investigating in more detail the T=0 aspects of NSR-based theory and at the same time the T{ne}0 aspects of BCS-Leggett theory.
NASA Astrophysics Data System (ADS)
Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul
An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.
NASA Astrophysics Data System (ADS)
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows the time-domain numerical solution by an explicit finite differences scheme. The proposed physical model thus overcomes the limitations of the one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, it also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
The fragment spin difference scheme for triplet-triplet energy transfer coupling
NASA Astrophysics Data System (ADS)
You, Zhi-Qiang; Hsu, Chao-Ping
2010-08-01
To calculate the electronic couplings in both inter- and intramolecular triplet energy transfer (TET), we have developed the "fragment spin difference" (FSD) scheme. FSD is a generalization of the "fragment charge difference" (FCD) method of Voityuk et al. [J. Chem. Phys. 117, 5607 (2002)] for electron transfer (ET) coupling: in FSD, the spin population difference is used in place of the charge difference in FCD. FSD is derived from the eigenstate energies and populations, and therefore the FSD couplings contain all contributions in the Hamiltonian as well as the potential overlap effect. In the present work, two series of molecules, all-trans-polyene oligomers and polycyclic aromatic hydrocarbons, were tested for intermolecular TET. The TET coupling results are largely similar to those from the previously developed direct coupling scheme, with FSD being easier and more flexible in use. On the other hand, the Dexter exchange integral, a quantity that is often used as an approximation to the TET coupling, varies over a large range as compared to the corresponding TET coupling. To test the FSD for intramolecular TET, we have calculated the TET couplings between zinc(II)-porphyrin and free-base porphyrin separated by different numbers of p-phenyleneethynylene bridge units. Our estimated rate constants are consistent with experimentally measured TET rates. The FSD method can be used for both intermolecular and intramolecular TET, regardless of symmetry. This general applicability is an improvement over most existing methodologies.
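For orientation, the FCD-type two-state expression that FSD adapts has the form (schematic notation; Δs denotes the spin-population differences between the two fragments, replacing the charge differences Δq of FCD):

\[
V_{if} \;=\; \frac{\bigl(E_f - E_i\bigr)\,\bigl|\Delta s_{if}\bigr|}{\sqrt{\bigl(\Delta s_{ii} - \Delta s_{ff}\bigr)^{2} + 4\,\Delta s_{if}^{2}}},
\]

where E_i and E_f are the adiabatic eigenstate energies and the diagonal and off-diagonal Δs are computed from the corresponding state and transition spin populations.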
The Dualized Standard Model and its Applications — AN Interim Report
NASA Astrophysics Data System (ADS)
Chan, Hong-Mo; Tsou, Sheung Tsun
Based on a non-Abelian generalization of electric-magnetic duality, the Dualized Standard Model (DSM) suggests a natural explanation for exactly three generations of fermions as the "dual colour" \widetilde{SU}(3) symmetry broken in a particular manner. The resulting scheme then offers on the one hand a fermion mass hierarchy and a perturbative method for calculating the mass and mixing parameters of the Standard Model fermions, and on the other hand testable predictions for new phenomena ranging from rare meson decays to ultra-high energy cosmic rays. Calculations to one-loop order give, at the cost of adjusting only three real parameters, values for the following quantities all (except one) in very good agreement with experiment: the quark CKM matrix elements |V_rs|, the lepton CKM matrix elements |U_rs|, and the second generation masses mc, ms, mμ. This means, in particular, that it gives near maximal mixing Uμ3 between νμ and ντ as observed by SuperKamiokande, Kamiokande and Soudan, while keeping small the corresponding quark angles Vcb, Vts. In addition, the scheme gives (i) rough order-of-magnitude estimates for the masses of the lowest generation, (ii) predictions for low energy FCNC effects such as KL → eμ, and (iii) a possible explanation for the long-standing puzzle of air showers beyond the GZK cut-off. All these together, however, still represent but a portion of the possible physical consequences derivable from the DSM scheme, the majority of which are yet to be explored.
A new family of high-order compact upwind difference schemes with good spectral resolution
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Yao, Zhaohui; He, Feng; Shen, M. Y.
2007-12-01
This paper presents a new family of high-order compact upwind difference schemes. The unknowns in the proposed schemes are not only the values of the function but also those of its first and higher derivatives. Derivative terms appear only on the upwind side of the stencil. One can calculate all the first derivatives exactly as one solves explicit schemes when the boundary conditions of the problem are non-periodic. When the proposed schemes are applied to periodic problems, only periodic bi-diagonal or periodic block-bi-diagonal matrix inversions are required. Resolution optimization is used to enhance the spectral representation of the first derivative, and this produces a scheme with the highest spectral accuracy among all known compact schemes. For non-periodic boundary conditions, boundary schemes constructed with the aid of an assistant scheme make the interior schemes stable for any selected length scale at every point in the computational domain while still satisfying the principle of optimal resolution. An improved shock-capturing method is also developed. Finally, both the effectiveness of the new hybrid method and the accuracy of the proposed schemes are verified on four benchmark test cases.
Time domain numerical calculations of unsteady vortical flows about a flat plate airfoil
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Yu, Ping; Scott, J. R.
1989-01-01
A time domain numerical scheme is developed to solve for the unsteady flow about a flat plate airfoil due to imposed upstream, small amplitude, transverse velocity perturbations. The governing equation for the resulting unsteady potential is a homogeneous, constant coefficient, convective wave equation. Accurate solution of the problem requires the development of approximate boundary conditions which correctly model the physics of the unsteady flow in the far field. A uniformly valid far field boundary condition is developed, and numerical results are presented using this condition. The stability of the scheme is discussed, and the stability restriction for the scheme is established as a function of the Mach number. Finally, comparisons are made with the frequency domain calculation by Scott and Atassi, and the relative strengths and weaknesses of each approach are assessed.
A group electronegativity equalization scheme including external potential effects.
Leyssens, Tom; Geerlings, Paul; Peeters, Daniel
2006-07-20
By calculating the electron affinity and ionization energy of different functional groups, CCSD electronegativity values are obtained, which implicitly account for the effect of the molecular environment. The latter is approximated using a chemically justified point charge model. On the basis of Sanderson's electronegativity equalization principle, this approach is shown to lead to reliable "group in molecule" electronegativities. Using a slight adjustment of the modeled environment and first-order principles, an electronegativity equalization scheme is obtained which implicitly accounts for the major part of the external potential effect. This scheme can be applied in a predictive manner to estimate the charge transfer between two functional groups, without having to rely on cumbersome calibrations. A very satisfactory correlation is obtained between these charge transfers and those obtained from an ab initio calculation of the entire molecule.
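In the simplest two-group picture underlying such schemes, letting each group's electronegativity vary linearly with its charge and equalizing the two gives the familiar estimate (χ electronegativities, η hardnesses; shown for orientation rather than as the paper's full working model):

\[
\Delta N \;=\; \frac{\chi_B - \chi_A}{\eta_A + \eta_B},
\]

i.e. the amount of charge ΔN transferred from group A to group B grows with the electronegativity difference and is damped by the combined hardness of the two groups.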
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000^3 unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
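Each 1-D sweep of an ADI/LOD step reduces to one tridiagonal solve per grid line, which is why the method streams through memory at near-copy speed. A minimal sketch of that kernel (generic implementation, not the article's optimized code):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a sub-diagonal,
    b diagonal, c super-diagonal, d right-hand side (all length n)."""
    n = b.size
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# one implicit 1-D diffusion sweep: (I - r*L) u_new = u_old, r = k*dt/dx^2
n, r = 8, 0.5
a = np.full(n, -r); c = np.full(n, -r); b = np.full(n, 1.0 + 2.0 * r)
a[0] = c[-1] = 0.0
u_new = thomas_solve(a, b, c, np.ones(n))
```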
Reduced Equations for Calculating the Combustion Rates of Jet-A and Methane Fuel
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2003-01-01
Simplified kinetic schemes for Jet-A and methane fuels were developed for use in numerical combustion codes, such as the National Combustor Code (NCC) being developed at Glenn. The kinetic schemes presented here result in a correlation that gives the chemical kinetic time as a function of initial overall cell fuel/air ratio, pressure, and temperature. The correlations would then be used with the turbulent mixing times to determine the limiting properties and progress of the reaction. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentration of carbon monoxide as a function of fuel/air ratio, pressure, and temperature. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates and the values obtained from the equilibrium correlations were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx were obtained for both Jet-A fuel and methane.
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
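The trade-off the paper quantifies, where larger blocks waste work on stored zeros while smaller blocks forfeit BLAS efficiency, can be seen in a toy block-sparse product like the one below (illustrative sketch; the block keys and sizes are hypothetical):

```python
import numpy as np

def blocked_sparse_matmul(A_blocks, B_blocks, n_blocks, block_size):
    """Multiply two block-sparse matrices stored as {(i, j): dense block}.

    Only stored (non-negligible) blocks are touched, and each block
    product is a dense matrix multiply that BLAS executes near machine
    peak - the effect balanced against the fill-in of multiatom blocks."""
    C_blocks = {}
    for (i, k), Ab in A_blocks.items():
        for j in range(n_blocks):
            Bb = B_blocks.get((k, j))
            if Bb is None:
                continue                    # sparsity: skip absent blocks
            Cb = C_blocks.setdefault(
                (i, j), np.zeros((block_size, block_size)))
            Cb += Ab @ Bb                   # dense BLAS kernel per block
    return C_blocks

bs, nb = 50, 4                              # ~40-100 basis functions/block
A = {(0, 0): np.random.rand(bs, bs), (1, 2): np.random.rand(bs, bs)}
B = {(0, 1): np.random.rand(bs, bs), (2, 3): np.random.rand(bs, bs)}
C = blocked_sparse_matmul(A, B, nb, bs)     # stores blocks (0,1) and (1,3)
```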
Notes on the ExactPack Implementation of the DSD Rate Stick Solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaul, Ann
It has been shown above that the discretization scheme implemented in the ExactPack solver for the DSD Rate Stick equation is consistent with the Rate Stick PDE. In addition, a stability analysis has provided a CFL condition for a stable time step. Together, consistency and stability imply convergence of the scheme, which is expected to be close to first-order in time and second-order in space. It is understood that the nonlinearity of the underlying PDE will affect this rate somewhat. In the solver I implemented in ExactPack, I used the one-sided boundary condition described above at the outer boundary. In addition, I used 80% of the time step calculated in the stability analysis above. By making these two changes, I was able to implement a solver that calculates the solution without any arbitrary limits placed on the values of the curvature at the boundary. Thus, the calculation is driven directly by the conditions at the boundary as formulated in the DSD theory. The chosen scheme is completely coherent and defensible from a mathematical standpoint.
NASA Astrophysics Data System (ADS)
Avilova, I. P.; Krutilova, M. O.
2018-01-01
Economic growth is the main determinant of the trend toward increased greenhouse gas (GHG) emissions. The reduction of emissions and the stabilization of GHG levels in the atmosphere have therefore become urgent tasks for avoiding the worst predicted consequences of climate change. GHG emissions from the construction industry form a significant part of industrial GHG emissions and are expected to increase steadily. The problem can be addressed with the help of both economic and organizational restrictions, based on enhanced algorithms for calculating and penalizing environmental harm in the building industry. This study aims to quantify the GHG emissions caused by different structural schemes of reinforced-concrete frameworks during concrete casting. The results show that the proposed methodology enables a comparative analysis of alternative projects in residential housing that takes into account the environmental damage caused by the construction process. The study was carried out in the framework of the Program of flagship university development on the base of Belgorod State Technological University named after V.G. Shoukhov.
Proynov, Emil; Liu, Fenglai; Gan, Zhengting; Wang, Matthew; Kong, Jing
2015-01-01
We implement and compute the density functional nonadditive three-body dispersion interaction using a combination of the Tang-Karplus formalism and the exchange-dipole moment model of Becke and Johnson. The computation of the C9 dispersion coefficients is done in a non-empirical fashion. The obtained C9 values of a series of noble atom triplets agree well with highly accurate values in the literature. We also calculate the C9 values for a series of benzene trimers and find good agreement with high-level ab initio values reported recently in the literature. For the question of damping of the three-body dispersion at short distances, we propose two damping schemes and optimize them based on the benzene trimer data and on analytic potentials of He3 and Ar3 trimers fitted to the results of high-level wavefunction theories available in the literature. Both damping schemes respond well to the optimization of two parameters. PMID:26328836
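The nonadditive three-body dispersion energy in question is, in the usual Axilrod-Teller-Muto triple-dipole convention,

\[
E^{(3)}_{ABC} \;=\; C_9^{ABC}\,\frac{3\cos\alpha\,\cos\beta\,\cos\gamma + 1}{\bigl(r_{AB}\, r_{BC}\, r_{CA}\bigr)^{3}},
\]

where α, β, γ are the interior angles of the triangle formed by the three atoms and the r are its side lengths; the damping schemes proposed in the paper multiply this expression at short range.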
Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G
2014-09-05
A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.
Investigation of the transient fuel preburner manifold and combustor
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Farmer, Richard C.
1989-01-01
A computational fluid dynamics (CFD) model with finite rate reactions, FDNS, was developed to study the start transient of the Space Shuttle Main Engine (SSME) fuel preburner (FPB). FDNS is a time accurate, pressure based CFD code. An upwind scheme was employed for spatial discretization. The upwind scheme was based on second and fourth order central differencing with adaptive artificial dissipation. A state-of-the-art two-equation k-epsilon (T) turbulence model was employed for the turbulence calculation. A Padé Rational Solution (PARASOL) chemistry algorithm was coupled with the point implicit procedure. FDNS was benchmarked against three well documented experiments: a confined swirling coaxial jet, a non-reactive ramjet dump combustor, and a reactive ramjet dump combustor. Excellent comparisons were obtained for the benchmark cases. The code was then used to study the start transient of an axisymmetric SSME fuel preburner. Predicted transient operation of the preburner agrees well with experiment. Furthermore, it was also found that an appreciable amount of unburned oxygen entered the turbine stages.
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong
2013-02-01
A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of nonlinear solids placed within an incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flow and smoothed finite element methods to calculate the transient dynamic response of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective, and sufficiently general technique based on simple linear interpolation is presented, using Lagrangian fictitious fluid meshes that coincide with the moving and deforming solid meshes. Comparisons with referenced works, including experiments, show that the proposed 3D IS-FEM is stable, exhibits second-order spatial convergence, and is largely insensitive to the fluid-solid mesh size ratio over a wide range.
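A minimal sketch of the linear-interpolation coupling idea follows: a fluid quantity is evaluated at a point of the fictitious solid-coincident mesh through the linear shape functions of the enclosing four-node tetrahedron. The function name and the barycentric solve are illustrative, not the paper's code.

```python
import numpy as np

def interp_in_tet(x, tet_nodes, tet_values):
    """Linear interpolation of nodal values at point x inside a 4-node tet.

    tet_nodes  : (4, 3) vertex coordinates of the enclosing fluid element
    tet_values : (4, d) nodal values (e.g. fluid velocity) at those vertices
    """
    # Barycentric weights: x = sum_i w_i * node_i with sum_i w_i = 1.
    A = np.vstack([tet_nodes.T, np.ones(4)])   # 4x4 linear system
    b = np.append(x, 1.0)
    w = np.linalg.solve(A, b)                  # linear shape-function weights
    return w @ tet_values                      # interpolated value at x
```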
Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter
NASA Astrophysics Data System (ADS)
Nnolim, Uche A.
2016-07-01
An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering salt-and-pepper impulse noise. The implemented filter is robust to impulse noise ranging from low to high density levels. The algorithm involves a switching scheme, in addition to the unsymmetric trimmed mean/median deviation, to filter image noise while strongly preserving image edges regardless of impulse noise density (ND). It operates with threshold parameters selected manually or estimated adaptively from the image statistics. It is further combined with partial differential equation (PDE)-based AD for edge preservation at high NDs, enhancing the properties of the trimmed mean filter. Experimental results show that the proposed filter consistently outperforms the median filter and its variants, from simple to complex filter structures, especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enable the filter to avoid smoothing an uncorrupted image: filtering is activated only when impulse noise is present. Ultimately, these properties make the filter's combination with the AD algorithm a uniquely powerful edge-preserving smoothing filter at high impulse NDs.
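The core switching-plus-trimming mechanism can be sketched as follows; this simplified version detects extreme-valued pixels and replaces only those with a trimmed mean of their neighbors, omitting the paper's adaptive thresholds, deviation measure, and PDE-based diffusion stage.

```python
import numpy as np

def switching_trimmed_mean(img, lo=0, hi=255):
    """Replace only pixels flagged as salt/pepper impulses with the trimmed
    mean of the non-impulse neighbours in a 3x3 window (simplified sketch)."""
    out = img.astype(np.float64)                 # astype copies the image
    pad = np.pad(img, 1, mode='edge')
    noisy = (img == lo) | (img == hi)            # impulse detector (the switch)
    for i, j in zip(*np.nonzero(noisy)):
        win = pad[i:i + 3, j:j + 3].ravel()      # 3x3 neighbourhood
        good = win[(win != lo) & (win != hi)]    # trim extreme (impulse) values
        out[i, j] = good.mean() if good.size else win.mean()
    return out.astype(img.dtype)
```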
Wang, Long; Liu, Yong; Yin, Zengshan
2018-01-01
To achieve launch-on-demand for Operationally Responsive Space (ORS) missions, an intra-satellite wireless network (ISWN) is presented in this article. It provides a wireless, modularized scheme for intra-spacecraft sensing and data buses. By removing the wired data bus, the commercial off-the-shelf (COTS)-based wireless modular architecture reduces both the volume and the weight of the satellite platform, enabling rapid design and cost savings in development and launch. Based on an analysis of on-orbit data demand, a hybrid time division multiple access/carrier sense multiple access (TDMA/CSMA) protocol is proposed. It includes an improved clear channel assessment (CCA) mechanism and a traffic-adaptive slot allocation method. A Markov model is constructed to analyze the access process, and a detailed calculation covering unsaturated cases is given. Simulations show that the proposed protocol satisfies the demands and performs better than existing schemes. It helps to build a fully wireless satellite instead of the current wired ones and will contribute to providing dynamic space capabilities for ORS missions. PMID:29757243
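As a loose illustration of traffic-adaptive slot allocation in a hybrid TDMA/CSMA superframe (the paper's CCA mechanism and Markov analysis are not reproduced here), one plausible policy assigns TDMA slots in proportion to reported queue lengths and leaves the remainder as a CSMA contention period; all names and parameters below are hypothetical.

```python
import numpy as np

def allocate_slots(queue_len, n_slots=32, csma_min=4):
    """Traffic-adaptive slot split for a hybrid TDMA/CSMA superframe (sketch).

    queue_len : packets pending per node, reported in the previous superframe
    n_slots   : slots per superframe;  csma_min : slots kept for contention
    """
    demand = np.asarray(queue_len, dtype=float)
    tdma_total = n_slots - csma_min
    if demand.sum() == 0:                          # no backlog: share evenly
        share = np.full(len(demand), tdma_total // len(demand), dtype=int)
    else:                                          # backlog: share by demand
        share = np.floor(tdma_total * demand / demand.sum()).astype(int)
    schedule = {node: int(s) for node, s in enumerate(share)}
    schedule["csma"] = n_slots - int(share.sum())  # leftovers join contention
    return schedule
```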
NASA Astrophysics Data System (ADS)
Zhai, Guoqing; Li, Xiaofan
2015-04-01
The Bergeron-Findeisen process has for decades been simulated using parameterization schemes for the depositional growth of ice crystals with temperature-dependent, theoretically predicted parameters. Recently, Westbrook and Heymsfield (2011) recalculated these parameters using the laboratory data of Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. Three schemes parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995), and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments combining the three parameterization schemes with the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated case. Analysis of the root-mean-squared difference and the correlation coefficient between simulated and observed surface rain rates shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observed rain rate. Calculations of 5-day, model-domain mean rain rates reveal that all three schemes with the Takahashi laboratory-derived parameters tend to reduce the mean rain rate; the Krueger scheme with the Takahashi laboratory-derived parameters generates the mean rain rate closest to the observed mean. The decrease in mean rain rate caused by the Takahashi laboratory-derived parameters in the Krueger-scheme experiment is associated with reductions in the mean net condensation and the mean hydrometeor loss, which correspond to suppressed mean infrared radiative cooling due to enhanced cloud ice and snow in the upper troposphere.
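The parameterizations at issue are commonly written as a Koenig-type mass growth law, dm/dt = a1 * m**a2, with a1 and a2 tabulated as functions of temperature; the sketch below shows a forward-Euler update under that assumed form, with the parameter tables (theoretical vs. laboratory-derived) supplied by the caller.

```python
import numpy as np

def deposition_step(m, T, dt, table_T, table_a1, table_a2):
    """One forward-Euler step of dm/dt = a1(T) * m**a2(T) for ice deposition.

    m  : crystal mass (kg);  T : temperature (deg C);  dt : time step (s)
    table_* : tabulated temperature-dependent parameters (either the
              theoretical set or the Takahashi laboratory-derived set)
    """
    a1 = np.interp(T, table_T, table_a1)   # interpolate parameters at T
    a2 = np.interp(T, table_T, table_a2)
    return m + dt * a1 * m**a2             # depositional mass growth
```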
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one - for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often have less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis places no restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with complex amplitude ratio G(c), can easily be extended to arbitrarily large c: the complex amplitude ratio is then exp(-iNθ) G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of large-Δt total-variation-diminishing (TVD) constraints.
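The decomposition in the abstract translates directly into code: split the Courant number into its integer part N (an exact N-cell shift, contributing the pure phase exp(-iNθ)) and the remainder Δc < 1 (handled by any c ≤ 1 scheme, here first-order upwind for u > 0 on a periodic grid). This is a minimal sketch, not Leonard's full flux-based formulation.

```python
import numpy as np

def advect_large_c(phi, c):
    """One advection step at arbitrary Courant number c > 0 (periodic, u > 0).

    Splits c = N + dc with integer N and dc < 1: exact N-cell shift, then a
    standard c <= 1 scheme (first-order upwind) for the fractional part.
    """
    N = int(np.floor(c))
    dc = c - N
    phi = np.roll(phi, N)                        # exact translation by N cells
    return phi - dc * (phi - np.roll(phi, 1))    # upwind update with dc < 1
```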