Sample records for time-dependent calculation scheme

  1. Using time-dependent density functional theory in real time for calculating electronic transport

    NASA Astrophysics Data System (ADS)

    Schaffhauser, Philipp; Kümmel, Stephan

    2016-01-01

    We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.
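
    As a rough illustration of the ingredients involved, the following is a minimal one-dimensional sketch of wave-packet propagation with a complex absorbing potential at the grid edges and a current readout at a probe point. It uses a generic quadratic absorber with illustrative parameters, not the paired absorbing/antiabsorbing boundary construction of the paper.

      # Minimal 1D sketch: Crank-Nicolson propagation with a complex absorbing
      # potential (CAP) at the grid edges, and the current recorded at a probe
      # point.  Atomic units; all parameters are illustrative.
      import numpy as np

      N, dx, dt, nsteps = 600, 0.2, 0.05, 1200
      x = (np.arange(N) - N // 2) * dx

      # Kinetic energy -1/2 d^2/dx^2 by central finite differences.
      H = np.zeros((N, N), dtype=complex)
      np.fill_diagonal(H, 1.0 / dx**2)
      np.fill_diagonal(H[1:], -0.5 / dx**2)
      np.fill_diagonal(H[:, 1:], -0.5 / dx**2)

      # Quadratic CAP over the outer 20% of the box (assumed shape and strength).
      mask = np.zeros(N)
      edge = int(0.2 * N)
      mask[:edge] = np.linspace(1.0, 0.0, edge) ** 2
      mask[-edge:] = np.linspace(0.0, 1.0, edge) ** 2
      H -= 1j * 0.05 * np.diag(mask)

      # Crank-Nicolson: (1 + i dt/2 H) psi_new = (1 - i dt/2 H) psi_old.
      M = np.linalg.solve(np.eye(N) + 0.5j * dt * H, np.eye(N) - 0.5j * dt * H)

      # Gaussian wave packet launched toward the absorbing "drain".
      psi = np.exp(-((x + 20.0) ** 2) / 10.0 + 1.0j * x)
      psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

      probe = int(0.75 * N)              # grid index where the current is recorded
      current = []
      for _ in range(nsteps):
          psi = M @ psi
          dpsi = (psi[probe + 1] - psi[probe - 1]) / (2.0 * dx)
          current.append(np.imag(np.conj(psi[probe]) * dpsi))  # j = Im(psi* dpsi/dx)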

  2. Time dependent density functional calculation of plasmon response in clusters

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhang, Feng-Shou; Suraud, Eric

    2003-02-01

    We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.
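
    A typical post-processing step in such real-time calculations is to extract the photoabsorption cross section from the recorded dipole signal after a weak dipole kick. The sketch below assumes that workflow; the arrays t and dipole and the kick strength are placeholders, and the overall sign depends on the kick convention.

      # Cross section sigma(w) from a real-time dipole signal d(t) recorded after a
      # weak dipole kick of strength `kick` (atomic units, c ~ 137).  The sign of
      # alpha depends on the kick convention.
      import numpy as np

      def photoabsorption(t, dipole, kick, damping=0.005):
          dt = t[1] - t[0]
          d = (dipole - dipole[0]) * np.exp(-damping * t)       # remove offset, damp tail
          w = np.linspace(0.0, 0.5, 2000)                       # frequency grid (hartree)
          alpha = (np.exp(1j * np.outer(w, t)) @ d) * dt / kick # dynamic polarizability
          return w, 4.0 * np.pi * w * np.imag(alpha) / 137.036  # sigma(w) in a.u.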

  3. Watching excitons move: the time-dependent transition density matrix

    NASA Astrophysics Data System (ADS)

    Ullrich, Carsten

    2012-02-01

    Time-dependent density-functional theory allows one to calculate excitation energies and the associated transition densities in principle exactly. The transition density matrix (TDM) provides additional information on electron-hole localization and coherence of specific excitations of the many-body system. We have extended the TDM concept into the real-time domain in order to visualize the excited-state dynamics in conjugated molecules. The time-dependent TDM is defined as an implicit density functional, and can be approximately obtained from the time-dependent Kohn-Sham orbitals. The quality of this approximation is assessed in simple model systems. A computational scheme for real molecular systems is presented: the time-dependent Kohn-Sham equations are solved with the OCTOPUS code and the time-dependent Kohn-Sham TDM is calculated using a spatial partitioning scheme. The method is applied to show in real time how locally created electron-hole pairs spread out over neighboring conjugated molecular chains. The coupling mechanism, electron-hole coherence, and the possibility of charge separation are discussed.
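
    A simple Kohn-Sham-based construction of such a quantity, given here only as an illustrative assumption rather than the paper's exact definition, is the difference between the time-dependent and ground-state Kohn-Sham one-particle density matrices,

      \Gamma(\mathbf{r},\mathbf{r}';t) = \sum_{j=1}^{N_{\mathrm{occ}}}
        \left[ \varphi_j(\mathbf{r},t)\,\varphi_j^{*}(\mathbf{r}',t)
             - \varphi_j^{(0)}(\mathbf{r})\,\varphi_j^{(0)*}(\mathbf{r}') \right],

    whose diagonal (r = r') gives the induced density, while the decay of the off-diagonal elements with |r - r'| provides a measure of the electron-hole coherence length.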

  4. Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1991-01-01

    Calculations of steady and unsteady, transonic, turbulent channel flows with a time accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with the use of much smaller artificial dissipations than those used in the previous steady flow solver for steady and unsteady channel flows. The capability to solve time dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory, channel flows.

  5. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  6. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  7. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two dimensional, time dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  8. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows, is described. The program solves the two dimensional, time dependent Navier-Stokes equations. Turbulence is modeled with either a mixing length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  9. Self-consistent DFT+U method for real-space time-dependent density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Tancogne-Dejean, Nicolas; Oliveira, Micael J. T.; Rubio, Angel

    2017-12-01

    We implemented various DFT+U schemes, including the ACBN0 functional of Agapito, Curtarolo, and Buongiorno Nardelli, a self-consistent density-functional version of the DFT+U method [Phys. Rev. X 5, 011006 (2015), 10.1103/PhysRevX.5.011006], within the massively parallel real-space time-dependent density functional theory (TDDFT) code Octopus. We further extended the method to the calculation of response functions with real-time TDDFT+U and to the description of noncollinear spin systems. The implementation is tested by investigating the ground-state and optical properties of various transition-metal oxides, bulk topological insulators, and molecules. Our results are found to be in good agreement with previously published results for both the electronic band structure and structural properties. The self-consistently calculated values of U and J are also in good agreement with the values commonly used in the literature. We found that the time-dependent extension of the self-consistent DFT+U method yields improved optical properties when compared to the empirical TDDFT+U scheme. This work thus opens a different theoretical framework for addressing the nonequilibrium properties of correlated systems.

  10. Propagators for the Time-Dependent Kohn-Sham Equations: Multistep, Runge-Kutta, Exponential Runge-Kutta, and Commutator Free Magnus Methods.

    PubMed

    Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto

    2018-05-09

    We examine various integration schemes for the time-dependent Kohn-Sham equations. Unlike the time-dependent Schrödinger equation, this set of equations is nonlinear, due to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and the commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost versus accuracy. The clear winner, in terms of robustness, simplicity, and efficiency, is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
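
    For reference, a commonly quoted form of the fourth-order commutator-free Magnus propagator is given below; the pairing of coefficients with the two exponentials and their ordering vary between conventions, so this is indicative rather than the exact integrator selected in the paper.

      U(t+\Delta t, t) \approx
        \exp\!\big[\Delta t\,(\alpha_1 A_1 + \alpha_2 A_2)\big]\,
        \exp\!\big[\Delta t\,(\alpha_2 A_1 + \alpha_1 A_2)\big],
      \qquad A_j = -\tfrac{i}{\hbar}\,H\!\big(t + c_j\,\Delta t\big),

      c_{1,2} = \tfrac{1}{2} \mp \tfrac{\sqrt{3}}{6},
      \qquad \alpha_{1,2} = \tfrac{3 \mp 2\sqrt{3}}{12}.

    In the Kohn-Sham case the Hamiltonians at the two Gauss points depend on the yet-unknown density and are typically obtained by extrapolation or a predictor step.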

  11. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  12. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analyses and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller than in the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
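
    The single-step kernels compared in such studies have the generic form sketched below; wind(x, t) is a hypothetical interpolator standing in for the meteorological fields, and only the explicit midpoint and classical fourth-order Runge-Kutta steps are shown.

      # Two trajectory-step kernels for dx/dt = v(x, t): explicit midpoint (2nd
      # order) and classical Runge-Kutta (4th order).  `wind(x, t)` is a
      # hypothetical interpolator returning the wind vector at position x and
      # time t; x may be a scalar or a numpy array of coordinates.

      def step_midpoint(x, t, dt, wind):
          k1 = wind(x, t)
          return x + dt * wind(x + 0.5 * dt * k1, t + 0.5 * dt)

      def step_rk4(x, t, dt, wind):
          k1 = wind(x, t)
          k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
          k3 = wind(x + 0.5 * dt * k2, t + 0.5 * dt)
          k4 = wind(x + dt * k3, t + dt)
          return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0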

  13. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme.

    PubMed

    Li, Shaohong L; Truhlar, Donald G

    2015-07-14

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  14. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaohong L.; Truhlar, Donald G.

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  15. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme

    DOE PAGES

    Li, Shaohong L.; Truhlar, Donald G.

    2015-05-22

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  16. Global properties in an experimental realization of time-delayed feedback control with an unstable control loop.

    PubMed

    Höhne, Klaus; Shirahama, Hiroyuki; Choe, Chol-Ung; Benner, Hartmut; Pyragas, Kestutis; Just, Wolfram

    2007-05-25

    We demonstrate by electronic circuit experiments the feasibility of an unstable control loop to stabilize torsion-free orbits by time-delayed feedback control. Corresponding analytical normal form calculations and numerical simulations reveal a severe dependence of the basin of attraction on the particular coupling scheme of the control force. Such theoretical predictions are confirmed by the experiments and emphasize the importance of the coupling scheme for the global control performance.

  17. An extrapolation scheme for solid-state NMR chemical shift calculations

    NASA Astrophysics Data System (ADS)

    Nakajima, Takahito

    2017-06-01

    Conventional quantum chemical and solid-state physics approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme satisfactorily yields solid-state NMR magnetic shielding constants. With the extrapolation scheme, the estimated values depend only weakly on the low-level density functional theory calculation. Our approach is thus efficient, because only a rough low-level calculation needs to be performed within the extrapolation scheme.

  18. Optimised effective potential for ground states, excited states, and time-dependent phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross, E.K.U.

    1996-12-31

    (1) The optimized effective potential method is a variant of the traditional Kohn-Sham scheme. In this variant, the exchange-correlation energy E_xc is an explicit functional of single-particle orbitals. The exchange-correlation potential, given as usual by the functional derivative v_xc = δE_xc/δρ, then satisfies an integral equation involving the single-particle orbitals. This integral equation is solved semi-analytically using a scheme recently proposed by Krieger, Li and Iafrate. If the exact (Fock) exchange-energy functional is employed together with the Colle-Salvetti orbital functional for the correlation energy, the mean absolute deviation of the resulting ground-state energies from the exact nonrelativistic values is CT mH for the first-row atoms, as compared to 4.5 mH in a state-of-the-art CI calculation. The proposed scheme is thus significantly more accurate than the conventional Kohn-Sham method while the numerical effort involved is about the same as for an ordinary Hartree-Fock calculation. (2) A time-dependent generalization of the optimized-potential method is presented and applied to the linear-response regime. Since time-dependent density functional theory leads to a formally exact representation of the frequency-dependent linear density response and since the latter, as a function of frequency, has poles at the excitation energies of the fully interacting system, the formalism is suitable for the calculation of excitation energies. A simple additive correction to the Kohn-Sham single-particle excitation energies will be deduced and first results for atomic and molecular singlet and triplet excitation energies will be presented. (3) Beyond the regime of linear response, the time-dependent optimized-potential method is employed to describe atoms in strong femtosecond laser pulses. Ionization yields and harmonic spectra will be presented and compared with experimental data.

  19. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Moore, J.; Nicholson, S.; Moore, J. G.

    1985-01-01

    Research at NASA Lewis Research Center gave the opportunity to incorporate new control volumes in the Denton 3-D finite-volume time marching code. For duct flows, the new control volumes require no transverse smoothing and this allows calculations with large transverse gradients in properties without significant numerical total pressure losses. Possibilities for improving the Denton code to obtain better distributions of properties through shocks were demonstrated. Much better total pressure distributions through shocks are obtained when the interpolated effective pressure, needed to stabilize the solution procedure, is used to calculate the total pressure. This simple change largely eliminates the undershoot in total pressure downstream of a shock. Overshoots and undershoots in total pressure can then be further reduced by a factor of 10 by adopting the effective density method, rather than the effective pressure method. Use of a Mach number dependent interpolation scheme for pressure then removes the overshoot in static pressure downstream of a shock. The stability of interpolation schemes used for the calculation of effective density is analyzed and a Mach number dependent scheme is developed, combining the advantages of the correct perfect gas equation for subsonic flow with the stability of 2-point and 3-point interpolation schemes for supersonic flow.

  20. Time-dependent first-principles study of angle-resolved secondary electron emission from atomic sheets

    NASA Astrophysics Data System (ADS)

    Ueda, Yoshihiro; Suzuki, Yasumitsu; Watanabe, Kazuyuki

    2018-02-01

    Angle-resolved secondary electron emission (ARSEE) spectra were analyzed for two-dimensional atomic sheets using a time-dependent first-principles simulation of electron scattering. We demonstrate that the calculated ARSEE spectra capture the unoccupied band structure of the atomic sheets. The excitation dynamics that lead to SEE have also been revealed by the time-dependent Kohn-Sham decomposition scheme. In the present study, the mechanism for the experimentally observed ARSEE from atomic sheets is elucidated with respect to both energetics and the dynamical aspects of SEE.

  1. Open-ended recursive calculation of single residues of response functions for perturbation-dependent basis sets.

    PubMed

    Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth

    2015-10-13

    We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for other perturbations than an electric field. We apply this scheme for the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.

  2. Introduction of the Floquet-Magnus expansion in solid-state nuclear magnetic resonance spectroscopy.

    PubMed

    Mananga, Eugène S; Charpentier, Thibault

    2011-07-28

    In this article, we present an alternative expansion scheme, the Floquet-Magnus expansion (FME), used to solve a time-dependent linear differential equation, which is a central problem in quantum physics in general and solid-state nuclear magnetic resonance (NMR) in particular. The methods commonly used to treat theoretical problems in solid-state NMR are average Hamiltonian theory (AHT) and Floquet theory (FT), which have been successful for designing sophisticated pulse sequences and understanding different experiments. To the best of our knowledge, this is the first report of the FME scheme in the context of solid-state NMR, and we compare this approach with other series expansions. We present a modified FME scheme highlighting the importance of the (time-periodic) boundary conditions. This modified scheme greatly simplifies the calculation of higher-order terms and is shown to be equivalent to Floquet theory (single- or multimode time dependence) while allowing one to derive the effective Hamiltonian in Hilbert space. Basic applications of the FME scheme are described and compared to previous treatments based on AHT, FT, and static perturbation theory. We also discuss the convergence properties of the three schemes (AHT, FT, and FME) and present the relevant references.
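
    For comparison, the leading terms of the familiar average-Hamiltonian (Magnus) expansion over one period T of a time-periodic Hamiltonian H(t) (with ħ = 1) are

      \bar{H}^{(1)} = \frac{1}{T}\int_0^T H(t)\,dt,
      \qquad
      \bar{H}^{(2)} = \frac{-i}{2T}\int_0^T\! dt_2 \int_0^{t_2}\! dt_1\,\big[H(t_2),\,H(t_1)\big];

    the FME supplements terms of this kind with functions encoding the (time-periodic) boundary conditions, as discussed in the article.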

  3. A new scheme of the time-domain fluorescence tomography for a semi-infinite turbid medium

    NASA Astrophysics Data System (ADS)

    Prieto, Kernel; Nishimura, Goro

    2017-04-01

    A new scheme for reconstructing a fluorophore target embedded in a semi-infinite medium was proposed and evaluated. In this scheme, we neglected the presence of the fluorophore target for the excitation light and used an analytical solution of the time-dependent radiative transfer equation (RTE) for the excitation light in a homogeneous semi-infinite medium instead of solving the RTE numerically in the forward calculation. The inverse problem of imaging the fluorophore target was solved using the Landweber-Kaczmarz method with the concept of adjoint fields. Numerical experiments show that the proposed scheme provides acceptable reconstructions of the shape and location of the target. The computation times of the forward problem and of the whole reconstruction process were reduced by about 40 and 15%, respectively.
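
    Stripped of the adjoint-field radiative-transfer machinery of the paper, the underlying update is a Landweber iteration (here linearized); the Kaczmarz variant cycles it over individual source-detector pairs. All names below are illustrative.

      # Generic linearized Landweber iteration: x_{k+1} = P(x_k - omega * J^T (J x_k - y)),
      # where J is a (linearized) forward operator, y the measured data, and P a
      # projection onto non-negative values.  The Kaczmarz variant and the
      # time-domain RTE forward model of the paper are omitted here.
      import numpy as np

      def landweber(J, y, omega, n_iter, x0=None):
          x = np.zeros(J.shape[1]) if x0 is None else x0.copy()
          for _ in range(n_iter):
              residual = J @ x - y
              x = np.maximum(x - omega * (J.T @ residual), 0.0)  # enforce non-negativity
          return x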

  4. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schrödinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.

  5. FELIX-1.0: A finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    NASA Astrophysics Data System (ADS)

    Regnier, D.; Verrière, M.; Dubray, N.; Schunck, N.

    2016-03-01

    We describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
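
    For the collective Schrödinger-like equation iħ ∂_t g = Ĥ_coll g solved here, the Crank-Nicolson step takes the familiar implicit form

      \Big(\mathbb{1} + \tfrac{i\,\Delta t}{2\hbar}\,\hat{H}_{\mathrm{coll}}\Big)\, g^{\,n+1}
      = \Big(\mathbb{1} - \tfrac{i\,\Delta t}{2\hbar}\,\hat{H}_{\mathrm{coll}}\Big)\, g^{\,n},

    which is unconditionally stable and norm-conserving for a Hermitian Ĥ_coll; with a Galerkin finite-element basis the identity is replaced by the overlap (mass) matrix.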

  6. Diabatization for Time-Dependent Density Functional Theory: Exciton Transfers and Related Conical Intersections.

    PubMed

    Tamura, Hiroyuki

    2016-11-23

    Intermolecular exciton transfers and related conical intersections are analyzed by diabatization for time-dependent density functional theory. The diabatic states are expressed as a linear combination of the adiabatic states so as to emulate the well-defined reference states. The singlet exciton coupling calculated by the diabatization scheme includes contributions from the Coulomb (Förster) and electron exchange (Dexter) couplings. For triplet exciton transfers, the Dexter coupling, charge transfer integral, and diabatic potentials of stacked molecules are calculated for analyzing direct and superexchange pathways. We discuss some topologies of molecular aggregates that induce conical intersections on the vanishing points of the exciton coupling, namely boundary of H- and J-aggregates and T-shape aggregates, as well as canceled exciton coupling to the bright state of H-aggregate, i.e., selective exciton transfer to the dark state. The diabatization scheme automatically accounts for the Berry phase by fixing the signs of reference states while scanning the coordinates.

  7. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For dynamics driven by a time-dependent external potential, the gain in accuracy of the exponential integrator methods is smaller, but they still match or outperform the best of the conventional methods tested.
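
    As a reminder of the basic idea (generic first-order forms, not the specific higher-order variants benchmarked in the paper): for an evolution equation ∂_t u = Lu + N(u, t) split into a linear part L and a remainder N, the exponential-time-differencing step and the integrating-factor substitution read

      u_{n+1} = e^{L\Delta t}\,u_n + L^{-1}\big(e^{L\Delta t} - \mathbb{1}\big)\,N(u_n, t_n),
      \qquad
      v(t) = e^{-Lt}\,u(t)\ \Rightarrow\ \partial_t v = e^{-Lt}\,N\big(e^{Lt}v,\,t\big),

    with higher-order versions built from Runge-Kutta-like combinations of the associated φ-functions.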

  8. Time-Dependent Thomas-Fermi Approach for Electron Dynamics in Metal Clusters

    NASA Astrophysics Data System (ADS)

    Domps, A.; Reinhard, P.-G.; Suraud, E.

    1998-06-01

    We propose a time-dependent Thomas-Fermi approach to the (nonlinear) dynamics of many-fermion systems. The approach relies on a hydrodynamical picture describing the system in terms of collective flow. We investigate in particular an application to electron dynamics in metal clusters. We make extensive comparisons with fully fledged quantal dynamical calculations and find overall good agreement. The approach thus provides a reliable and inexpensive scheme to study the electronic response of large metal clusters.

  9. Conservative and bounded volume-of-fluid advection on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ivey, Christopher B.; Moin, Parviz

    2017-12-01

    This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, which differs from contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in efforts to satisfy conservation. Instead of prescribing complicated flux-polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to calculate the extrusion during the next iteration. The choice of nominal flux polyhedron impacts the cost and accuracy of the scheme; however, it does not impact the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. The tests are conducted on the median duals of a quadrilateral and triangular primal mesh, in two dimensions, and on the median duals of a hexahedral, wedge and tetrahedral primal mesh, in three dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice of nominal flux polyhedron, the NIFPA scheme exhibited accuracies ranging from zeroth to second order and calculation times that differed by orders of magnitude. For the nominal flux polyhedra which demonstrated second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to the traditional topologically complex second-order accurate VOF advection scheme.

  10. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    NASA Astrophysics Data System (ADS)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated from the radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena, treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distribution, (e) space- and time-dependent activation process of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can be easily extended to include all the situations of interest for solving the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions. The physics of the problems addressed by the new technique and the increased accuracy versus non-space- and time-dependent methods are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.

  11. VNAP2: A Computer Program for Computation of Two-dimensional, Time-dependent, Compressible, Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Cline, M. C.

    1981-01-01

    A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
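
    The two-step structure of the MacCormack scheme is easiest to see in a one-dimensional scalar analogue; the sketch below applies it to the inviscid Burgers equation and is not VNAP2's two-dimensional Navier-Stokes implementation, turbulence models, or characteristic boundary treatment.

      # 1D scalar analogue of the MacCormack predictor-corrector scheme applied to
      # the inviscid Burgers equation u_t + (u^2/2)_x = 0: forward-differenced
      # predictor followed by a backward-differenced corrector.
      import numpy as np

      def flux(u):
          return 0.5 * u**2

      def maccormack_step(u, dt, dx):
          up = u.copy()
          # predictor: forward difference of the flux
          up[:-1] = u[:-1] - dt / dx * (flux(u[1:]) - flux(u[:-1]))
          un = u.copy()
          # corrector: backward difference of the flux of the predicted values
          un[1:-1] = 0.5 * (u[1:-1] + up[1:-1]
                            - dt / dx * (flux(up[1:-1]) - flux(up[:-2])))
          return un

      # usage sketch: a smooth initial profile advanced for 200 steps
      x = np.linspace(0.0, 2.0 * np.pi, 201)
      u = 1.0 + 0.3 * np.sin(x)
      for _ in range(200):
          u = maccormack_step(u, dt=0.01, dx=x[1] - x[0])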

  12. FELIX-1.0: A finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE PAGES

    Regnier, D.; Verriere, M.; Dubray, N.; ...

    2015-11-30

    In this study, we describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.

  13. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    DOE PAGES

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2015-10-01

    Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.

  14. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.

  15. Scheme dependence and transverse momentum distribution interpretation of Collins-Soper-Sterman resummation

    NASA Astrophysics Data System (ADS)

    Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2015-11-01

    Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. We further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are consistent with each other and with that of the standard CSS formalism.

  16. Structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.

    1994-01-01

    The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.

  17. An efficient method for quantum transport simulations in the time domain

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Yam, C.-Y.; Frauenheim, Th.; Chen, G. H.; Niehaus, T. A.

    2011-11-01

    An approximate method based on adiabatic time dependent density functional theory (TDDFT) is presented, that allows for the description of the electron dynamics in nanoscale junctions under arbitrary time dependent external potentials. The density matrix of the device region is propagated according to the Liouville-von Neumann equation. The semi-infinite leads give rise to dissipative terms in the equation of motion which are calculated from first principles in the wide band limit. In contrast to earlier ab initio implementations of this formalism, the Hamiltonian is here approximated in the spirit of the density functional based tight-binding (DFTB) method. Results are presented for two prototypical molecular devices and compared to full TDDFT calculations. The temporal profile of the current traces is qualitatively well captured by the DFTB scheme. Steady state currents show considerable variations, both in comparison of approximate and full TDDFT, but also among TDDFT calculations with different basis sets.

  18. The implementation of reverse Kessler warm rain scheme for radar reflectivity assimilation using a nudging approach in New Zealand

    NASA Astrophysics Data System (ADS)

    Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke

    2014-05-01

    Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting Model (WRF) and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One of the reasons for developing such a scheme, rather than using 4D-Var for example, is that variational radar assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The whole scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is then used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is then calculated based on the saturation adjustment process; (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases which occurred in the summer of 2011/2012 in New Zealand were used to evaluate the new scheme; forecast scores showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
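
    Step (iv), the Newtonian relaxation itself, has the simple form sketched below; the variable names and relaxation timescale are illustrative, and the reverse-Kessler conversion of reflectivity to the target mixing ratio (steps i-iii) is not reproduced.

      # Newtonian relaxation (nudging) of the model water-vapor mixing ratio qv
      # toward a radar-derived target qv_target, applied each model time step and
      # capped at saturation.  qv, qv_target, qv_sat are arrays on the model grid;
      # the timescale tau is illustrative.
      import numpy as np

      def nudge_qv(qv, qv_target, qv_sat, dt, tau=1800.0):
          qv_new = qv + (qv_target - qv) * dt / tau
          return np.clip(qv_new, 0.0, qv_sat)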

  19. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with a brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.
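
    A minimal sketch of the brute-force FFT evaluation, assuming the radiation integral in the form I(u,v) = ∬ A(x,y) exp[jk(ux+vy)] dx dy over a sampled, zero-padded aperture field; the sampling relation, sign, and shift conventions below are assumptions of this sketch.

      # Brute-force FFT evaluation of I(u, v) = Int A(x, y) exp(+j k (u x + v y)) dx dy
      # on a zero-padded grid of aperture-field samples A.  With this sign convention
      # the samples come out of an inverse FFT; the (u, v) spacing is lambda/(N*dx),
      # and the phase is referenced to the aperture corner.
      import numpy as np

      def radiation_pattern(A, dx, dy, lam, pad=4):
          ny, nx = A.shape
          Npx, Npy = pad * nx, pad * ny
          Apad = np.zeros((Npy, Npx), dtype=complex)
          Apad[:ny, :nx] = A
          patt = np.fft.fftshift(np.fft.ifft2(Apad)) * (Npx * Npy * dx * dy)
          u = np.fft.fftshift(np.fft.fftfreq(Npx, d=dx)) * lam   # u = sin(theta)cos(phi)
          v = np.fft.fftshift(np.fft.fftfreq(Npy, d=dy)) * lam
          return u, v, patt

      # usage sketch: uniformly illuminated square aperture, 10 wavelengths on a side
      lam = 1.0
      dx = dy = lam / 4.0
      A = np.ones((40, 40), dtype=complex)
      u, v, patt = radiation_pattern(A, dx, dy, lam)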

  20. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acosta, R.

    1985-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with a brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.

  1. Numerical solution of nonlinear partial differential equations of mixed type. [finite difference approximation

    NASA Technical Reports Server (NTRS)

    Jameson, A.

    1976-01-01

    A review is presented of some recently developed numerical methods for the solution of nonlinear equations of mixed type. The methods considered use finite difference approximations to the differential equation. Central difference formulas are employed in the subsonic zone and upwind difference formulas are used in the supersonic zone. The relaxation method for the small disturbance equation is discussed and a description is given of difference schemes for the potential flow equation in quasi-linear form. Attention is also given to difference schemes for the potential flow equation in conservation form, the analysis of relaxation schemes by the time dependent analogy, the accelerated iterative method, and three-dimensional calculations.
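
    As an illustration of the type-dependent differencing referred to here (the standard Murman-Cole-type switch for the small-disturbance equation, not necessarily the exact operators reviewed), the streamwise second derivative of the potential is differenced as

      \text{subsonic point:}\quad (\varphi_{xx})_i \approx \frac{\varphi_{i+1} - 2\varphi_i + \varphi_{i-1}}{\Delta x^2},
      \qquad
      \text{supersonic point:}\quad (\varphi_{xx})_i \approx \frac{\varphi_{i} - 2\varphi_{i-1} + \varphi_{i-2}}{\Delta x^2},

    so that the numerical domain of dependence follows the characteristics in the supersonic zone.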

  2. Calculations of Hubbard U from first-principles

    NASA Astrophysics Data System (ADS)

    Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.

    2006-09-01

    The Hubbard U of the 3d transition metal series as well as SrVO3, YTiO3, Ce, and Gd has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but in some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made comparisons with the constrained local density approximation (LDA) method and found some discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.
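
    Schemes of this constrained-RPA type obtain U as a matrix element of a partially screened interaction; schematically (with notation assumed here for illustration),

      W_r(\omega) = \big[\,\mathbb{1} - v\,P_r(\omega)\,\big]^{-1} v,
      \qquad P_r(\omega) = P_{\mathrm{full}}(\omega) - P_d(\omega),
      \qquad U(\omega) = \big\langle \phi_d\phi_d \big|\, W_r(\omega)\, \big|\phi_d\phi_d \big\rangle,

    where P_d collects the polarization channels within the correlated (e.g. 3d) subspace that are excluded from the screening and φ_d are the localized orbitals spanning that subspace.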

  3. A two-dimensional, time-dependent model of suspended sediment transport and bed reworking for continental shelves

    USGS Publications Warehouse

    Harris, C.K.; Wiberg, P.L.

    2001-01-01

    A two-dimensional, time-dependent solution to the transport equation is formulated to account for advection and diffusion of sediment suspended in the bottom boundary layer of continental shelves. This model utilizes a semi-implicit, upwind-differencing scheme to solve the advection-diffusion equation across a two-dimensional transect that is configured so that one dimension is the vertical and the other is a horizontal dimension usually aligned perpendicular to shelf bathymetry. The model calculates suspended sediment concentration and flux, and requires as input wave properties, current velocities, sediment size distributions, and hydrodynamic sediment properties. From the calculated two-dimensional suspended sediment fluxes, we quantify the redistribution of shelf sediment, bed erosion, and deposition for several sediment sizes during resuspension events. The two-dimensional, time-dependent approach directly accounts for cross-shelf gradients in bed shear stress and sediment properties, as well as transport that occurs before steady-state suspended sediment concentrations have been attained. By including the vertical dimension in the calculations, we avoid depth-averaging suspended sediment concentrations and fluxes, and directly account for differences in transport rates and directions for fine and coarse sediment in the bottom boundary layer. A flux condition is used as the bottom boundary condition for the transport equation in order to capture the time dependence of the suspended sediment field. Model calculations demonstrate the significance of both time-dependent and spatial terms on transport and depositional patterns on continental shelves.
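
    A one-dimensional, fully explicit analogue of a single advective-diffusive update is sketched below for orientation; the scheme of the paper is two-dimensional and semi-implicit, so this only illustrates the upwind/central structure with illustrative variable names.

      # One explicit step of dC/dt = -w dC/dz + d/dz(K dC/dz) for a suspended-
      # sediment concentration C(z), with constant settling/advection velocity w
      # and diffusivity K: first-order upwind advection, central diffusion.
      import numpy as np

      def advect_diffuse_step(C, w, K, dz, dt):
          Cn = C.copy()
          if w >= 0.0:                                   # upwind difference
              adv = w * (C[1:-1] - C[:-2]) / dz
          else:
              adv = w * (C[2:] - C[1:-1]) / dz
          diff = K * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dz**2
          Cn[1:-1] = C[1:-1] + dt * (-adv + diff)
          return Cn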

  4. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    PubMed

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in speedups of severalfold. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone.

  5. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250

  6. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    PubMed

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol(-1)). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  7. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol-1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.
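
    The box-size dependence described above can be illustrated with a toy extrapolation: the leading periodicity artefact for a net-charge insertion scales roughly as the inverse box edge, so raw charging free energies computed at several box sizes can be fitted against 1/L and extrapolated to the infinite-box limit. The sketch below only illustrates that idea with made-up numbers; it is not the numerical or analytical correction scheme of the paper, which relies on Poisson-Boltzmann calculations.

```python
import numpy as np

# Hypothetical raw charging free energies (kJ/mol) from MD runs in cubic
# boxes of different edge length L (nm); all values are made up for illustration.
L = np.array([7.42, 8.5, 9.7, 11.02])          # box edges (nm)
dG_raw = np.array([-250.0, -253.5, -256.2, -258.3])

# The leading periodicity artefact for a net charge scales roughly as 1/L,
# so fit dG_raw = dG_infinity + slope / L and read off the intercept.
slope, dG_inf = np.polyfit(1.0 / L, dG_raw, 1)
print(f"extrapolated infinite-box charging free energy: {dG_inf:.1f} kJ/mol")
```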

  8. Lattice effects of surface cell: Multilayer multiconfiguration time-dependent Hartree study on surface scattering of CO/Cu(100)

    NASA Astrophysics Data System (ADS)

    Meng, Qingyong; Meyer, Hans-Dieter

    2017-05-01

    To study the scattering of CO off a movable Cu(100) surface, extensive multilayer multiconfiguration time-dependent Hartree (ML-MCTDH) calculations are performed based on the SAP [R. Marquardt et al., J. Chem. Phys. 132, 074108 (2010)] potential energy surface in conjunction with a recently developed expansion model [Q. Meng and H.-D. Meyer, J. Chem. Phys. 143, 164310 (2015)] for including lattice motion. The surface vibration potential is constructed by a sum of Morse potentials where the parameters are determined by simulating the vibrational energies of a clean Cu(100) surface. Having constructed the total Hamiltonian, extensive dynamical calculations in both time-independent and time-dependent schemes are performed. Two-layer MCTDH (i.e., normal MCTDH) block-improved-relaxations (time-independent scheme) show that increasing the number of included surface vibrational dimensions lets the vibrational energies of CO/Cu(100) decrease for the frustrated translation (T mode), which is of low energy but increase those of the frustrated rotation (R mode) and the CO-Cu stretch (S mode), whose vibrational energies are larger than the energies of the in-plane surface vibrations (˜79 cm-1). This energy-shifting behavior was predicted and discussed by a simple model in our previous publication [Q. Meng and H.-D. Meyer, J. Chem. Phys. 143, 164310 (2015)]. By the flux analysis of the MCTDH/ML-MCTDH propagated wave packets, we calculated the sticking probabilities for the X + 0D, X + 1D, X + 3D, X + 5D, and X + 15D systems, where "X" stands for the used dimensionality of the CO/rigid-surface system and the second entry denotes the number of surface degrees of freedom included. From these sticking probabilities, the X + 5D/15D calculations predict a slower decrease of sticking with increasing energy as compared to the sticking of the X + 0D/1D/3D calculations. This is because the translational energy of CO is more easily transferred to surface vibrations, when the vibrational dimensionality of the surface is enlarged.

  9. Solution of 3-dimensional time-dependent viscous flows. Part 3: Application to turbulent and unsteady flows

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1982-01-01

    A numerical scheme is developed for solving the time dependent, three dimensional compressible viscous flow equations to be used as an aid in the design of helicopter rotors. In order to further investigate the numerical procedure, the computer code developed to solve an approximate form of the three dimensional unsteady Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is tested. Results of calculations are presented for several two dimensional boundary layer flows including steady turbulent and unsteady laminar cases. A comparison of fourth-order and second-order solutions indicates that increased accuracy can be obtained without any significant increase in cost (run time). The results of the computations also indicate that the computer code can be applied to more complex flows such as those encountered on rotating airfoils. The geometry of a symmetric NACA four digit airfoil is considered and the appropriate geometrical properties are computed.

  10. Spectral-based propagation schemes for time-dependent quantum systems with application to carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Chen, Zuojing; Polizzi, Eric

    2010-11-01

    Effective modeling and numerical spectral-based propagation schemes are proposed for addressing the challenges in time-dependent quantum simulations of systems ranging from atoms, molecules, and nanostructures to emerging nanoelectronic devices. While time-dependent Hamiltonian problems can be formally solved by propagating the solutions along tiny simulation time steps, a direct numerical treatment is often considered too computationally demanding. In this paper, however, we propose to go beyond these limitations by introducing high-performance numerical propagation schemes to compute the solution of the time-ordered evolution operator. In addition to the direct Hamiltonian diagonalizations that can be efficiently performed using the new eigenvalue solver FEAST, we have designed a Gaussian propagation scheme and a basis-transformed propagation scheme (BTPS), which allow the simulation time needed per time interval to be reduced considerably. It is shown that BTPS offers the best computational efficiency, opening new perspectives in time-dependent simulations. Finally, these numerical schemes are applied to study the ac response of a (5,5) carbon nanotube within a three-dimensional real-space mesh framework.
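
    For a Hamiltonian that is constant over a simulation time step (or piecewise constant, as in the time-ordered evolution operator mentioned above), propagation can be carried out by a single diagonalization followed by phase multiplication in the eigenbasis. The sketch below illustrates that generic spectral idea with a small dense NumPy Hamiltonian; it is not the FEAST-based, Gaussian, or BTPS schemes of the paper.

```python
import numpy as np

def spectral_propagate(H, psi0, dt, nsteps):
    """Propagate psi under a time-independent Hamiltonian H (Hermitian matrix)
    by diagonalizing once and applying exp(-i E dt) in the eigenbasis at every
    step (atomic units, hbar = 1)."""
    E, V = np.linalg.eigh(H)              # one-time diagonalization
    phases = np.exp(-1j * E * dt)         # per-eigenstate phase factor
    c = V.conj().T @ psi0                 # expand psi0 in the eigenbasis
    states = []
    for _ in range(nsteps):
        c = phases * c                    # advance one time step
        states.append(V @ c)              # transform back to the original basis
    return np.array(states)

# Tiny two-level example with an illustrative Hamiltonian
H = np.array([[0.0, 0.1], [0.1, 1.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)
trajectory = spectral_propagate(H, psi0, dt=0.1, nsteps=100)
```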

  11. Computational scheme for pH-dependent binding free energy calculation with explicit solvent.

    PubMed

    Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R

    2016-01-01

    We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, the effect of pH has been generally neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state or releasing a single protonation state to multiple states, the pH dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol(-1) . We also discuss the characteristics of three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
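
    The pH dependence itself can be illustrated in the limiting case of a ligand with a single titratable site: constraining to, or releasing from, the protonated state contributes -kT ln of the protonation partition function of the free and of the bound ligand, which reduces to the familiar thermodynamic linkage formula sketched below. The pKa values and reference binding free energy are illustrative placeholders, and this sketch is not the EDS/HREM constant-pH machinery of the paper.

```python
import numpy as np

RT = 0.593  # kcal/mol at ~298 K

def dG_bind_pH(dG_bind_protonated, pKa_free, pKa_bound, pH):
    """pH-dependent binding free energy for a ligand with a single titratable
    site (two protonation states), from the standard linkage between binding
    and protonation equilibria."""
    z_free  = 1.0 + 10.0 ** (pH - pKa_free)    # free-ligand protonation partition function
    z_bound = 1.0 + 10.0 ** (pH - pKa_bound)   # bound-ligand protonation partition function
    return dG_bind_protonated - RT * np.log(z_bound / z_free)

# Illustrative numbers only (not the paper's results): a benzimidazole-like
# ligand whose pKa shifts upward upon binding to a CB[7]-like host.
pH = np.linspace(2.0, 10.0, 9)
print(dG_bind_pH(dG_bind_protonated=-10.0, pKa_free=5.5, pKa_bound=8.0, pH=pH))
```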

  12. A local framework for calculating coupled cluster singles and doubles excitation energies (LoFEx-CCSD)

    DOE PAGES

    Baudin, Pablo; Bykov, Dmytro; Liakh, Dmitry I.; ...

    2017-02-22

    Here, the recently developed Local Framework for calculating Excitation energies (LoFEx) is extended to the coupled cluster singles and doubles (CCSD) model. In the new scheme, a standard CCSD excitation energy calculation is carried out within a reduced excitation orbital space (XOS), which is composed of localised molecular orbitals and natural transition orbitals determined from time-dependent Hartree–Fock theory. The presented algorithm uses a series of reduced second-order approximate coupled cluster singles and doubles (CC2) calculations to optimise the XOS in a black-box manner. This ensures that the requested CCSD excitation energies have been determined to a predefined accuracy compared to a conventional CCSD calculation. We present numerical LoFEx-CCSD results for a set of medium-sized organic molecules, which illustrate the black-box nature of the approach and the computational savings obtained for transitions that are local compared to the size of the molecule. In fact, for such local transitions, the LoFEx-CCSD scheme can be applied to molecular systems where a conventional CCSD implementation is intractable.

  13. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

    In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulate nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) in a wide range of projectile velocities.

  14. Gyrokinetic Magnetohydrodynamics and the Associated Equilibrium

    NASA Astrophysics Data System (ADS)

    Lee, W. W.; Hudson, S. R.; Ma, C. H.

    2017-10-01

    A proposed scheme for the calculation of gyrokinetic MHD and its associated equilibrium is discussed in relation to a recent paper on the subject. The scheme is based on the time-dependent gyrokinetic vorticity equation and parallel Ohm's law, as well as the associated gyrokinetic Ampere's law. This set of equations, in terms of the electrostatic potential, ϕ, and the vector potential, A, supports both spatially varying perpendicular and parallel pressure gradients and their associated currents. The MHD equilibrium can be reached when ϕ -> 0 and A becomes constant in time, which, in turn, gives ∇ · (J|| + J⊥) = 0 and the associated magnetic islands. Examples in simple cylindrical geometry will be given. The present work is partially supported by US DoE Grant DE-AC02-09CH11466.

  15. γ5 in the four-dimensional helicity scheme

    NASA Astrophysics Data System (ADS)

    Gnendiger, C.; Signer, A.

    2018-05-01

    We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.

  16. Time-Dependent Parabolic Finite Difference Formulation for Harmonic Sound Propagation in a Two-Dimensional Duct with Flow

    NASA Technical Reports Server (NTRS)

    Kreider, Kevin L.; Baumeister, Kenneth J.

    1996-01-01

    An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.

  17. Implementation of an approximate self-energy correction scheme in the orthogonalized linear combination of atomic orbitals method of band-structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Z.; Ching, W.Y.

    Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme to the orthogonalized linear combination of atomic orbital-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g …

  18. High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Nishikawa, Hiroaki

    2014-01-01

    In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, with typically less than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: (1) reformulation of the source terms in their divergence forms, and (2) a correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the same order of accuracy, with rapid convergence over each physical time step, typically less than ten Newton iterations.
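
    For reference, the first-order hyperbolic system method mentioned above replaces the parabolic diffusion operator by a relaxation system; in one dimension a commonly quoted form (not necessarily the exact system used in the paper) reads

$$
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial p}{\partial x} + s,
\qquad
T_r\,\frac{\partial p}{\partial t} = \frac{\partial u}{\partial x} - p,
$$

    where the auxiliary variable p relaxes to the solution gradient over the relaxation time T_r, so that the pseudo-steady limit recovers the original advection-diffusion equation and the gradient is obtained to the same order of accuracy as the solution itself.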

  19. Optimized effective potential in real time: Problems and prospects in time-dependent density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mundt, Michael; Kuemmel, Stephan

    2006-08-15

    The integral equation for the time-dependent optimized effective potential (TDOEP) in time-dependent density-functional theory is transformed into a set of partial-differential equations. These equations only involve occupied Kohn-Sham orbitals and orbital shifts resulting from the difference between the exchange-correlation potential and the orbital-dependent potential. Due to the success of an analog scheme in the static case, a scheme that propagates orbitals and orbital shifts in real time is a natural candidate for an exact solution of the TDOEP equation. We investigate the numerical stability of such a scheme. An approximation beyond the Krieger-Li-Iafrate approximation for the time-dependent exchange-correlation potential is analyzed.

  20. Nonlinear calculations of the time evolution of black hole accretion disks

    NASA Technical Reports Server (NTRS)

    Luo, C.

    1994-01-01

    Based on previous works on black hole accretion disks, I continue to explore the disk dynamics using the finite difference method to solve the highly nonlinear problem of time-dependent alpha disk equations. Here a radially zoned model is used to develop a computational scheme in order to accommodate functional dependence of the viscosity parameter alpha on the disk scale height and/or surface density. This work builds on the author's previous work on the steady disk structure and the linear analysis of disk dynamics, with the aim of application to X-ray emission from black hole candidates (i.e., multiple-state spectra, instabilities, QPOs, etc.).

  1. COMPARISON OF IMPLICIT SCHEMES TO SOLVE EQUATIONS OF RADIATION HYDRODYNAMICS WITH A FLUX-LIMITED DIFFUSION APPROXIMATION: NEWTON–RAPHSON, OPERATOR SPLITTING, AND LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp

    Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.

  2. Effect of Pulse Shape on Spall Strength

    NASA Astrophysics Data System (ADS)

    Smirnov, V. I.; Petrov, Yu. V.

    2018-03-01

    This paper analyzes the effect of the time-dependent shape of a load pulse on the spall strength of materials. Within the framework of a classical one-dimensional scheme, triangular pulses with signal rise and decay portions and with no signal rise portion are considered. Calculation results for the threshold characteristics of fracture for rail steel are given. The possibility of optimizing fracture by selecting a loading time with the use of an introduced characteristic of dynamic strength (pulse fracture capacity) is demonstrated. The study is carried out using a structure-time fracture criterion.
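
    The structure-time (incubation time) criterion referred to here is usually written as requiring the stress averaged over the incubation time τ to reach the static strength σ_c. A minimal sketch evaluating that condition for a triangular pulse follows; the pulse amplitude, rise/decay times, incubation time, and strength are chosen purely for illustration and are not the rail-steel values of the paper.

```python
import numpy as np

def fracture_time(sigma, t, tau, sigma_c):
    """Return the first time at which the incubation-time criterion
    (1/tau) * integral_{t-tau}^{t} sigma(t') dt' >= sigma_c is met,
    or None if the pulse never meets it."""
    dt = t[1] - t[0]
    n = max(int(round(tau / dt)), 1)
    window = np.ones(n) / n                          # trailing average over tau
    avg = np.convolve(sigma, window, mode="full")[: len(t)]
    idx = np.argmax(avg >= sigma_c)
    return t[idx] if avg[idx] >= sigma_c else None

# Hypothetical triangular pulse: linear rise over t_r, linear decay over t_d.
t = np.linspace(0.0, 4e-6, 4001)                     # s
t_r, t_d, amp = 0.5e-6, 1.5e-6, 3.0e9                # s, s, Pa (illustrative)
sigma = np.where(t < t_r, amp * t / t_r,
                 np.clip(amp * (1 - (t - t_r) / t_d), 0, None))
print(fracture_time(sigma, t, tau=1.0e-6, sigma_c=1.5e9))
```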

  3. Wavepacket dynamics and the multi-configurational time-dependent Hartree approach

    NASA Astrophysics Data System (ADS)

    Manthe, Uwe

    2017-06-01

    Multi-configurational time-dependent Hartree (MCTDH) based approaches are efficient, accurate, and versatile methods for high-dimensional quantum dynamics simulations. Applications range from detailed investigations of polyatomic reaction processes in the gas phase to high-dimensional simulations studying the dynamics of condensed phase systems described by typical solid state physics model Hamiltonians. The present article presents an overview of the different areas of application and provides a comprehensive review of the underlying theory. The concepts and guiding ideas underlying the MCTDH approach and its multi-mode and multi-layer extensions are discussed in detail. The general structure of the equations of motion is highlighted. The representation of the Hamiltonian and the correlated discrete variable representation (CDVR), which provides an efficient multi-dimensional quadrature in MCTDH calculations, are discussed. Methods which facilitate the calculation of eigenstates, the evaluation of correlation functions, and the efficient representation of thermal ensembles in MCTDH calculations are described. Different schemes for the treatment of indistinguishable particles in MCTDH calculations and recent developments towards a unified multi-layer MCTDH theory for systems including bosons and fermions are discussed.

  4. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
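
    A minimal sketch of the single-batch maximum-likelihood idea: with a prescribed structure for the background- and observation-error covariances, a single scaling parameter can be tuned by minimizing the Gaussian negative log-likelihood of one batch of innovations. The matrices, batch size, and true parameter below are assumptions for illustration, not the operational OI/SKF setup described in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(alpha, d, HBHt, R):
    """Gaussian negative log-likelihood of an innovation batch d for a
    background-error covariance scaled by the single parameter alpha."""
    S = alpha * HBHt + R
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + d @ np.linalg.solve(S, d))

# Hypothetical batch of simultaneous innovations and prescribed structure
# matrices (all numbers illustrative).
rng = np.random.default_rng(0)
m = 200
HBHt = np.eye(m) * 1.0          # assumed background-error structure at obs points
R = np.eye(m) * 0.25            # assumed observation-error covariance
d = rng.multivariate_normal(np.zeros(m), 2.0 * HBHt + R)   # truth: alpha = 2

res = minimize_scalar(neg_log_likelihood, bounds=(0.01, 10.0),
                      args=(d, HBHt, R), method="bounded")
print("estimated variance scaling:", res.x)
```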

  5. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.

  6. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
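
    The two recipes compared above are easy to state in code. The sketch below implements the generic two-point A + B/L^alpha extrapolation and the MP2-based additivity correction; the cardinal numbers, exponent, and correlation energies are illustrative placeholders, not values from the paper.

```python
def two_point_cbs(e_lo, e_hi, l_lo=2, l_hi=3, alpha=3.0):
    """Two-point extrapolation assuming E(L) = E_CBS + B / L**alpha
    (cardinal numbers L = 2 for DZ, 3 for TZ; alpha is the extrapolation
    exponent, either global or system-dependent)."""
    return (l_hi**alpha * e_hi - l_lo**alpha * e_lo) / (l_hi**alpha - l_lo**alpha)

def mp2_additivity(e_ccsd_tz, e_mp2_tz, e_mp2_cbs):
    """MP2-based additivity: correct a small-basis CCSD correlation energy
    with the MP2 basis-set increment."""
    return e_ccsd_tz + (e_mp2_cbs - e_mp2_tz)

# Illustrative correlation energies in hartree (made-up numbers).
print(two_point_cbs(e_lo=-0.2850, e_hi=-0.3050))
print(mp2_additivity(e_ccsd_tz=-0.3050, e_mp2_tz=-0.2980, e_mp2_cbs=-0.3120))
```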

  7. SWIFT: SPH With Inter-dependent Fine-grained Tasking

    NASA Astrophysics Data System (ADS)

    Schaller, Matthieu; Gonnet, Pedro; Chalk, Aidan B. G.; Draper, Peter W.

    2018-05-01

    SWIFT runs cosmological simulations on peta-scale machines for solving gravity and SPH. It uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles, combining these with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single fixed but time-variable softening length for all the particles. Many useful external potentials are also available, such as galaxy haloes or stratified boxes that are used in idealised problems. SWIFT implements a standard LCDM cosmology background expansion and solves the equations in a comoving frame; equations of state of dark-energy evolve with scale-factor. The structure of the code allows implementation for modified-gravity solvers or self-interacting dark matter schemes to be implemented. Many hydrodynamics schemes are implemented in SWIFT and the software allows users to add their own.

  8. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    NASA Astrophysics Data System (ADS)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU scheme. In contrast with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research to explore the computational cost of the NWF method in more complex problems is required.

  9. The coupled three-dimensional wave packet approach to reactive scattering

    NASA Astrophysics Data System (ADS)

    Marković, Nikola; Billing, Gert D.

    1994-01-01

    A recently developed scheme for time-dependent reactive scattering calculations using three-dimensional wave packets is applied to the D+H2 system. The present method is an extension of a previously published semiclassical formulation of the scattering problem and is based on the use of hyperspherical coordinates. The convergence requirements are investigated by detailed calculations for total angular momentum J equal to zero and the general applicability of the method is demonstrated by solving the J=1 problem. The inclusion of the geometric phase is also discussed and its effect on the reaction probability is demonstrated.

  10. Plume trajectory formation under stack tip self-enveloping

    NASA Astrophysics Data System (ADS)

    Gribkov, A. M.; Zroichikov, N. A.; Prokhorov, V. B.

    2017-10-01

    The phenomenon of stack tip self-enveloping and its influence upon the conditions of plume formation and on the trajectory of its motion are considered. Processes occurring in the initial part of the plume are described for the interaction, at high wind velocities, between the vertically directed flue gases flowing out of the stack and the horizontally moving air flow, which leads to the formation of a flag-like plume. Conditions responsible for the origin and evolution of the interaction between these flows are demonstrated. For the first time, a plume formed under these conditions without bifurcation is registered; a photo image thereof is presented. A scheme for the calculation of the plume trajectory is proposed, the quantitative characteristics of which are obtained from field observations. The wind velocity and direction, air temperature, and atmospheric turbulence at the level of the initial part of the trajectory were obtained from an automatic meteorological system (mounted on the outer parts of the 250 m high stack no. 1 at the Naberezhnye Chelny TEPP plant) as well as from photographing and theodolite sighting of the smoke puffs' trajectory, taking into account their velocity within its initial part. The calculation scheme is supplemented with a new acting force—the force of self-enveloping. A comparison of the new calculation scheme with the previous one reveals a significant contribution of this force to the development of the trajectory. A comparison of the full-scale field data with the results of the calculation according to the proposed new scheme is made. The proposed calculation scheme has allowed us to extend the application of the existing technique to the range of high wind velocities. This approach makes it possible to simulate and investigate the trajectory and full rise height of the plume above the mouth of the flue pipes, depending on various modal and meteorological parameters and on the interrelation between the dynamic and thermal components of the rise, as well as to obtain a universal calculation expression for determining the height of the plume rise for different classes of atmospheric stability.

  11. Systolic MOLLI T1 mapping with heart-rate-dependent pulse sequence sampling scheme is feasible in patients with atrial fibrillation.

    PubMed

    Zhao, Lei; Li, Songnan; Ma, Xiaohai; Greiser, Andreas; Zhang, Tianjing; An, Jing; Bai, Rong; Dong, Jianzeng; Fan, Zhanming

    2016-03-15

    T1 mapping enables assessment of myocardial characteristics. As the most common type of arrhythmia, atrial fibrillation (AF) is often accompanied by a variety of cardiac pathologies, whereby the irregular and usually rapid ventricle rate of AF may cause inaccurate T1 estimation due to mis-triggering and inadequate magnetization recovery. We hypothesized that systolic T1 mapping with a heart-rate-dependent (HRD) pulse sequence scheme may overcome this issue. 30 patients with AF and 13 healthy volunteers were enrolled and underwent cardiovascular magnetic resonance (CMR) at 3 T. CMR was repeated for 3 patients after electric cardioversion and for 2 volunteers after lowering heart rate (HR). A Modified Look-Locker Inversion Recovery (MOLLI) sequence was acquired before and 15 min after administration of 0.1 mmol/kg gadopentetate dimeglumine. For AF patients, both the fixed 5(3)3/4(1)3(1)2 and the HRD sampling scheme were performed at diastole and systole, respectively. The HRD pulse sequence sampling scheme was 5(n)3/4(n)3(n)2, where n was determined by the heart rate to ensure adequate magnetization recovery. Image quality of T1 maps was assessed. T1 times were measured in myocardium and blood. Extracellular volume fraction (ECV) was calculated. In volunteers with repeated T1 mapping, the myocardial native T1 and ECV generated from the 1st fixed sampling scheme were smaller than from the 1st HRD and 2nd fixed sampling scheme. In healthy volunteers, the overall native T1 times and ECV of the left ventricle (LV) in diastolic T1 maps were greater than in systolic T1 maps (P < 0.01, P < 0.05). In the 3 AF patients that had received electrical cardioversion therapy, the myocardial native T1 times and ECV generated from the fixed sampling scheme were smaller than in the 1st and 2nd HRD sampling scheme (all P < 0.05). In patients with AF (HR: 88 ± 20 bpm, HR fluctuation: 12 ± 9 bpm), more T1 maps with artifact were found in diastole than in systole (P < 0.01). The overall native T1 times and ECV of the left ventricle (LV) in diastolic T1 maps were greater than systolic T1 maps, either with fixed or HRD sampling scheme (all P < 0.05). Systolic MOLLI T1 mapping with heart-rate-dependent pulse sequence scheme can improve image quality and avoid T1 underestimation. It is feasible and with further validation may extend clinical applicability of T1 mapping to patients with atrial fibrillation.
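
    A hypothetical illustration of the heart-rate-dependent idea: the number of recovery beats n in a 5(n)3/4(n)3(n)2 scheme can be chosen so that the pause n x RR covers a target recovery time. Both the 5 s target and the rule itself are assumptions for illustration, not necessarily the criterion used in the study.

```python
import math

def recovery_beats(heart_rate_bpm, target_recovery_s=5.0):
    """Pick the number of recovery heartbeats n for a 5(n)3/4(n)3(n)2 MOLLI
    variant so that the pause n * RR covers a fixed recovery time.  The 5 s
    target here is an illustrative placeholder, not the study's rule."""
    rr = 60.0 / heart_rate_bpm                 # RR interval in seconds
    return max(1, math.ceil(target_recovery_s / rr))

for hr in (60, 88, 110):
    print(hr, "bpm ->", recovery_beats(hr), "recovery beats")
```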

  12. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    NASA Astrophysics Data System (ADS)

    Regnier, D.; Dubray, N.; Verrière, M.; Schunck, N.

    2018-04-01

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present the version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percents).
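
    A minimal sketch of Krylov-style time integration in the same spirit as the explicit propagator described above: the action of the short-time propagator exp(-iH dt) on the collective wave function is evaluated without forming the dense matrix exponential, here via SciPy's expm_multiply on a stand-in sparse Hamiltonian. This illustrates the technique only; it is not the FELIX-2.0 implementation.

```python
import numpy as np
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import expm_multiply

# Small sparse Hermitian matrix as a stand-in for the discretized
# collective Hamiltonian (purely illustrative).
n = 500
A = sprandom(n, n, density=0.01, random_state=1)
H = (A + A.T) * 0.5 + identity(n) * 2.0

psi = np.zeros(n, dtype=complex)
psi[n // 2] = 1.0                      # localized initial collective state
dt, nsteps = 0.01, 100

for _ in range(nsteps):
    # Krylov-type action of exp(-i H dt) on psi, applied step by step.
    psi = expm_multiply(-1j * dt * H, psi)
```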

  13. Anharmonic, dimensionality and size effects in phonon transport

    NASA Astrophysics Data System (ADS)

    Thomas, Iorwerth O.; Srivastava, G. P.

    2017-12-01

    We have developed and employed a numerically efficient semi-ab initio theory, based on density-functional and relaxation-time schemes, to examine anharmonic, dimensionality, and size effects in phonon transport in three- and two-dimensional solids of different crystal symmetries. Our method uses third- and fourth-order terms in the crystal Hamiltonian expressed in terms of a temperature-dependent Grüneisen constant. All input to the numerical calculations is generated from phonon calculations based on density-functional perturbation theory. It is found that four-phonon processes make an important and measurable contribution to the lattice thermal resistivity above the Debye temperature. From our numerical results for bulk Si, bulk Ge, bulk MoS2 and monolayer MoS2, we find that the sample-length dependence of the phonon conductivity is significantly stronger in low-dimensional solids.

  14. GHI calculation sensitivity on microphysics, land- and cumulus parameterization in WRF over the Reunion Island

    NASA Astrophysics Data System (ADS)

    De Meij, A.; Vinuesa, J.-F.; Maupas, V.

    2018-05-01

    The sensitivity of calculated global horizontal irradiation (GHI) values in the Weather Research and Forecasting (WRF) model to different microphysics and dynamics schemes is studied. Thirteen sensitivity simulations were performed in which the microphysics, cumulus parameterization schemes, and land surface models were changed. Firstly, we evaluated the model's performance by comparing calculated GHI values for the Base Case with observations for the Reunion Island for 2014. In general, the model calculates the largest bias during the austral summer. This indicates that the model is less accurate in timing the formation and dissipation of clouds during the summer, when higher water vapor quantities are present in the atmosphere than during the austral winter. Secondly, the sensitivity of calculated GHI values to changing the microphysics, cumulus parameterization, and land surface models is evaluated. The sensitivity simulations showed that, on changing the microphysics from the Thompson scheme (or the Single-Moment 6-class scheme) to the Morrison double-moment scheme, the relative bias improves from 45% to 10%. The underlying reason for this improvement is that the Morrison double-moment scheme predicts the mass and number concentrations of five hydrometeors, which helps to improve the calculation of the densities, size, and lifetime of the cloud droplets, while the single-moment schemes predict only the mass, and for fewer hydrometeors. Changing the cumulus parameterization schemes and land surface models does not have a large impact on GHI calculations.

  15. A numerical scheme to calculate temperature and salinity dependent air-water transfer velocities for any gas

    NASA Astrophysics Data System (ADS)

    Johnson, M. T.

    2010-10-01

    The ocean-atmosphere flux of a gas can be calculated from its measured or estimated concentration gradient across the air-sea interface and the transfer velocity (a term representing the conductivity of the layers either side of the interface with respect to the gas of interest). Traditionally the transfer velocity has been estimated from empirical relationships with wind speed, and then scaled by the Schmidt number of the gas being transferred. Complex, physically based models of transfer velocity (based on more physical forcings than wind speed alone), such as the NOAA COARE algorithm, have more recently been applied to well-studied gases such as carbon dioxide and DMS (although many studies still use the simpler approach for these gases), but there is a lack of validation of such schemes for other, more poorly studied gases. The aim of this paper is to provide a flexible numerical scheme which will allow the estimation of transfer velocity for any gas as a function of wind speed, temperature and salinity, given data on the solubility and liquid molar volume of the particular gas. New and existing parameterizations (including a novel empirical parameterization of the salinity-dependence of Henry's law solubility) are brought together into a scheme implemented as a modular, extensible program in the R computing environment which is available in the supplementary online material accompanying this paper; along with input files containing solubility and structural data for ~90 gases of general interest, enabling the calculation of their total transfer velocities and component parameters. Comparison of the scheme presented here with alternative schemes and methods for calculating air-sea flux parameters shows good agreement in general. It is intended that the various components of this numerical scheme should be applied only in the absence of experimental data providing robust values for parameters for a particular gas of interest.
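
    A minimal sketch of the two-layer (water-side plus air-side) construction that such a scheme assembles, written in Python rather than R: the water-side velocity is scaled from a reference Schmidt number of 660 and combined in series with an air-side term through the dimensionless Henry solubility. The wind-speed coefficients and the crude air-side scaling are illustrative placeholders, not the parameterizations adopted in the paper.

```python
def total_transfer_velocity(u10, sc_water, kh_dimensionless, ka_over_u10=6.5e-4):
    """Two-layer total transfer velocity K_w (m/s), referenced to the water
    phase.  The quadratic water-side wind-speed parameterization and the
    linear air-side scaling are illustrative placeholders; a full scheme
    derives Schmidt numbers and solubilities from temperature and salinity."""
    # water-side velocity, scaled from a reference Schmidt number of 660
    k660 = (0.222 * u10**2 + 0.333 * u10) / 3.6e5     # cm/h -> m/s (placeholder fit)
    kw = k660 * (sc_water / 660.0) ** -0.5
    # air-side velocity, crudely proportional to wind speed (placeholder)
    ka = ka_over_u10 * u10
    # series resistances: 1/K_w = 1/kw + 1/(KH * ka), KH = Cgas/Cliquid
    return 1.0 / (1.0 / kw + 1.0 / (kh_dimensionless * ka))

print(total_transfer_velocity(u10=7.0, sc_water=660.0, kh_dimensionless=0.4))
```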

  16. Dynamic control of droplets and pockets formation in homogeneous porous media immiscible displacements

    NASA Astrophysics Data System (ADS)

    Lins, T. F.; Azaiez, J.

    2018-03-01

    Interfacial instabilities of immiscible two-phase radial flow displacements in homogeneous porous media are analyzed for constant and time-dependent sinusoidal cyclic injection schemes. The analysis is carried out through numerical simulations based on the immersed interface and level set methods. The effects of the fluid properties and the injection flow parameters, namely, the period and the amplitude, on the formation of droplets and pockets are analyzed. It was found that larger capillary numbers or smaller viscosity ratios lead to more droplets/pockets that tend to appear earlier in time. Furthermore, the period and amplitude of the cyclic schemes were found to have a strong effect on droplets/pockets formations, and depending on their values, these can be enhanced or attenuated. In particular, the results revealed that there is a critical amplitude above which droplets and pockets formation is suppressed up to a specified time. This critical amplitude depends on the fluid properties, namely, the viscosity ratio and surface tension as well as on the period of the time-dependent scheme. The results of this study indicate that it is possible to use time-dependent cyclic schemes to control the formation and development of droplets/pockets in the flow and in particular to delay their appearance through an appropriate combination of the displacement scheme's amplitude and period.

  17. Calculations of 3D compressible flows using an efficient low diffusion upwind scheme

    NASA Astrophysics Data System (ADS)

    Hu, Zongjun; Zha, Gecheng

    2005-01-01

    A newly suggested E-CUSP upwind scheme is employed for the first time to calculate 3D flows of propulsion systems. The E-CUSP scheme contains the total energy in the convective vector and is fully consistent with the characteristic directions. The scheme is proved to have low diffusion and high CPU efficiency. The computed cases in this paper include a transonic nozzle with circular-to-rectangular cross-section, a transonic duct with shock wave/turbulent boundary layer interaction, and a subsonic 3D compressor cascade. The computed results agree well with the experiments. The new scheme is proved to be accurate, efficient and robust for the 3D calculations of the flows in this paper.

  18. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
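
    The parareal predictor-corrector iteration described above can be sketched generically: a cheap coarse propagator G provides the serial prediction and a fine propagator F (run in parallel over the time slices in a real code) supplies the correction. The toy scalar ODE and propagators below are stand-ins for the coarse fixed-rod and fine neutron-diffusion solvers of the paper.

```python
import numpy as np

def parareal(f_fine, g_coarse, u0, t_grid, n_iter):
    """Minimal parareal iteration
        U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
    over the coarse time grid t_grid.  Both propagators advance a state over
    one coarse interval; the F evaluations are the parallelizable part."""
    N = len(t_grid) - 1
    U = [u0]
    for n in range(N):                                   # initial coarse sweep
        U.append(g_coarse(U[-1], t_grid[n], t_grid[n + 1]))
    for _ in range(n_iter):
        F = [f_fine(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        G_old = [g_coarse(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        U_new = [u0]
        for n in range(N):                               # sequential correction
            U_new.append(g_coarse(U_new[-1], t_grid[n], t_grid[n + 1])
                         + F[n] - G_old[n])
        U = U_new
    return U

# Toy problem du/dt = -u: fine = 100 small explicit-Euler steps per interval,
# coarse = a single large step.
fine = lambda u, t0, t1: u * (1.0 - (t1 - t0) / 100.0) ** 100
coarse = lambda u, t0, t1: u * (1.0 - (t1 - t0))
sol = parareal(fine, coarse, u0=1.0, t_grid=np.linspace(0, 0.5, 6), n_iter=3)
print(sol[-1], np.exp(-0.5))
```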

  19. A NUMERICAL SCHEME FOR SPECIAL RELATIVISTIC RADIATION MAGNETOHYDRODYNAMICS BASED ON SOLVING THE TIME-DEPENDENT RADIATIVE TRANSFER EQUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohsuga, Ken; Takahashi, Hiroyuki R.

    2016-02-20

    We develop a numerical scheme for solving the equations of fully special relativistic, radiation magnetohydrodynamics (MHDs), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by the angular quadrature of the intensity. In the present method, conservation of total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only the isotropic scattering but also the Thomson scattering. The numerical method of MHDs is the same as that of our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas–radiation interaction, are implicitly integrated. Our code is suitable for massive parallel computing. We present that our code shows reasonable results in some numerical tests for propagating radiation and radiation hydrodynamics. Particularly, the correct solution is given even in the optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
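
    The angular-quadrature step mentioned above, in which moments of the specific intensity are formed from a discrete set of directions and weights, can be sketched as follows; the six-direction quadrature is a crude stand-in, not the paper's actual angle set.

```python
import numpy as np

def radiation_moments(intensity, directions, weights, c=1.0):
    """Angular quadrature of the specific intensity I(n_i): radiation energy
    density E, radiation flux F, and radiation pressure tensor P."""
    E = np.sum(weights * intensity) / c
    F = np.sum(weights[:, None] * directions * intensity[:, None], axis=0)
    P = np.einsum("a,a,ai,aj->ij", weights, intensity, directions, directions) / c
    return E, F, P

# Toy isotropic field on a crude 6-direction quadrature
dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
w = np.full(6, 4 * np.pi / 6)
I = np.ones(6)
E, F, P = radiation_moments(I, dirs, w)
print(E, F)   # expect E = 4*pi/c and F = 0 for an isotropic field
```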

  20. An interactive ocean surface albedo scheme (OSAv1.0): formulation and evaluation in ARPEGE-Climat (V6.1) and LMDZ (V5A)

    NASA Astrophysics Data System (ADS)

    Séférian, Roland; Baek, Sunghye; Boucher, Olivier; Dufresne, Jean-Louis; Decharme, Bertrand; Saint-Martin, David; Roehrig, Romain

    2018-01-01

    Ocean surface represents roughly 70 % of the Earth's surface, playing a large role in the partitioning of the energy flow within the climate system. The ocean surface albedo (OSA) is an important parameter in this partitioning because it governs the amount of energy penetrating into the ocean or reflected towards space. The old OSA schemes in the ARPEGE-Climat and LMDZ models only resolve the latitudinal dependence in an ad hoc way without an accurate representation of the solar zenith angle dependence. Here, we propose a new interactive OSA scheme suited for Earth system models, which enables coupling between Earth system model components like surface ocean waves and marine biogeochemistry. This scheme resolves spectrally the various contributions of the surface for direct and diffuse solar radiation. The implementation of this scheme in two Earth system models leads to substantial improvements in simulated OSA. At the local scale, models using the interactive OSA scheme better replicate the day-to-day distribution of OSA derived from ground-based observations in contrast to old schemes. At global scale, the improved representation of OSA for diffuse radiation reduces model biases by up to 80 % over the tropical oceans, reducing annual-mean model-data error in surface upwelling shortwave radiation by up to 7 W m-2 over this domain. The spatial correlation coefficient between modeled and observed OSA at monthly resolution has been increased from 0.1 to 0.8. Despite its complexity, this interactive OSA scheme is computationally efficient for enabling precise OSA calculation without penalizing the elapsed model time.

  1. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration were investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes were modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code, assessment of performance, and demonstration of flexibility.
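
    A minimal sketch of MUSCL-type reconstruction combined with a multistage time-stepping scheme, reduced to scalar 1D advection; the minmod limiter, the three stage coefficients, and the simple upwind flux are generic illustrative choices, not the optimized coefficients or the Roe/van Leer fluxes developed in the report.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter used in the MUSCL reconstruction."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def residual(u, v, dx):
    """Upwind flux of MUSCL-reconstructed interface states for u_t + v u_x = 0 (v > 0, periodic)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * slope                   # left state at face i+1/2, taken from cell i
    flux = v * u_face                          # upwind flux for v > 0
    return -(flux - np.roll(flux, 1)) / dx

def multistage_step(u, v, dx, dt, alphas=(0.25, 0.5, 1.0)):
    """Low-storage multistage scheme u_k = u_n + alpha_k*dt*R(u_{k-1}); coefficients are illustrative."""
    u0, uk = u.copy(), u.copy()
    for a in alphas:
        uk = u0 + a * dt * residual(uk, v, dx)
    return uk

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
dx = x[1] - x[0]
for _ in range(100):
    u = multistage_step(u, v=1.0, dx=dx, dt=0.5 * dx)
print(u.max())   # the pulse is advected with little clipping of its peak
```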

  2. Analytic calculations of anharmonic infrared and Raman vibrational spectra

    PubMed Central

    Louant, Orian; Ruud, Kenneth

    2016-01-01

    Using a recently developed recursive scheme for the calculation of high-order geometric derivatives of frequency-dependent molecular properties [Ringholm et al., J. Comp. Chem., 2014, 35, 622], we present the first analytic calculations of anharmonic infrared (IR) and Raman spectra including anharmonicity both in the vibrational frequencies and in the IR and Raman intensities. In the case of anharmonic corrections to the Raman intensities, this involves the calculation of fifth-order energy derivatives—that is, the third-order geometric derivatives of the frequency-dependent polarizability. The approach is applicable to both Hartree–Fock and Kohn–Sham density functional theory. Using generalized vibrational perturbation theory to second order, we have calculated the anharmonic infrared and Raman spectra of the non- and partially deuterated isotopomers of nitromethane, where the inclusion of anharmonic effects introduces combination and overtone bands that are observed in the experimental spectra. For the major features of the spectra, the inclusion of anharmonicities in the calculation of the vibrational frequencies is more important than anharmonic effects in the calculated infrared and Raman intensities. Using methanimine as a trial system, we demonstrate that the analytic approach avoids errors in the calculated spectra that may arise if numerical differentiation schemes are used. PMID:26784673

  3. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regnier, D.; Dubray, N.; Verriere, M.

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).

  4. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE PAGES

    Regnier, D.; Dubray, N.; Verriere, M.; ...

    2017-12-20

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
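
    Feature (iv), the explicit Krylov approximation of the time propagator, can be sketched for a generic Hermitian operator: a short Lanczos recursion builds a small subspace in which exp(-iH dt) is applied exactly. The random test matrix and the subspace size are illustrative assumptions, not the FELIX collective Hamiltonian or its settings.

```python
import numpy as np

def krylov_propagate(H, psi, dt, m=12):
    """Approximate exp(-i*H*dt) @ psi in an m-dimensional Krylov subspace (Lanczos, Hermitian H)."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    T = np.zeros((m, m), dtype=complex)
    beta0 = np.linalg.norm(psi)
    V[:, 0] = psi / beta0
    for j in range(m):
        w = H @ V[:, j]
        if j > 0:
            w -= T[j - 1, j] * V[:, j - 1]
        T[j, j] = np.vdot(V[:, j], w)
        w -= T[j, j] * V[:, j]
        if j + 1 < m:
            beta = np.linalg.norm(w)
            T[j, j + 1] = T[j + 1, j] = beta
            if beta < 1e-14:          # invariant subspace found; truncate
                m = j + 1
                break
            V[:, j + 1] = w / beta
    # Exponentiate the small projected matrix via its eigendecomposition and map back.
    evals, evecs = np.linalg.eigh(T[:m, :m])
    small = evecs @ (np.exp(-1j * evals * dt) * evecs[0].conj())
    return beta0 * (V[:, :m] @ small)

# Example with a random Hermitian matrix; the propagation should preserve the norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2.0
psi0 = rng.standard_normal(200).astype(complex)
psi_dt = krylov_propagate(H, psi0, dt=0.05)
print(np.linalg.norm(psi_dt), np.linalg.norm(psi0))
```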

  5. Resonant Raman spectra of diindenoperylene thin films

    NASA Astrophysics Data System (ADS)

    Scholz, R.; Gisslén, L.; Schuster, B.-E.; Casu, M. B.; Chassé, T.; Heinemeyer, U.; Schreiber, F.

    2011-01-01

    Resonant and preresonant Raman spectra obtained on diindenoperylene (DIP) thin films are interpreted with calculations of the deformation of a relaxed excited molecule with density functional theory (DFT). The comparison of excited state geometries based on time-dependent DFT or on a constrained DFT scheme with observed absorption spectra of dissolved DIP reveals that the deformation pattern deduced from constrained DFT is more reliable. Most observed Raman peaks can be assigned to calculated A_g-symmetric breathing modes of DIP or their combinations. As the position of one of the laser lines used falls into a highly structured absorption band, we have carefully analyzed the Raman excitation profile arising from the frequency dependence of the dielectric tensor. This procedure gives Raman cross sections in good agreement with the observed relative intensities, both in the fully resonant and in the preresonant case.

  6. Resonant Raman spectra of diindenoperylene thin films.

    PubMed

    Scholz, R; Gisslén, L; Schuster, B-E; Casu, M B; Chassé, T; Heinemeyer, U; Schreiber, F

    2011-01-07

    Resonant and preresonant Raman spectra obtained on diindenoperylene (DIP) thin films are interpreted with calculations of the deformation of a relaxed excited molecule with density functional theory (DFT). The comparison of excited state geometries based on time-dependent DFT or on a constrained DFT scheme with observed absorption spectra of dissolved DIP reveals that the deformation pattern deduced from constrained DFT is more reliable. Most observed Raman peaks can be assigned to calculated A(g)-symmetric breathing modes of DIP or their combinations. As the position of one of the laser lines used falls into a highly structured absorption band, we have carefully analyzed the Raman excitation profile arising from the frequency dependence of the dielectric tensor. This procedure gives Raman cross sections in good agreement with the observed relative intensities, both in the fully resonant and in the preresonant case.

  7. The constant displacement scheme for tracking particles in heterogeneous aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, X.H.; Gomez-Hernandez, J.J.

    1996-01-01

    Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster for the same degree of accuracy than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural-log transmissivity variance of 4 can be 8.6 times faster than using the constant time step scheme.
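
    A minimal sketch of the constant displacement idea: each particle's time step is chosen from its local velocity so that every step covers roughly the same distance, instead of all particles sharing one global time step. The 1D velocity field, the random-walk dispersion term, and the parameter values are illustrative assumptions.

```python
import numpy as np

def track_constant_displacement(x0, velocity, t_end, dl, D=0.0, rng=None):
    """Advance one particle with a per-step dt = dl / |v(x)| so each step covers a distance of about dl."""
    rng = rng or np.random.default_rng()
    x, t = x0, 0.0
    while t < t_end:
        v = velocity(x)
        dt = min(dl / max(abs(v), 1e-12), t_end - t)            # constant displacement, capped at t_end
        x += v * dt                                              # advection
        if D > 0.0:
            x += np.sqrt(2.0 * D * dt) * rng.standard_normal()  # random-walk dispersion
        t += dt
    return x

# Heterogeneous velocity field with alternating fast and slow zones (illustrative).
velocity = lambda x: 10.0 if (x % 2.0) < 1.0 else 0.1
finals = [track_constant_displacement(0.0, velocity, t_end=5.0, dl=0.05, D=1e-3) for _ in range(100)]
print(np.mean(finals))
```

    In the fast zones the adaptive step is short, and in the slow zones it is long; a constant time step small enough for the fast zones would waste many steps on the slow ones.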

  8. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
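
    The structural difference between the Semi-Implicit and Fully Implicit discretizations can be illustrated on a scalar model equation du/dt = -k(u)(u - u_eq) with a solution-dependent coefficient; the choice of k(u) and the fixed-point solve are illustrative assumptions, and this toy problem is not meant to reproduce the paper's stability results for the Compton-scattering Fokker-Planck operator.

```python
u_eq = 1.0
k = lambda u: 50.0 * u ** 2        # illustrative temperature-like dependent coefficient

def si_step(u, dt):
    """Semi-Implicit: backward Euler in u, coefficient frozen at its beginning-of-time-step value."""
    return (u + dt * k(u) * u_eq) / (1.0 + dt * k(u))

def fi_step(u, dt, iters=20):
    """Fully Implicit: coefficient evaluated at the end-of-time-step value (simple fixed-point solve)."""
    u_new = u
    for _ in range(iters):
        u_new = (u + dt * k(u_new) * u_eq) / (1.0 + dt * k(u_new))
    return u_new

u_si = u_fi = 4.0
for _ in range(10):                # deliberately large time step so the two updates differ visibly
    u_si, u_fi = si_step(u_si, dt=0.5), fi_step(u_fi, dt=0.5)
    print(f"SI: {u_si:8.4f}   FI: {u_fi:8.4f}")
```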

  9. Parallel computation of fluid-structural interactions using high resolution upwind schemes

    NASA Astrophysics Data System (ADS)

    Hu, Zongjun

    An efficient and accurate solver is developed to simulate the non-linear fluid-structural interactions in turbomachinery flutter flows. A new low diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with the dual-time stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation, and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat plate boundary layer, a transonic converging-diverging nozzle, and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade, and a transonic channel. The Zha CUSP schemes prove to be accurate, robust, and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady-state separation flow patterns and their unsteady oscillation characteristics. Leading edge vortex shedding is the mechanism behind the unsteady characteristics of the high-incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full-scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles. The end wall influence and the blade stability are studied and compared under different frequencies and incidence angles. This work is the first application of the Zha CUSP schemes to moving grid systems and to 2D and 3D calculations, the first use of the implicit Gauss-Seidel iteration with dual time stepping for moving grid systems, and the first full-scale calculation of the NASA flutter cascade.

  10. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivative of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.

  11. Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de

    2016-02-07

    We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme, which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (−6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (−15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while the variant with neutral fragments is numerically more efficient at reasonable accuracy.

  12. Parametrization of Combined Quantum Mechanical and Molecular Mechanical Methods: Bond-Tuned Link Atoms.

    PubMed

    Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G

    2018-05-30

    Combined quantum mechanical and molecular mechanical (QM/MM) methods are the most powerful available methods for high-level treatments of subsystems of very large systems. The treatment of the QM-MM boundary strongly affects the accuracy of QM/MM calculations. For QM/MM calculations having covalent bonds cut by the QM-MM boundary, it has been proposed previously to use a scheme with system-specific tuned fluorine link atoms. Here, we propose a broadly parametrized scheme where the parameters of the tuned F link atoms depend only on the type of bond being cut. In the proposed new scheme, the F link atom is tuned for systems with a certain type of cut bond at the QM-MM boundary instead of for a specific target system, and the resulting link atoms are called bond-tuned link atoms. In principle, the bond-tuned link atoms can be as convenient as the popular H link atoms, and they are especially well adapted for high-throughput and accurate QM/MM calculations. Here, we present the parameters for several kinds of cut bonds along with a set of validation calculations that confirm that the proposed bond-tuned link-atom scheme can be as accurate as the system-specific tuned F link-atom scheme.

  13. A Navier-Stokes solution of the three-dimensional viscous compressible flow in a centrifugal compressor impeller

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.

    1977-01-01

    A two-dimensional time-dependent computer code was utilized to calculate the three-dimensional steady flow within the impeller blading. The numerical method is an explicit time marching scheme in two spatial dimensions. Initially, an inviscid solution is generated on the hub blade-to-blade surface by the method of Katsanis and McNally (1973). Starting with the known inviscid solution, the viscous effects are calculated through iteration. The approach makes it possible to take into account principal impeller fluid-mechanical effects. It is pointed out that the second iterate provides a complete solution to the three-dimensional, compressible, Navier-Stokes equations for flow in a centrifugal impeller. The problems investigated are related to the study of a radial impeller and a backswept impeller.

  14. Benchmarking the Bethe–Salpeter Formalism on a Standard Organic Molecular Set

    PubMed Central

    2015-01-01

    We perform benchmark calculations of the Bethe–Salpeter vertical excitation energies for the set of 28 molecules constituting the well-known Thiel’s set, complemented by a series of small molecules representative of the dye chemistry field. We show that Bethe–Salpeter calculations based on a molecular orbital energy spectrum obtained with non-self-consistent G0W0 calculations starting from semilocal DFT functionals dramatically underestimate the transition energies. Starting from the popular PBE0 hybrid functional significantly improves the results even though this leads to an average −0.59 eV redshift compared to reference calculations for Thiel’s set. It is shown, however, that a simple self-consistent scheme at the GW level, with an update of the quasiparticle energies, not only leads to a much better agreement with reference values, but also significantly reduces the impact of the starting DFT functional. On average, the Bethe–Salpeter scheme based on self-consistent GW calculations comes close to the best time-dependent DFT calculations with the PBE0 functional with a 0.98 correlation coefficient and a 0.18 (0.25) eV mean absolute deviation compared to TD-PBE0 (theoretical best estimates) with a tendency to be red-shifted. We also observe that TD-DFT and the standard adiabatic Bethe–Salpeter implementation may differ significantly for states implying a large multiple excitation character. PMID:26207104

  15. Incorporation of Three-dimensional Radiative Transfer into a Very High Resolution Simulation of Horizontally Inhomogeneous Clouds

    NASA Astrophysics Data System (ADS)

    Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.

    2016-12-01

    A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in a very high resolution (with a spatial grid on the order of 10 m) simulation of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds appears to cause local heating/cooling in the atmosphere at fine spatial scales. It is, however, usually difficult to estimate the 3D effects, because 3D radiative transfer often requires large computational resources compared to the plane-parallel approximation. This study attempts to incorporate a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because this scheme has an advantage when calculating a sequence of time evolution (i.e., the scene at a given time differs little from that at the previous time step). This scheme is also appropriate for calculation of radiation with strong absorption, such as in the infrared regions. For efficient computation, this scheme utilizes several techniques, e.g., the multigrid method for the iterative solution and a correlated-k distribution method refined for efficient approximation of the wavelength integration. For a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results show that the horizontal divergences and convergences of the infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating in clouds. In future work, these 3D effects on radiative heating/cooling can be included in atmospheric numerical models.

  16. Potential Energy Surface for Large Barrierless Reaction Systems: Application to the Kinetic Calculations of the Dissociation of Alkanes and the Reverse Recombination Reactions.

    PubMed

    Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan

    2018-05-31

    The isodesmic reaction method is applied to calculate the potential energy surface (PES) along the reaction coordinate and the rate constants of barrierless reactions, namely the unimolecular dissociation reactions of alkanes to form two alkyl radicals and their reverse recombination reactions. The reaction class is divided into 10 subclasses depending upon the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at the UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, the PESs at the B3LYP level and the corrected PESs are compared with the PESs at the CASPT2/aug-cc-pVTZ level for 13 representative reactions; the deviations of the PESs at the B3LYP level are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated with meaningful accuracy at a low level of ab initio theory using our correction scheme. High-pressure-limit rate constants and pressure-dependent rate constants of these reactions are calculated based on their corrected PESs, and the results show that the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of decomposition reactions of alkanes and their reverse reactions has been studied. The present work provides an effective method to generate meaningfully accurate PESs for large molecular systems.

  17. Solution of the one-dimensional consolidation theory equation with a pseudospectral method

    USGS Publications Warehouse

    Sepulveda, N.; ,

    1991-01-01

    The one-dimensional consolidation theory equation is solved for an aquifer system using a pseudospectral method. The spatial derivatives are computed using Fast Fourier Transforms and the time derivative is solved using a fourth-order Runge-Kutta scheme. The computer model calculates compaction based on the void ratio changes accumulated during the simulated periods of time. Compactions and expansions resulting from groundwater withdrawals and recharges are simulated for two observation wells in Santa Clara Valley and two in San Joaquin Valley, California. Field data previously published are used to obtain mean values for the soil grain density and the compression index and to generate depth-dependent profiles for hydraulic conductivity and initial void ratio. The water-level plots for the wells studied were digitized and used to obtain the time dependent profiles of effective stress.
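
    A minimal sketch of the pseudospectral approach described above, with spatial derivatives computed by FFT and time integration by classical fourth-order Runge-Kutta, applied to a simple periodic diffusion problem; the constant diffusivity and periodic domain are illustrative assumptions, not the depth-dependent consolidation problem solved in the paper.

```python
import numpy as np

def rhs(u, kappa, k):
    """Right-hand side of u_t = kappa * u_xx with the second derivative computed spectrally."""
    u_hat = np.fft.fft(u)
    u_xx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
    return kappa * u_xx

def rk4_step(u, dt, kappa, k):
    """Classical fourth-order Runge-Kutta step."""
    k1 = rhs(u, kappa, k)
    k2 = rhs(u + 0.5 * dt * k1, kappa, k)
    k3 = rhs(u + 0.5 * dt * k2, kappa, k)
    k4 = rhs(u + dt * k3, kappa, k)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n, L, kappa = 128, 2.0 * np.pi, 0.1
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi     # spectral wavenumbers
u = np.sin(x)                                     # this mode decays as exp(-kappa * t)
dt = 1e-3
for _ in range(1000):
    u = rk4_step(u, dt, kappa, k)
print(u.max(), np.exp(-kappa * 1.0))              # the two values should nearly coincide
```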

  18. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 2: Unsteady ducted propfan analysis computer program users manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1991-01-01

    The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derrida, B.; Nadal, J.P.

    It is possible to construct diluted asymmetric models of neural networks for which the dynamics can be calculated exactly. The authors test several learning schemes, in particular, models for which the values of the synapses remain bounded and depend on the history. Our analytical results on the relative efficiencies of the various learning schemes are qualitatively similar to the corresponding ones obtained numerically on fully connected symmetric networks.

  20. Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f

    NASA Astrophysics Data System (ADS)

    Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi

    2018-03-01

    We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the \\overline{MS} scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the \\overline{MS} renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h→4 f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and ready for application.

  1. Transonic cascade flow calculations using non-periodic C-type grids

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1991-01-01

    A new kind of C-type grid is proposed for turbomachinery flow calculations. This grid is nonperiodic on the wake and results in minimum skewness for cascades with high turning and large camber. Euler and Reynolds averaged Navier-Stokes equations are discretized on this type of grid using a finite volume approach. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. Jameson's explicit Runge-Kutta scheme is adopted for the integration in time, and computational efficiency is achieved through accelerating strategies such as multigriding and residual smoothing. A detailed numerical study was performed for a turbine rotor and for a vane. A grid dependence analysis is presented and the effect of artificial dissipation is also investigated. Comparison of calculations with experiments clearly demonstrates the advantage of the proposed grid.

  2. a Time-Dependent Many-Electron Approach to Atomic and Molecular Interactions

    NASA Astrophysics Data System (ADS)

    Runge, Keith

    A new methodology is developed for the description of electronic rearrangement in atomic and molecular collisions. Using the eikonal representation of the total wavefunction, time-dependent equations are derived for the electronic densities within the time-dependent Hartree-Fock approximation. An averaged effective potential which ensures time reversal invariance is used to describe the effect of the fast electronic transitions on the slower nuclear motions. Electron translation factors (ETF) are introduced to eliminate spurious asymptotic couplings, and a local ETF is incorporated into a basis of traveling atomic orbitals. A reference density is used to describe local electronic relaxation and to account for the time propagation of fast and slow motions, and is shown to lead to an efficient integration scheme. Expressions for time-dependent electronic populations and polarization parameters are given. Electronic integrals over Gaussians including ETFs are derived to extend electronic state calculations to dynamical phenomena. Results of the method are in good agreement with experimental data for charge transfer integral cross sections over a projectile energy range of three orders of magnitude in the proton-Hydrogen atom system. The more demanding calculations of integral alignment, state-to-state integral cross sections, and differential cross sections are found to agree well with experimental data, provided care is taken to include ETFs in the calculation of electronic integrals and to choose the appropriate effective potential. The method is found to be in good agreement with experimental data for the calculation of charge transfer integral cross sections and state-to-state integral cross sections in the one-electron heteronuclear Helium(2+)-Hydrogen atom system and in the two-electron system, Hydrogen atom-Hydrogen atom. Time-dependent electronic populations are seen to oscillate rapidly in the midst of a collision event. In particular, multiple exchanges of the electron are seen to occur in the proton-Hydrogen atom system at low collision energies. The concepts and results derived from the approach provide new insight into the dynamics of nuclear screening and electronic rearrangement in atomic collisions.

  3. Sivers and Boer-Mulders observables from lattice QCD.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    B.U. Musch, Ph. Hagler, M. Engelhardt, J.W. Negele, A. Schafer

    We present a first calculation of transverse momentum dependent nucleon observables in dynamical lattice QCD employing non-local operators with staple-shaped, 'process-dependent' Wilson lines. The use of staple-shaped Wilson lines allows us to link lattice simulations to TMD effects determined from experiment, and in particular to access non-universal, naively time-reversal odd TMD observables. We present and discuss results for the generalized Sivers and Boer-Mulders transverse momentum shifts for the SIDIS and DY cases. The effect of staple-shaped Wilson lines on T-even observables is studied for the generalized tensor charge and a generalized transverse shift related to the worm gear function g1T. We emphasize the dependence of these observables on the staple extent and the Collins-Soper evolution parameter. Our numerical calculations use an n_f = 2+1 mixed action scheme with domain wall valence fermions on an Asqtad sea and pion masses of 369 MeV as well as 518 MeV.

  4. Entanglement manipulation by a magnetic pulse in Gd3N@C80 endohedral metallofullerenes on a Cu(0 0 1) surface

    NASA Astrophysics Data System (ADS)

    Farberovich, Oleg V.; Gritzaenko, Vyacheslav S.

    2018-04-01

    In this paper we present the results of a theoretical calculation of entanglement within the spin structure of Gd3N@C80 under the influence of rectangular pulses. The study is conducted using a general spin Hamiltonian within the SSNQ (spin system of N qubits) framework. The calculations of entanglement under various pulses are performed using the time-dependent Landau-Lifshitz-Gilbert equation with a spin-spin correlation function. We show that a long rectangular pulse (t = 850 ps) can be used to sustain the entanglement value. This allows us to propose a new algorithm that can be used to address the problem of decoherence in logical scheme optimization.

  5. Parametric spatiotemporal oscillation in reaction-diffusion systems.

    PubMed

    Ghosh, Shyamolina; Ray, Deb Shankar

    2016-03-01

    We consider a reaction-diffusion system in a homogeneous stable steady state. On perturbation by a time-dependent sinusoidal forcing of a suitable scaling parameter the system exhibits parametric spatiotemporal instability beyond a critical threshold frequency. We have formulated a general scheme to calculate the threshold condition for oscillation and the range of unstable spatial modes lying within a V-shaped region reminiscent of Arnold's tongue. Full numerical simulations show that depending on the specificity of nonlinearity of the models, the instability may result in time-periodic stationary patterns in the form of standing clusters or spatially localized breathing patterns with characteristic wavelengths. Our theoretical analysis of the parametric oscillation in reaction-diffusion system is corroborated by full numerical simulation of two well-known chemical dynamical models: chlorite-iodine-malonic acid and Briggs-Rauscher reactions.

  6. Parametric spatiotemporal oscillation in reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Ghosh, Shyamolina; Ray, Deb Shankar

    2016-03-01

    We consider a reaction-diffusion system in a homogeneous stable steady state. On perturbation by a time-dependent sinusoidal forcing of a suitable scaling parameter the system exhibits parametric spatiotemporal instability beyond a critical threshold frequency. We have formulated a general scheme to calculate the threshold condition for oscillation and the range of unstable spatial modes lying within a V-shaped region reminiscent of Arnold's tongue. Full numerical simulations show that depending on the specificity of nonlinearity of the models, the instability may result in time-periodic stationary patterns in the form of standing clusters or spatially localized breathing patterns with characteristic wavelengths. Our theoretical analysis of the parametric oscillation in reaction-diffusion system is corroborated by full numerical simulation of two well-known chemical dynamical models: chlorite-iodine-malonic acid and Briggs-Rauscher reactions.

  7. The generalized scheme-independent Crewther relation in QCD

    NASA Astrophysics Data System (ADS)

    Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.

    2017-07-01

    The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering times the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function D_ns to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering, C_Bjp, at leading twist. A scheme-dependent Δ_CSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both D_ns and the inverse coefficient C_Bjp^{-1} have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, \hat{\alpha}_d(Q) = \sum_{i \ge 1} \hat{\alpha}^i_{g_1}(Q_i), at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Similar scale-fixed commensurate scale relations also connect other physical observables at their physical momentum scales, thus providing convention-independent, fundamental precision tests of QCD.

  8. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms, and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, e.g., hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and that the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gain when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration of the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves. Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
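
    The scalar approximation of the chemical Jacobian by the minimum species destruction timescale can be sketched as follows; the two-species toy source terms, the rate values, and the point-implicit update form are illustrative assumptions rather than the scheme's actual LU-SGS implementation.

```python
import numpy as np

def destruction_timescales(Y, destruction_rates):
    """tau_i = Y_i / max(D_i, eps): time for species i to be consumed at its current destruction rate."""
    return Y / np.maximum(destruction_rates, 1e-30)

def point_implicit_update(Y, production_rates, destruction_rates, dt):
    """Point-implicit update using the scalar 1/tau_min in place of the full chemical Jacobian."""
    tau_min = destruction_timescales(Y, destruction_rates).min()   # stiffest species sets the scale
    omega = production_rates - destruction_rates                    # net source terms
    return Y + dt * omega / (1.0 + dt / tau_min)                    # scalar-Jacobian point-implicit step

# Toy two-species example (illustrative numbers only).
Y = np.array([0.2, 0.8])
production = np.array([5.0, 0.0])
destruction = np.array([0.0, 5.0])
print(point_implicit_update(Y, production, destruction, dt=1e-5))
```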

  9. Description of plasmon-like band in silver clusters: the importance of the long-range Hartree-Fock exchange in time-dependent density-functional theory simulations.

    PubMed

    Rabilloud, Franck

    2014-10-14

    Absorption spectra of Ag20 and Ag55(q) (q = +1, -3) nanoclusters are investigated in the framework of the time-dependent density functional theory in order to analyse the role of the d electrons in plasmon-like band of silver clusters. The description of the plasmon-like band from calculations using density functionals containing an amount of Hartree-Fock exchange at long range, namely, hybrid and range-separated hybrid (RSH) density functionals, is in good agreement with the classical interpretation of the plasmon-like structure as a collective excitation of valence s-electrons. In contrast, using local or semi-local exchange functionals (generalized gradient approximations (GGAs) or meta-GGAs) leads to a strong overestimation of the role of d electrons in the plasmon-like band. The semi-local asymptotically corrected model potentials also describe the plasmon as mainly associated to d electrons, though calculated spectra are in fairly good agreement with those calculated using the RSH scheme. Our analysis shows that a portion of non-local exchange modifies the description of the plasmon-like band.

  10. Quantum dynamics study of H+NH3-->H2+NH2 reaction.

    PubMed

    Zhang, Xu Qiang; Cui, Qian; Zhang, John Z H; Han, Ke Li

    2007-06-21

    We report in this paper a quantum dynamics study of the reaction H+NH3-->NH2+H2 on the potential energy surface of Corchado and Espinosa-Garcia [J. Chem. Phys. 106, 4013 (1997)]. The quantum dynamics calculation employs the semirigid vibrating rotor target model [J. Z. H. Zhang, J. Chem. Phys. 111, 3929 (1999)] and the time-dependent wave packet method to propagate the wave function. Initial-state-specific reaction probabilities are obtained, and an energy correction scheme is employed to account for zero-point energy changes in the degrees of freedom neglected in the dynamics treatment. A tunneling effect is observed in the energy dependence of the reaction probability, similar to that found in the H+CH4 reaction. The influence of rovibrational excitation on the reaction probability and the stereodynamical effect are investigated. Reaction rate constants from the initial ground state are calculated and compared to those from transition state theory and experimental measurement.

  11. Spin-flip transitions and departure from the Rashba model in the Au(111) surface

    NASA Astrophysics Data System (ADS)

    Ibañez-Azpiroz, Julen; Bergara, Aitor; Sherman, E. Ya.; Eiguren, Asier

    2013-09-01

    We present a detailed analysis of the spin-flip excitations induced by a periodic time-dependent electric field in the Rashba prototype Au(111) noble metal surface. Our calculations incorporate the full spinor structure of the spin-split surface states and employ a Wannier-based scheme for the spin-flip matrix elements. We find that the spin-flip excitations associated with the surface states exhibit a strong dependence on the electron momentum magnitude, a feature that is absent in the standard Rashba model [E. I. Rashba, Sov. Phys. Solid State 2, 1109 (1960)]. Furthermore, we demonstrate that the maximum of the calculated spin-flip absorption rate is about twice the model prediction. These results show that, although the Rashba model accurately describes the spectrum and spin polarization, it does not fully account for the dynamical properties of the surface states.

  12. Sivers and Boer-Mulders observables from lattice QCD

    NASA Astrophysics Data System (ADS)

    Musch, B. U.; Hägler, Ph.; Engelhardt, M.; Negele, J. W.; Schäfer, A.

    2012-05-01

    We present a first calculation of transverse momentum-dependent nucleon observables in dynamical lattice QCD employing nonlocal operators with staple-shaped, “process-dependent” Wilson lines. The use of staple-shaped Wilson lines allows us to link lattice simulations to TMD effects determined from experiment, and, in particular, to access nonuniversal, naively time-reversal odd TMD observables. We present and discuss results for the generalized Sivers and Boer-Mulders transverse momentum shifts for the SIDIS and DY cases. The effect of staple-shaped Wilson lines on T-even observables is studied for the generalized tensor charge and a generalized transverse shift related to the worm-gear function g1T. We emphasize the dependence of these observables on the staple extent and the Collins-Soper evolution parameter. Our numerical calculations use an nf=2+1 mixed action scheme with domain wall valence fermions on an Asqtad sea and pion masses 369 MeV as well as 518 MeV.

  13. Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.

    2000-01-01

    The age of air has recently emerged as a diagnostic of atmospheric transport unaffected by chemical parameterizations, and the features in the age distributions computed in models have been interpreted in terms of the models' large-scale circulation field. This study shows, however, that in addition to the simulated large-scale circulation, three-dimensional age calculations can also be affected by the choice of advection scheme employed in solving the tracer continuity equation. Specifically, using the 3.0deg latitude X 3.6deg longitude and 40 vertical level version of the Geophysical Fluid Dynamics Laboratory SKYHI GCM and six online transport schemes ranging from Eulerian through semi-Lagrangian to fully Lagrangian, it will be demonstrated that the oldest ages are obtained using the nondiffusive centered-difference schemes while the youngest ages are computed with a semi-Lagrangian transport (SLT) scheme. The centered-difference schemes are capable of producing ages older than 10 years in the mesosphere, thus eliminating the "young bias" found in previous age-of-air calculations. At this stage, only limited intuitive explanations can be advanced for this sensitivity of age-of-air calculations to the choice of advection scheme. In particular, age distributions computed online with the National Center for Atmospheric Research Community Climate Model (MACCM3) using different varieties of the SLT scheme are substantially older than the SKYHI SLT distribution. The different varieties, including a noninterpolating-in-the-vertical version (which is essentially centered-difference in the vertical), also produce a narrower range of age distributions than the suite of advection schemes employed in the SKYHI model. While additional MACCM3 experiments with a wider range of schemes would be necessary to provide more definitive insights, the older and less variable MACCM3 age distributions can plausibly be interpreted as being due to the semi-implicit semi-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce SLT's interpolation errors that are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
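
    The sensitivity discussed above can be illustrated by advecting a passive tracer with a nondiffusive centered-difference (leapfrog) scheme and with a linear-interpolation semi-Lagrangian scheme; the diffusive smoothing of the latter is the kind of effect that biases transport diagnostics. The 1D periodic setting and parameters are illustrative assumptions, not the SKYHI or MACCM3 configurations.

```python
import numpy as np

n, v, dt, nsteps = 200, 1.0, 0.004, 500
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-((x - 0.3) / 0.05) ** 2)               # smooth tracer pulse

# Centered-difference leapfrog (nondiffusive, mildly dispersive); started with one upwind Euler step.
c = v * dt / dx                                      # Courant number (0.8 here)
u_prev = u0.copy()
u_now = u0 - c * (u0 - np.roll(u0, 1))
for _ in range(nsteps - 1):
    u_next = u_prev - c * (np.roll(u_now, -1) - np.roll(u_now, 1))
    u_prev, u_now = u_now, u_next

# Semi-Lagrangian transport with linear interpolation (damps small scales).
xd = (x - v * dt) % 1.0                              # departure points (constant wind)
j = np.floor(xd / dx).astype(int) % n
w = (xd - j * dx) / dx                               # interpolation weights
u_sl = u0.copy()
for _ in range(nsteps):
    u_sl = (1.0 - w) * u_sl[j] + w * u_sl[(j + 1) % n]

print("leapfrog peak:", u_now.max(), "  semi-Lagrangian peak:", u_sl.max())
```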

  14. Viscous compressible flow direct and inverse computation and illustrations

    NASA Technical Reports Server (NTRS)

    Yang, T. T.; Ntone, F.

    1986-01-01

    An algorithm for laminar and turbulent viscous compressible two-dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations, for Couette flow, developing pipe flow, an isolated airfoil, two-dimensional compressor cascade flow, and segmental compressor blade design, are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size, and the choice of the artificial viscosity. The design feature of the algorithm, an iterative scheme to correct geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required for treating transonic flows, where local shock waves may be involved.

  15. Optimum Adaptive Modulation and Channel Coding Scheme for Frequency Domain Channel-Dependent Scheduling in OFDM Based Evolved UTRA Downlink

    NASA Astrophysics Data System (ADS)

    Miki, Nobuhiko; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao

    In the Evolved UTRA (UMTS Terrestrial Radio Access) downlink, Orthogonal Frequency Division Multiplexing (OFDM) based radio access was adopted because of its inherent immunity to multipath interference and its flexible accommodation of different spectrum arrangements. This paper presents the optimum adaptive modulation and channel coding (AMC) scheme when multiple resource blocks (RBs) are simultaneously assigned to the same user, assuming frequency- and time-domain channel-dependent scheduling in the downlink OFDMA radio access with single-antenna transmission. We start by presenting selection methods for the modulation and coding scheme (MCS) employing mutual information, both for RB-common and RB-dependent modulation schemes. Simulation results show that, irrespective of the application of power adaptation to RB-dependent modulation, the improvement in the achievable throughput of the RB-dependent modulation scheme compared to that of the RB-common modulation scheme is slight, i.e., 4 to 5%. In addition, the number of required control signaling bits in the RB-dependent modulation scheme is greater than that of the RB-common modulation scheme. Therefore, we conclude that the RB-common modulation and channel coding rate scheme is preferred when multiple RBs of the same coded stream are assigned to one user in the case of single-antenna transmission.
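
    A minimal sketch of RB-common MCS selection driven by mutual information: per-RB SNRs are mapped to a per-symbol mutual information estimate, averaged over the assigned RBs, and the highest-throughput MCS whose threshold is met is selected. The capacity-style MI approximation and the MCS/threshold table are illustrative assumptions, not the link-level mapping used in the paper.

```python
import numpy as np

def mutual_information(snr_linear, bits_per_symbol):
    """Rough per-symbol mutual information estimate, capped at the modulation order (illustrative)."""
    return np.minimum(np.log2(1.0 + snr_linear), bits_per_symbol)

# Illustrative MCS table: (name, bits/symbol, code rate, required average MI per symbol).
MCS_TABLE = [
    ("QPSK 1/2",  2, 0.50, 1.0),
    ("QPSK 3/4",  2, 0.75, 1.5),
    ("16QAM 1/2", 4, 0.50, 2.0),
    ("16QAM 3/4", 4, 0.75, 3.0),
    ("64QAM 3/4", 6, 0.75, 4.5),
]

def select_rb_common_mcs(rb_snrs_db):
    """Pick the highest-throughput MCS supported by the MI averaged over all assigned RBs."""
    snr = 10.0 ** (np.asarray(rb_snrs_db) / 10.0)
    best = MCS_TABLE[0]                              # fall back to the most robust MCS
    for name, bits, rate, mi_req in MCS_TABLE:
        avg_mi = mutual_information(snr, bits).mean()   # RB-common: one MCS for all scheduled RBs
        if avg_mi >= mi_req and bits * rate > best[1] * best[2]:
            best = (name, bits, rate, mi_req)
    return best[0]

print(select_rb_common_mcs([12.0, 9.5, 14.0, 7.0]))     # SNRs (dB) of the scheduled RBs
```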

  16. Numerical experiments with a symmetric high-resolution shock-capturing scheme

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    1986-01-01

    Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
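
    A minimal scalar sketch of a TVD advection step of the flux-limited Lax-Wendroff type, shown here only to illustrate the limiter mechanism that suppresses spurious oscillations; it is not the characteristic-based symmetric TVD formulation for the 2D Euler equations discussed above, and the minmod limiter is an illustrative choice.

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_lw_step(u, c):
    """One flux-limited Lax-Wendroff step for u_t + a u_x = 0 with Courant number c = a*dt/dx > 0."""
    du = np.roll(u, -1) - u                        # forward differences, du[j] = u[j+1] - u[j]
    du_lim = minmod(du, np.roll(du, 1))            # limited gradient at face j+1/2
    # Numerical flux at face j+1/2 (already scaled by dt/dx): upwind part plus limited
    # anti-diffusive Lax-Wendroff correction.
    flux = c * u + 0.5 * c * (1.0 - c) * du_lim
    return u - (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)      # square pulse: a hard test for oscillations
for _ in range(250):
    u = limited_lw_step(u, c=0.8)
print(u.min(), u.max())                             # remains essentially within [0, 1]: no spurious oscillations
```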

  17. QCD Resummation for Single Spin Asymmetries

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Xiao, Bo-Wen; Yuan, Feng

    2011-10-01

    We study the transverse momentum dependent factorization for single spin asymmetries in Drell-Yan and semi-inclusive deep inelastic scattering processes at one-loop order. The next-to-leading order hard factors are calculated in the Ji-Ma-Yuan factorization scheme. We further derive the QCD resummation formalisms for these observables following the Collins-Soper-Sterman method. The results are expressed in terms of the collinear correlation functions from initial and/or final state hadrons coupled with the Sudakov form factor containing all order soft-gluon resummation effects. The scheme-independent coefficients are calculated up to one-loop order.

  18. QCD Resummation for Single Spin Asymmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Zhong-Bo; Xiao, Bo-Wen; Yuan, Feng

    We study the transverse momentum dependent factorization for single spin asymmetries in Drell-Yan and semi-inclusive deep inelastic scattering processes at one-loop order. The next-to-leading order hard factors are calculated in the Ji-Ma-Yuan factorization scheme. We further derive the QCD resummation formalisms for these observables following the Collins-Soper-Sterman method. The results are expressed in terms of the collinear correlation functions from initial and/or final state hadrons coupled with the Sudakov form factor containing all order soft-gluon resummation effects. The scheme-independent coefficients are calculated up to one-loop order.

  19. Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei

    2016-09-01

    For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piecewise constant signal. First, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena including polarized motion orientations and periodic response death are discussed.

  20. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated every time step. However, we cannot apply the Do-all or Do-across techniques for parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract advantageous features of static scheduling algorithms to the maximum extent.

  1. Equilibration in the time-dependent Hartree-Fock approach probed with the Wigner distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.

    2011-09-15

    By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral 16O + 16O and 96Zr + 132Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in a dramatic way necessary to obtain complete equilibration.
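
    For reference, the one-body Wigner distribution used in this kind of analysis has the standard form (the authors' normalization conventions may differ):

```latex
\[
f(\mathbf{r},\mathbf{p},t) \;=\; \frac{1}{(2\pi\hbar)^{3}}
\int d^{3}s\;
\rho\!\left(\mathbf{r}+\tfrac{\mathbf{s}}{2},\,\mathbf{r}-\tfrac{\mathbf{s}}{2},\,t\right)
e^{-i\,\mathbf{p}\cdot\mathbf{s}/\hbar},
\]
```

    where ρ is the one-body density matrix built from the occupied time-dependent Hartree-Fock orbitals; expectation values then follow as phase-space integrals of the corresponding Wigner-transformed operators over f(r,p,t), which is how the local quantities mentioned in the abstract are defined.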

  2. Using the Chebychev expansion in quantum transport calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popescu, Bogdan; Rahman, Hasan; Kleinekathöfer, Ulrich, E-mail: u.kleinekathoefer@jacobs-university.de

    2015-04-21

    Irradiation by laser pulses and a fluctuating surrounding liquid environment can, for example, lead to time-dependent effects in the transport through molecular junctions. From the theoretical point of view, time-dependent theories of quantum transport are still challenging. In one of these existing transport theories, the energy-dependent coupling between molecule and leads is decomposed into Lorentzian functions. This trick has successfully been combined with quantum master approaches, hierarchical formalisms, and non-equilibrium Green’s functions. The drawback of this approach is, however, its serious limitation to certain forms of the molecule-lead coupling and to higher temperatures. Tian and Chen [J. Chem. Phys. 137, 204114 (2012)] recently employed a Chebychev expansion to circumvent some of these latter problems. Here, we report on a similar approach also based on the Chebychev expansion but leading to a different set of coupled differential equations using the fact that a derivative of a zeroth-order Bessel function can again be given in terms of Bessel functions. Test calculations show the excellent numerical accuracy and stability of the presented formalism. The time span for which this Chebychev expansion scheme is valid without any restrictions on the form of the spectral density or temperature can be determined a priori.
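
    As a reminder of the two standard mathematical ingredients mentioned above (textbook identities, not the paper's specific working equations): the Chebyshev-Bessel expansion of a complex exponential and the fact that time derivatives of Bessel functions close on Bessel functions again read

```latex
\[
e^{-ixt} \;=\; \sum_{n=0}^{\infty} (2-\delta_{n0})\,(-i)^{n} J_{n}(t)\,T_{n}(x),
\qquad x\in[-1,1],
\qquad
\frac{d}{dt}J_{0}(t) = -J_{1}(t),
\quad
\frac{d}{dt}J_{n}(t) = \tfrac{1}{2}\bigl[J_{n-1}(t)-J_{n+1}(t)\bigr].
\]
```

    Differentiating a Bessel-function expansion in time therefore stays within the same family of functions, which is what allows a closed set of coupled differential equations of the type mentioned in the abstract to be formulated.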

  3. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro, E-mail: stener@univ.trieste.it

    2015-07-14

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The original idea which made the present scheme very efficient consists in the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, allowing an easy calculation of such a matrix as a linear combination of constant matrices with photon-energy-dependent coefficients. The method has been applied to very different systems in nature and size (from H2 to [Au147]−). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam Density Functional code are below 0.2 eV. The new algorithm has the merit not only of calculating the spectrum at any photon energy but also of allowing a deep analysis of the results, in terms of transition contribution maps, the Jacob plasmon scaling factor, and induced density analysis, which have all been implemented.

  4. Receiver design, performance analysis, and evaluation for space-borne laser altimeters and space-to-space laser ranging systems

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic M.; Sun, Xiaoli; Field, Christopher T.

    1995-01-01

    Laser altimeters measure the time of flight of the laser pulses to determine the range of the target. The simplest altimeter receiver consists of a photodetector followed by a leading edge detector. A time interval unit (TIU) measures the time from the transmitted laser pulse to the leading edge of the received pulse as it crosses a preset threshold. However, the ranging error of this simple detection scheme depends on the received pulse amplitude, the pulse shape, and the threshold. In practice, the pulse shape and the amplitude are determined by the target characteristics, which have to be assumed unknown prior to the measurement. The ranging error can be improved if one also measures the pulse width and uses the average of the leading and trailing edges (half pulse width) as the pulse arrival time. The ranging error then becomes independent of the received pulse amplitude and the pulse width as long as the pulse shape is symmetric. The pulse width also gives the slope of the target. The ultimate detection scheme is to digitize the received waveform and calculate the centroid as the pulse arrival time. The centroid detection always gives an unbiased measurement, even for asymmetric pulses. In this report, we analyze the laser altimeter ranging errors for these three detection schemes using the Mars Orbital Laser Altimeter (MOLA) as an example.
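
    The three detection schemes compared above can be summarized in a few lines of code. The sketch below is a schematic illustration (not the MOLA flight algorithm): it estimates the pulse arrival time from a digitized waveform with (i) a leading-edge threshold crossing, (ii) the midpoint of the leading and trailing threshold crossings, and (iii) the waveform centroid.

```python
import numpy as np

def leading_edge_time(t, w, threshold):
    """Time of the first sample crossing the threshold (simple leading-edge detector)."""
    return t[np.argmax(w >= threshold)]

def half_width_time(t, w, threshold):
    """Midpoint of leading and trailing crossings; amplitude-independent for symmetric pulses."""
    above = np.where(w >= threshold)[0]
    return 0.5 * (t[above[0]] + t[above[-1]])

def centroid_time(t, w, baseline=0.0):
    """Centroid of the digitized waveform; unbiased even for asymmetric pulses."""
    s = np.clip(w - baseline, 0.0, None)
    return np.sum(t * s) / np.sum(s)

# Toy example: a slightly asymmetric received pulse (times in ns)
t = np.linspace(0.0, 100.0, 1001)
w = np.exp(-0.5 * ((t - 50.0) / 4.0) ** 2) + 0.3 * np.exp(-0.5 * ((t - 58.0) / 6.0) ** 2)
print(leading_edge_time(t, w, 0.2), half_width_time(t, w, 0.2), centroid_time(t, w))
```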

  5. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    NASA Astrophysics Data System (ADS)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristic of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs is helpful to achieve a preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of effective nonlinearity suppression and is able to provide a better SNR performance even over a large frequency range. The proposed scheme is shown to be effective and easily implemented for real-time photonic applications.

  6. Oscillator strengths, first-order properties, and nuclear gradients for local ADC(2).

    PubMed

    Schütz, Martin

    2015-06-07

    We describe theory and implementation of oscillator strengths, orbital-relaxed first-order properties, and nuclear gradients for the local algebraic diagrammatic construction scheme through second order. The formalism is derived via time-dependent linear response theory based on a second-order unitary coupled cluster model. The implementation presented here is a modification of our previously developed algorithms for Laplace transform based local time-dependent coupled cluster linear response (CC2LR); the local approximations thus are state specific and adaptive. The symmetry of the Jacobian leads to considerable simplifications relative to the local CC2LR method; as a result, a gradient evaluation is about four times less expensive. Test calculations show that in geometry optimizations, usually very similar geometries are obtained as with the local CC2LR method (provided that a second-order method is applicable). As an exemplary application, we performed geometry optimizations on the low-lying singlet states of chlorophyllide a.

  7. A publicly available SSC+EC code.

    NASA Astrophysics Data System (ADS)

    Georganopoulos, M.; Perlman, E. S.; Kazanas, D.; Wingert, B.; Castro, R.

    2004-08-01

    We present a time-dependent one-zone SSC+EC code that takes into account the KN cross section and self-consistently calculates all orders of Compton scattering. In particular, it produces separate results for the first-order Compton component and for the total Compton emission. The kinetic equation is solved using a stable implicit scheme, and the user can select from a range of physically motivated temporal electron injection profiles. The code is written in C, is fully documented, and will soon be publicly available through the Internet, along with a set of IDL visualization routines.

  8. Enabling an Integrated Rate-temporal Learning Scheme on Memristor

    NASA Astrophysics Data System (ADS)

    He, Wei; Huang, Kejie; Ning, Ning; Ramanathan, Kiruthika; Li, Guoqi; Jiang, Yu; Sze, Jiayin; Shi, Luping; Zhao, Rong; Pei, Jing

    2014-04-01

    The learning scheme is the key to the utilization of spike-based computation and the emulation of neural/synaptic behaviors toward the realization of cognition. Biological observations reveal an integrated spike time- and spike rate-dependent plasticity as a function of presynaptic firing frequency. However, this integrated rate-temporal learning scheme has not been realized on any nano device. In this paper, such a scheme is successfully demonstrated on a memristor. Great robustness against spiking rate fluctuation is achieved by waveform engineering with the aid of the good analog properties exhibited by the iron oxide-based memristor. Spike-time-dependent plasticity (STDP) occurs at moderate presynaptic firing frequencies, and spike-rate-dependent plasticity (SRDP) dominates the other regions. This demonstration provides a novel approach to neural coding implementation, which facilitates the development of bio-inspired computing systems.

  9. Chemical transport models: the combined non-local diffusion and mixing schemes, and calculation of in-canopy resistance for dry deposition fluxes.

    PubMed

    Mihailovic, Dragutin T; Alapaty, Kiran; Podrascanin, Zorica

    2009-03-01

    The aim is to improve the parameterization of processes in the atmospheric boundary layer (ABL) and surface layer in air quality and chemical transport models. To do so, an asymmetrical, convective, non-local scheme with varying upward mixing rates is combined with a non-local, turbulent kinetic energy scheme for vertical diffusion (COM). For its design, an empirically derived function depending on the fourth power of the dimensionless height in the ABL is suggested. We also suggest a new method for calculating the in-canopy resistance for dry deposition over a vegetated surface. The upward mixing rate forming the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in the layer, while the downward mixing rates are derived from mass conservation. The vertical eddy diffusivity is parameterized using the mean turbulent velocity scale obtained by vertical integration within the ABL. The in-canopy resistance is calculated by integration of the inverse turbulent transfer coefficient inside the canopy, from the effective ground roughness length to the canopy source height and, further, from the source height to the canopy height. This combination of schemes provides a less rapid mass transport out of the surface layer into other layers, during convective and non-convective periods, than other local and non-local schemes parameterizing mixing processes in the ABL. The suggested method for calculating the in-canopy resistance for dry deposition over a vegetated surface differs remarkably from the commonly used one, particularly over forest vegetation. In this paper, we studied the performance of a non-local, turbulent kinetic energy scheme for vertical diffusion combined with a non-local, convective mixing scheme with varying upward mixing in the atmospheric boundary layer (COM) and its impact on the concentration of pollutants calculated with chemical and air-quality models. In addition, this scheme was compared with a commonly used, local, eddy-diffusivity scheme. Concentrations of NO2 simulated with the COM scheme and the new parameterization of the in-canopy resistance are in general higher and closer to the observations than those obtained with the local eddy-diffusivity scheme (on the order of 15-22%). To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO2) were compared for the years 1999 and 2002. The comparison was made for the entire domain used in simulations performed with the chemical European Monitoring and Evaluation Program Unified model (version UNI-ACID, rv2.0), into which the schemes were incorporated.
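
    Schematically, the in-canopy resistance described above amounts to integrating the inverse of the turbulent transfer coefficient over height (generic notation; the paper's actual in-canopy profile of K(z) differs in detail):

```latex
\[
R_{\mathrm{inc}} \;=\; \int_{z_{0g}}^{h_{s}} \frac{dz}{K(z)} \;+\; \int_{h_{s}}^{h_{c}} \frac{dz}{K(z)},
\]
```

    where z_{0g} is the effective ground roughness length, h_s the canopy source height, h_c the canopy height, and K(z) the turbulent transfer coefficient inside the canopy.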

  10. Reduced Equations for Calculating the Combustion Rates of Jet-A and Methane Fuel

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2003-01-01

    Simplified kinetic schemes for Jet-A and methane fuels were developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) that is being developed at Glenn. These kinetic schemes presented here result in a correlation that gives the chemical kinetic time as a function of initial overall cell fuel/air ratio, pressure, and temperature. The correlations would then be used with the turbulent mixing times to determine the limiting properties and progress of the reaction. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentration of carbon monoxide as a function of fuel air ratio, pressure, and temperature. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates and the values obtained from the equilibrium correlations were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx were obtained for both Jet-A fuel and methane.

  11. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay Derivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.
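
    To make the idea concrete, the sketch below (a minimal illustration, not the author's original formulation) applies alternating explicit forward and backward pseudo-time sweeps to a 1-D two-point boundary value problem u'' + c u = f whose discrete operator has eigenvalues of both signs, so a plain forward false-transient iteration would diverge. The combined forward-backward amplification factor for an eigenvalue λ is 1 - (λΔτ)², which damps every mode with 0 < |λΔτ| < √2.

```python
import numpy as np

# u'' + c*u = f on (0, 1), u(0) = u(1) = 0.  With c = 20 the discrete operator
# A = D2 + c*I has one positive eigenvalue, so a forward false-transient iteration diverges.
n, c = 11, 20.0
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)

def residual(u):
    r = np.zeros_like(u)
    r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + c * u[1:-1] - f[1:-1]
    return r                      # boundary values stay fixed at zero

dt = 0.2 * h**2                   # explicit limit set by the stiffest (most negative) mode
u = np.zeros(n)
for sweep in range(100000):
    u += dt * residual(u)         # forward pseudo-time step
    u -= dt * residual(u)         # backward pseudo-time step damps the growing mode too
    if np.max(np.abs(residual(u))) < 1e-6:
        break

# Compare with a direct solve of the same discrete boundary value problem
A = (np.diag(np.full(n - 2, -2.0)) + np.diag(np.ones(n - 3), 1) + np.diag(np.ones(n - 3), -1)) / h**2 \
    + c * np.eye(n - 2)
u_ref = np.zeros(n)
u_ref[1:-1] = np.linalg.solve(A, f[1:-1])
print(sweep, np.max(np.abs(u - u_ref)))
```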

  12. Time-dependent multi-dimensional simulation studies of the electron output scheme for high power FELs

    NASA Astrophysics Data System (ADS)

    Hahn, S. J.; Fawley, W. M.; Kim, K. J.; Edighoffer, J. A.

    1994-12-01

    The authors examine the performance of the so-called electron output scheme recently proposed by the Novosibirsk group. In this scheme, the key role of the FEL oscillator is to induce bunching, while an external undulator, called the radiator, then outcouples the bunched electron beam to optical energy via coherent emission. The level of the intracavity power in the oscillator is kept low by employing a transverse optical klystron (TOK) configuration, thus avoiding excessive thermal loading on the cavity mirrors. Time-dependent effects are important in the operation of the electron output scheme because high gain in the TOK oscillator leads to sideband instabilities and chaotic behavior. The authors have carried out an extensive simulation study by using 1D and 2D time-dependent codes and find that proper control of the oscillator cavity detuning and cavity loss results in high output bunching with a narrow spectral bandwidth. Large cavity detuning in the oscillator and tapering of the radiator undulator are necessary for optimum output power.

  13. Stable isomers and electronic, vibrational, and optical properties of WS2 nano-clusters: A first-principles study

    NASA Astrophysics Data System (ADS)

    Hafizi, Roohollah; Hashemifar, S. Javad; Alaei, Mojtaba; Jangrouei, MohammadReza; Akbarzadeh, Hadi

    2016-12-01

    In this paper, we employ an evolutionary algorithm along with full-potential density functional theory (DFT) computations to perform a comprehensive search for the stable structures of stoichiometric (WS2)n nano-clusters (n = 1-9), using three different exchange-correlation functionals. Our results suggest that n = 5 and 8 are possible candidates for the low-temperature magic sizes of WS2 nano-clusters, while at temperatures above 500 K, n = 7 exhibits a relative stability comparable with n = 8. The electronic properties and energy gap of the lowest energy isomers were computed within several schemes, including the semilocal Perdew-Burke-Ernzerhof and Becke-Lee-Yang-Parr functionals, the hybrid B3LYP functional, the many-body DFT+GW approach, the ΔSCF method, and time-dependent DFT calculations. Vibrational spectra of the lowest lying isomers, computed by the force constant method, are used to address IR spectra and the thermal free energy of the clusters. A time-dependent density functional calculation in the real-time domain is applied to determine the full absorption spectra and optical gap of the lowest energy isomers of the WS2 nano-clusters.

  14. Time-dependent density functional theory for open systems with a positivity-preserving decomposition scheme for environment spectral functions

    NASA Astrophysics Data System (ADS)

    Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung

    2015-04-01

    Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, inhomogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of the surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulations is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.
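
    One way to see the appeal of squaring is that any spectral function assembled from squared Lorentzian terms is manifestly non-negative, e.g. (a schematic form; the parametrization actually used in the paper may differ),

```latex
\[
J(\epsilon) \;=\; \sum_{k}\left[\frac{\eta_{k} W_{k}}{(\epsilon-\Omega_{k})^{2}+W_{k}^{2}}\right]^{2} \;\ge\; 0,
\]
```

    whereas a plain sum of Lorentzians with fitted (possibly negative) weights can dip below zero and thereby spoil the positive semi-definiteness of the environment spectral matrix that the decomposition is meant to preserve.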

  15. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner necessitating exploring further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. Then it discusses a geometrical interpretation of the residual correction schemes. Finally some results of the current investigation are presented.

  16. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner necessitating exploring further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. Then it discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.

  17. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
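
    The non-uniform sampling at the heart of the APS idea can be illustrated independently of the GPU engine. The sketch below is a schematic Python rendering (not the authors' library): a Monte Carlo history budget is split across pencil-beam spots in proportion to the current optimization intensities, with a small floor so that low-intensity spots are never starved of statistics. The floor fraction is an arbitrary illustrative choice.

```python
import numpy as np

def allocate_histories(spot_intensities, total_histories, floor_fraction=0.02):
    """Split a Monte Carlo history budget across spots, favoring high-intensity spots."""
    w = np.clip(np.asarray(spot_intensities, dtype=float), 0.0, None)
    floor = int(floor_fraction * total_histories / len(w))      # minimum per spot
    remaining = total_histories - floor * len(w)
    if w.sum() > 0:
        proportional = np.floor(remaining * w / w.sum()).astype(int)
    else:
        proportional = np.zeros(len(w), dtype=int)
    counts = proportional + floor
    counts[np.argmax(w)] += total_histories - counts.sum()      # hand leftovers to the strongest spot
    return counts

# Example: spot intensities from one optimization iteration (arbitrary units)
print(allocate_histories([0.1, 2.5, 7.0, 0.0, 1.4], total_histories=100000))
```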

  18. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  19. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  20. Comparison of different interpolation operators including nonlinear subdivision schemes in the simulation of particle trajectories

    NASA Astrophysics Data System (ADS)

    Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques

    2013-03-01

    In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, and especially on the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation. Theoretical results are provided. Then, we show, regardless of the interpolation method, the need to use a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
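
    To make the comparison above concrete, the sketch below contrasts the classical linear 4-point interpolatory rule with a PPH-type nonlinear variant in which the arithmetic mean of the two neighboring second differences is replaced by a sign-guarded harmonic mean. This is the form commonly quoted in the subdivision literature and may differ in detail from the exact operators used in the paper.

```python
import numpy as np

def harmonic_mean(a, b):
    """Sign-guarded harmonic mean used by PPH-type rules: zero when a and b differ in sign."""
    return 0.0 if a * b <= 0.0 else 2.0 * a * b / (a + b)

def midpoint_linear(f, j):
    """Classical linear 4-point rule: (-f[j-1] + 9 f[j] + 9 f[j+1] - f[j+2]) / 16."""
    d0 = f[j + 1] - 2.0 * f[j] + f[j - 1]       # second difference at j
    d1 = f[j + 2] - 2.0 * f[j + 1] + f[j]       # second difference at j+1
    return 0.5 * (f[j] + f[j + 1]) - 0.125 * 0.5 * (d0 + d1)

def midpoint_pph(f, j):
    """PPH-type nonlinear rule: same stencil, harmonic mean of the second differences."""
    d0 = f[j + 1] - 2.0 * f[j] + f[j - 1]
    d1 = f[j + 2] - 2.0 * f[j + 1] + f[j]
    return 0.5 * (f[j] + f[j + 1]) - 0.125 * harmonic_mean(d0, d1)

# A step profile: in the interval just left of the jump the linear rule undershoots
# below the data range (Gibbs-type oscillation) while the PPH rule does not.
x = np.linspace(0.0, 1.0, 11)
f = np.where(x < 0.45, 0.0, 1.0)
print("linear:", midpoint_linear(f, 3), " pph:", midpoint_pph(f, 3))
```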

  1. Stability of fluctuating and transient aggregates of amphiphilic solutes in aqueous binary mixtures: Studies of dimethylsulfoxide, ethanol, and tert-butyl alcohol

    NASA Astrophysics Data System (ADS)

    Banerjee, Saikat; Bagchi, Biman

    2013-10-01

    In aqueous binary mixtures, amphiphilic solutes such as dimethylsulfoxide (DMSO), ethanol, tert-butyl alcohol (TBA), etc., are known to form aggregates (or large clusters) at small to intermediate solute concentrations. These aggregates are transient in nature. Although the system remains homogeneous on macroscopic length and time scales, the microheterogeneous aggregation may profoundly affect the properties of the mixture in several distinct ways, particularly if the survival times of the aggregates are longer than density relaxation times of the binary liquid. Here we propose a theoretical scheme to quantify the lifetime and thus the stability of these microheterogeneous clusters, and apply the scheme to calculate the same for water-ethanol, water-DMSO, and water-TBA mixtures. We show that the lifetime of these clusters can range from less than a picosecond (ps) for ethanol clusters to few tens of ps for DMSO and TBA clusters. This helps explaining the absence of a strong composition dependent anomaly in water-ethanol mixtures but the presence of the same in water-DMSO and water-TBA mixtures.

  2. Phantom-GRAPE: Numerical software library to accelerate collisionless N-body simulation with SIMD instruction set on x86 architecture

    NASA Astrophysics Data System (ADS)

    Tanikawa, Ataru; Yoshikawa, Kohji; Nitadori, Keigo; Okamoto, Takashi

    2013-02-01

    We have developed a numerical software library for collisionless N-body simulations named "Phantom-GRAPE" which highly accelerates force calculations among particles by use of a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). In our library, not only Newton's forces, but also central forces with an arbitrary shape f(r), which have a finite cutoff radius rcut (i.e. f(r)=0 at r>rcut), can be quickly computed. In computing such central forces with an arbitrary force shape f(r), we refer to a pre-calculated look-up table. We also present a new scheme to create the look-up table whose binning is optimal to keep good accuracy in computing forces and whose size is small enough to avoid cache misses. Using an Intel Core i7-2600 processor, we measure the performance of our library for both Newton's forces and the arbitrarily shaped central forces. In the case of Newton's forces, we achieve 2×10^9 interactions per second with one processor core (or 75 GFLOPS if we count 38 operations per interaction), which is 20 times higher than the performance of an implementation without any explicit use of SIMD instructions, and 2 times higher than that with the SSE instructions. With four processor cores, we obtain a performance of 8×10^9 interactions per second (or 300 GFLOPS). In the case of the arbitrarily shaped central forces, we can calculate 1×10^9 and 4×10^9 interactions per second with one and four processor cores, respectively. The performance with one processor core is 6 times and 2 times higher than those of the implementations without any use of SIMD instructions and with the SSE instructions, respectively. These performances depend only weakly on the number of particles, irrespective of the force shape. This is in contrast with the fact that the performance of force calculations accelerated by graphics processing units (GPUs) depends strongly on the number of particles. The substantially weaker dependence of the performance on the number of particles is well suited to collisionless N-body simulations, since these simulations are usually performed with sophisticated N-body solvers such as Tree- and TreePM-methods combined with an individual timestep scheme. We conclude that collisionless N-body simulations accelerated with our library have a significant advantage over those accelerated by GPUs, especially on massively parallel environments.
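
    The look-up table idea mentioned above can be sketched in a few lines (schematic numpy code, not the vectorized AVX kernels of Phantom-GRAPE): a cutoff central force is pre-tabulated as f(r)/r on a grid in r², so that each pair evaluation needs only a squared distance, an index computation, and a linear interpolation, with no square root or special-function call in the inner loop. The Yukawa-like force shape and the table size are illustrative choices.

```python
import numpy as np

r_cut, n_bins = 3.0, 4096
r2_grid = np.linspace(0.0, r_cut**2, n_bins)
r_grid = np.sqrt(r2_grid)
# Tabulate f(r)/r (softened near r = 0 for this sketch) so F_vec = (f/r) * dx needs no sqrt.
f_over_r_table = np.exp(-r_grid) / np.maximum(r_grid, r_grid[1])
dr2 = r2_grid[1] - r2_grid[0]

def pair_forces(pos):
    """Force on each particle from all others, via the r^2-indexed look-up table."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        dx = pos - pos[i]
        r2 = np.einsum("ij,ij->i", dx, dx)
        mask = (r2 > 0.0) & (r2 < r_cut**2)          # finite cutoff: f(r) = 0 beyond r_cut
        idx = (r2[mask] / dr2).astype(int)
        frac = r2[mask] / dr2 - idx
        f_over_r = (1.0 - frac) * f_over_r_table[idx] \
                   + frac * f_over_r_table[np.minimum(idx + 1, n_bins - 1)]
        forces[i] = np.sum(f_over_r[:, None] * dx[mask], axis=0)
    return forces

pos = np.random.default_rng(1).uniform(0.0, 5.0, size=(256, 3))
print(pair_forces(pos)[0])
```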

  3. Energy efficiency analysis of two-sided feed scheme of DC traction network with high asymmetry of feeders parameters

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Sopov, V. I.

    2017-10-01

    In this study, using the example of a traction network area with high asymmetry of the power supply parameters, a procedure for the comparative assessment of power losses in a DC traction network under parallel and traditional separate operating modes of the traction substation feeders is presented. Experimental measurements were carried out under both modes of operation. The statistically processed measurement results showed a decrease of the power losses in the contact network and an increase in the feeders. The changes proved to be significant, which demonstrates the importance of the potential effects when converting traction network areas to parallel feeder operation. An analytical method for calculating the average power losses for different feed schemes of the traction network was developed. On its basis, the dependences of the relative losses were obtained by varying the difference in feeder voltages. The calculation results showed that the transition to a two-sided feed scheme is not justified for the considered traction network area. A larger reduction in the total power loss can be obtained with a smaller difference in the feeders’ resistance and/or a more symmetrical sectioning scheme of the contact network.

  4. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  5. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  6. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Hixon, Duane; Sankar, L. N.

    1993-01-01

    During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
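
    As an illustration of the Newton-GMRES idea discussed above (a generic matrix-free sketch, not the flow solver described in this report), each Newton step solves the linearized system with GMRES, approximating the Jacobian-vector product by a finite difference of the residual so that the Jacobian is never formed explicitly. The small algebraic test system is made up for the example.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, u0, tol=1e-10, max_newton=20, eps=1e-7):
    """Matrix-free Newton iteration: J(u) v is approximated by (F(u + eps*v) - F(u)) / eps."""
    u = u0.copy()
    for k in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
        du, info = gmres(J, -r)          # inner Krylov solve of J du = -F(u)
        u = u + du
    return u, k

# Made-up nonlinear test system: u_i^3 + 2 u_i - cos(i) = 0
def F(u):
    return u**3 + 2.0 * u - np.cos(np.arange(u.size))

u, iters = newton_gmres(F, np.zeros(10))
print(iters, np.linalg.norm(F(u)))
```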

  7. Time-dependent quantum transport: An efficient method based on Liouville-von-Neumann equation for single-electron density matrix

    NASA Astrophysics Data System (ADS)

    Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua

    2012-07-01

    Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von-Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to the conventional master equation approaches, our method is much more efficient as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.

  8. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.

  9. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages are represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.
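
    Schematically, the azimuthal treatment described above replaces an explicitly angle-resolved radial transverse leakage by a truncated Fourier series in the azimuthal angle (generic notation; the precise moment definitions in MPACT may differ):

```latex
\[
TL(z,\alpha)\;\approx\;TL_{0}(z)+\sum_{m=1}^{M}\bigl[a_{m}(z)\cos(m\alpha)+b_{m}(z)\sin(m\alpha)\bigr],
\qquad
a_{m}(z)=\frac{1}{\pi}\int_{0}^{2\pi}\!TL(z,\alpha)\cos(m\alpha)\,d\alpha,
\quad
b_{m}(z)=\frac{1}{\pi}\int_{0}^{2\pi}\!TL(z,\alpha)\sin(m\alpha)\,d\alpha,
\]
```

    so the axial solver only needs to carry a small number of z-dependent moments rather than a fully angle-resolved leakage, which is where the run-time and memory savings come from.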

  10. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE PAGES

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    2017-01-12

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages are represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.

  11. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing the high speed civil transport (HSCT) plane is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill's) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature; the net radiated sound is the net result of cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high order finite difference method. Time accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about one million. Our numerical model is based on the 2-4 scheme by Gottlieb & Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations. This scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. These computations were made for the near-nozzle-exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.

  12. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

    The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation Launch System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated using an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the plume species evolution, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging, capturing the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  13. Notes on the ExactPack Implementation of the DSD Rate Stick Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaul, Ann

    It has been shown above that the discretization scheme implemented in the ExactPack solver for the DSD Rate Stick equation is consistent with the Rate Stick PDE. In addition, a stability analysis has provided a CFL condition for a stable time step. Together, consistency and stability imply convergence of the scheme, which is expected to be close to first-order in time and second-order in space. It is understood that the nonlinearity of the underlying PDE will affect this rate somewhat. In the solver I implemented in ExactPack, I used the one-sided boundary condition described above at the outer boundary. In addition, I used 80% of the time step calculated in the stability analysis above. By making these two changes, I was able to implement a solver that calculates the solution without any arbitrary limits placed on the values of the curvature at the boundary. Thus, the calculation is driven directly by the conditions at the boundary as formulated in the DSD theory. The chosen scheme is completely coherent and defensible from a mathematical standpoint.

  14. Sixth- and eighth-order Hermite integrator for N-body simulations

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro

    2008-10-01

    We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for the force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme in most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
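
    The predictor step of such Hermite schemes is simply a truncated Taylor series in the stored derivatives of the acceleration; the sketch below (illustrative only, omitting the force evaluation and the corrector) predicts position, velocity, and acceleration using jerk, snap, and crackle.

```python
import numpy as np

def hermite_predict(x, v, a, j, s, c, dt):
    """Taylor-series predictor using derivatives of the acceleration up to third order
    (jerk j, snap s, crackle c), as stored by sixth-/eighth-order Hermite integrators."""
    x_p = x + dt * (v + dt * (a / 2 + dt * (j / 6 + dt * (s / 24 + dt * c / 120))))
    v_p = v + dt * (a + dt * (j / 2 + dt * (s / 6 + dt * c / 24)))
    a_p = a + dt * (j + dt * (s / 2 + dt * c / 6))
    return x_p, v_p, a_p

# One particle in 3-D with made-up derivative values (a real code evaluates these from
# the pairwise forces and then applies the corrector).
x = np.array([1.0, 0.0, 0.0]); v = np.array([0.0, 1.0, 0.0])
a = np.array([-1.0, 0.0, 0.0]); j = np.array([0.0, -1.0, 0.0])
s = np.array([1.0, 0.0, 0.0]); c = np.array([0.0, 1.0, 0.0])
print(hermite_predict(x, v, a, j, s, c, dt=0.01))
```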

  15. Testing time-dependent density functional theory with depopulated molecular orbitals for predicting electronic excitation energies of valence, Rydberg, and charge-transfer states and potential energies near a conical intersection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaohong L.; Truhlar, Donald G., E-mail: truhlar@umn.edu

    2014-09-14

    Kohn-Sham (KS) time-dependent density functional theory (TDDFT) with most exchange-correlation functionals is well known to systematically underestimate the excitation energies of Rydberg and charge-transfer excited states of atomic and molecular systems. To improve the description of Rydberg states within the KS TDDFT framework, Gaiduk et al. [Phys. Rev. Lett. 108, 253005 (2012)] proposed a scheme that may be called HOMO depopulation. In this study, we tested this scheme on an extensive dataset of valence and Rydberg excitation energies of various atoms, ions, and molecules. It is also tested on a charge-transfer excitation of NH3-F2 and on the potential energy curves of NH3 near a conical intersection. We found that the method can indeed significantly improve the accuracy of predicted Rydberg excitation energies while preserving reasonable accuracy for valence excitation energies. However, it does not appear to improve the description of charge-transfer excitations that are severely underestimated by standard KS TDDFT with conventional exchange-correlation functionals, nor does it perform appreciably better than standard TDDFT for the calculation of potential energy surfaces.

  16. Gas flow calculation method of a ramjet engine

    NASA Astrophysics Data System (ADS)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations of a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the realization of the calculation algorithm, a data storage system is proposed that does not depend on the mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. The algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple configurations of air intakes and scramjet models.

  17. New optimization scheme to obtain interaction potentials for oxide glasses

    NASA Astrophysics Data System (ADS)

    Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter

    2018-05-01

    We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of state of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than the previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found a good agreement with the corresponding experimental data.
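
    The optimization target described above can be viewed as a weighted mismatch against the ab initio and experimental reference data; the sketch below shows a schematic objective function of that kind. The weights, grids, and reference arrays are placeholders, not the scheme actually used by the authors.

      import numpy as np

      def cost(rdf_model, rdf_ref, vdos_model, vdos_ref,
               rho_model, rho_ref, w_rdf=1.0, w_vdos=1.0, w_rho=1.0):
          """Weighted least-squares mismatch used to score a trial potential.

          rdf_*  : liquid radial distribution functions on a common r-grid
          vdos_* : glass vibrational densities of state on a common frequency grid
          rho_*  : glass densities over a set of pressures
          """
          return (w_rdf * np.mean((rdf_model - rdf_ref) ** 2)
                  + w_vdos * np.mean((vdos_model - vdos_ref) ** 2)
                  + w_rho * np.mean((rho_model - rho_ref) ** 2))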

  18. The generalized scheme-independent Crewther relation in QCD

    DOE PAGES

    Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; ...

    2017-05-10

    The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbative QCD calculable processes. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton–nucleon scattering times the Adler function, defined from the cross section for electron–positron annihilation into hadrons, has no pQCD radiative corrections. The “Generalized Crewther Relation” relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function ($D_{ns}$) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering ($C_{Bjp}$) at leading twist. A scheme-dependent $\Delta_{CSB}$-term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence in both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both $D_{ns}$ and the inverse coefficient $C^{-1}_{Bjp}$ have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, $\hat{\alpha}_d(Q)=\sum_{i\ge 1}\hat{\alpha}^{i}_{g_1}(Q_i)$, at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Lastly, similar scale-fixed commensurate scale relations also connect other physical observables at their physical momentum scales, thus providing convention-independent, fundamental precision tests of QCD.
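
    Written out schematically (the precise normalization of the conformal-symmetry-breaking term should be taken from the paper itself), the relations discussed above read

      \[
      D_{ns}(a_s)\, C_{Bjp}(a_s) \;=\; 1 + \Delta_{CSB},
      \qquad
      \Delta_{CSB} \;\propto\; \frac{\beta(a_s)}{a_s},
      \]

    so that the product is free of radiative corrections in the conformal limit $\beta = 0$, while after PMC scale-setting the relation takes the scheme-independent form

      \[
      \hat{\alpha}_d(Q) \;=\; \sum_{i \ge 1} \hat{\alpha}^{\,i}_{g_1}(Q_i).
      \]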

  20. Analytical description of generation of the residual current density in the plasma produced by a few-cycle laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silaev, A. A., E-mail: silaev@appl.sci-nnov.ru; Vvedenskii, N. V., E-mail: vved@appl.sci-nnov.ru; University of Nizhny Novgorod, Nizhny Novgorod 603950

    2015-05-15

    When a gas is ionized by a few-cycle laser pulse, some residual current density (RCD) of free electrons remains in the produced plasma after the passage of the laser pulse. This quasi-dc RCD is an initial impetus to plasma polarization and excitation of the plasma oscillations which can radiate terahertz (THz) waves. In this work, an analytical model for the calculation of the RCD excited by a few-cycle laser pulse is developed for the first time. The dependences of the RCD on the carrier-envelope phase (CEP), wavelength, duration, and intensity of the laser pulse are derived. It is shown that the maximum RCD corresponding to the optimal CEP increases with the laser pulse wavelength, which indicates the prospects of using mid-infrared few-cycle laser pulses in schemes for the generation of high-power THz pulses. Analytical formulas for the optimal pulse intensity and the maximum efficiency of excitation of the RCD are obtained. Based on a numerical solution of the 3D time-dependent Schrödinger equation for hydrogen atoms, the dependence of the RCD on the CEP is calculated over a wide range of wavelengths. High accuracy of the analytical formulas is demonstrated for laser pulse parameters corresponding to the tunneling regime of ionization.

  1. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
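
    The recurrences referred to above are easiest to see in the scalar tridiagonal case. The classic Thomas algorithm below makes explicit the sequential forward and backward sweeps whose data dependencies the partitioning and scheduling schemes must work around; the block tri- and penta-diagonal solvers discussed in the paper generalize this pattern.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub-, main- and super-diagonals
          a, b, c and right-hand side d (a[0] is ignored).  The forward
          elimination and back substitution are inherently sequential
          recurrences, which is the source of the global data dependencies
          discussed above."""
          n = len(d)
          cp = np.empty(n)
          dp = np.empty(n)
          cp[0] = c[0] / b[0]
          dp[0] = d[0] / b[0]
          for i in range(1, n):                      # forward sweep
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):             # back substitution
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x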

  2. Stress and Fracture Analyses Under Elastic-plastic and Creep Conditions: Some Basic Developments and Computational Approaches

    NASA Technical Reports Server (NTRS)

    Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integration of stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta's may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed and the potential of the new schemes for suppressing frame dependence of the numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.

  3. Radiation of sound from unflanged cylindrical ducts

    NASA Technical Reports Server (NTRS)

    Hartharan, S. L.; Bayliss, A.

    1983-01-01

    Calculations of sound radiated from unflanged cylindrical ducts are presented. The numerical simulation models the problem of an aero-engine inlet. The time dependent linearized Euler equations are solved from a state of rest until a harmonic solution is attained. A fourth order accurate finite difference scheme is used and solutions are obtained from a fully vectorized Cyber-203 computer program. Cases of both plane waves and spin modes are treated. Spin modes model the sound generated by a turbofan engine. Boundary conditions for both plane waves and spin modes are treated. Solutions obtained are compared with experiments conducted at NASA Langley Research Center.

  4. Low-latency optical parallel adder based on a binary decision diagram with wavelength division multiplexing scheme

    NASA Astrophysics Data System (ADS)

    Shinya, A.; Ishihara, T.; Inoue, K.; Nozaki, K.; Kita, S.; Notomi, M.

    2018-02-01

    We propose an optical parallel adder based on a binary decision diagram that can calculate simply by propagating light through electrically controlled optical pass gates. The CARRY and CARRY operations are multiplexed in one circuit by a wavelength division multiplexing scheme to reduce the number of optical elements, and only a single gate constitutes the critical path for one digit calculation. The processing time reaches picoseconds per digit when we use 100-μm-long optical pass gates, which is ten times faster than a CMOS circuit.

  5. Effect of the scheme of plasmachemical processes on the calculated characteristics of a barrier discharge in xenon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avtaeva, S. V.; Kulumbaev, E. B.

    2008-06-15

    The dynamics of a repetitive barrier discharge in xenon at a pressure of 400 Torr is simulated using a one-dimensional drift-diffusion model. The thicknesses of identical barriers with a dielectric constant of 4 are 2 mm, and the gap length is 4 mm. The discharge is fed with an 8-kV ac voltage at a frequency of 25 or 50 kHz. The development of the ionization wave and the breakdown and afterglow phases of a barrier discharge are analyzed using two different kinetic schemes of elementary processes in a xenon plasma. It is shown that the calculated waveforms of the discharge voltage and current, the instant of breakdown, and the number of breakdowns per voltage half-period depend substantially on the properties of the kinetic scheme of plasmachemical processes.

  6. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step for the calculation of the quadrature coefficients, exploiting the symmetry of the integrand function and integral formulas for strongly oscillating functions, was applied. The problem of a force acting on the end of a poroelastic prismatic console was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by the modified scheme shows that the computational efficiency is better with the use of the combined formulas.

  7. Multigrid calculation of three-dimensional viscous cascade flows

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Liou, M.-S.; Povinelli, L. A.

    1991-01-01

    A 3-D code for viscous cascade flow prediction was developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full multigrid method. The Baldwin-Lomax eddy viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.

  9. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists in a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to the time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution, for each MC particle, of a reduced model describing the time evolution of the matter along its flight path.
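
    A toy version of the unsplit idea, under strong simplifying assumptions (one nuclide, exponential depletion, constant particle speed): the optical depth to the next collision is accumulated with a cross section that evolves during the flight itself, rather than being frozen over a splitting step. All names and parameters below are illustrative and not the solver described in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def flight_time(t0, v, sigma0, n0, lam, dt=1e-4, t_max=10.0):
          """Sample the time to the next collision for one MC particle when
          the macroscopic cross section sigma(t) = sigma0 * n(t) follows a
          reduced one-nuclide Bateman model n(t) = n0 * exp(-lam * t),
          integrated on the fly along the flight path."""
          tau_target = -np.log(rng.random())    # sampled optical depth
          tau, t = 0.0, t0
          while tau < tau_target and t - t0 < t_max:
              n_t = n0 * np.exp(-lam * t)       # medium evolves during the flight
              tau += v * sigma0 * n_t * dt      # accumulate optical depth
              t += dt
          return t - t0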

  10. Self-match based on polling scheme for passive optical network monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua

    2018-06-01

    We propose a self-match based on polling scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only the specific length patchcord and two different fiber Bragg gratings with 100% reflectivity. The simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity of the photodetector. We analyze the time-domain relation between reflected pulses and establish the calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.

  11. Comparison of cell centered and cell vertex scheme in the calculation of high speed compressible flows

    NASA Astrophysics Data System (ADS)

    Rahman, Syazila; Yusoff, Mohd. Zamri; Hasini, Hasril

    2012-06-01

    This paper describes a comparison between the cell-centered scheme and the cell-vertex scheme in the calculation of high-speed compressible flow properties. The calculation is carried out using Computational Fluid Dynamics (CFD), in which the mass, momentum and energy equations are solved simultaneously over the flow domain. The geometry under investigation consists of a Binnie and Green convergent-divergent nozzle, and a structured mesh is implemented throughout the flow domain. The finite volume CFD solver employs a second-order accurate central differencing scheme for spatial discretization. In addition, a second-order accurate cell-vertex finite volume spatial discretization is also introduced for comparison. Multi-stage Runge-Kutta time integration is implemented for solving the set of non-linear governing equations with variables stored at the vertices. The artificial dissipation uses second- and fourth-order terms with a pressure switch to detect changes in the pressure gradient. This is important to control the solution stability and to capture shock discontinuities. The results are compared with experimental measurements, and good agreement is obtained for both schemes.
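
    The pressure switch mentioned above is commonly built, in JST-type dissipation models, from a normalized second difference of pressure that activates the second-order smoothing near shocks while a background fourth-order term acts in smooth regions. A one-dimensional sketch of such a sensor follows; the constants k2 and k4 are illustrative, not the values used in the paper.

      import numpy as np

      def jst_dissipation_coeffs(p, k2=0.5, k4=1.0 / 64.0):
          """Per-interior-node second- and fourth-order artificial dissipation
          coefficients driven by a pressure-based shock sensor."""
          # normalized second difference of pressure (shock sensor)
          nu = np.abs(p[2:] - 2.0 * p[1:-1] + p[:-2]) / (p[2:] + 2.0 * p[1:-1] + p[:-2])
          eps2 = k2 * nu                        # activates near strong pressure gradients
          eps4 = np.maximum(0.0, k4 - eps2)     # switched off where eps2 is large
          return eps2, eps4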

  12. Analysis of composite ablators using massively parallel computation

    NASA Technical Reports Server (NTRS)

    Shia, David

    1995-01-01

    In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. The gas storage term is included in the explicit pressure calculation of both problems. Results from the ablative composite plate problem are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions, are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius-type rate equations and (2) pressure-dependent Arrhenius-type rate equations. The numerical results are compared to experimental results and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5. For 32 CPUs, the speedup of performance is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also seems that there is an optimum number of CPUs to use for a given problem.

  13. 2.5D transient electromagnetic inversion with OCCAM method

    NASA Astrophysics Data System (ADS)

    Li, R.; Hu, X.

    2016-12-01

    In the application of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied for imaging over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with the finite-difference time-domain (FDTD) forward method, is mainly implemented with the Nonlinear Conjugate Gradient (NLCG) method. However, the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the iteration may fail to converge. We use the OCCAM inversion method to avoid this weakness. OCCAM inversion is proven to be a more stable and reliable method to image the subsurface 2.5D electrical conductivity. Firstly, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Secondly, we use the OCCAM inversion scheme, with an appropriate objective error functional that we established, to image the 2.5D structure. The data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. Imaging results for an example model have shown that the OCCAM scheme is an efficient inversion method for TEM with the FDTD method. The inversion iterations show good convergence within a few iterations. Summarizing the imaging process, we can make the following conclusions. Firstly, 2.5D imaging in the FDTD system with OCCAM inversion demonstrates that the desired imaging results for the resistivity structure in a homogeneous half-space can be obtained. Secondly, the imaging results usually do not depend strongly on the initial model, but the number of iterations can be reduced distinctly if the background resistivity of the initial model is close to the true model. It is therefore better to set the initial model based on other geologic information in applications. When the background resistivity fits the true model well, imaging the anomalous body needs only a few iteration steps. Finally, the speed of imaging vertical boundaries is slower than that of imaging horizontal boundaries.

  14. Free energies of binding from large-scale first-principles quantum mechanical calculations: application to ligand hydration energies.

    PubMed

    Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton

    2013-08-15

    Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
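
    The one-step MM-to-QM free-energy perturbation correction described above amounts, in its simplest form, to a Zwanzig exponential average of the QM-MM energy gap over MM-sampled configurations; a schematic version is sketched below. The array names and units are placeholders, not the authors' code.

      import numpy as np

      def fep_mm_to_qm(e_qm, e_mm, kT):
          """One-step free-energy perturbation from an MM to a QM potential.

          e_qm, e_mm : energies of the same MM-sampled configurations evaluated
                       with the QM and MM potentials (same units as kT).
          Returns the correction  -kT * ln < exp(-(E_QM - E_MM)/kT) >_MM."""
          dE = np.asarray(e_qm) - np.asarray(e_mm)
          shift = dE.min()   # subtract the minimum for numerical stability
          return shift - kT * np.log(np.mean(np.exp(-(dE - shift) / kT)))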

  15. Time domain numerical calculations of unsteady vortical flows about a flat plate airfoil

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Yu, Ping; Scott, J. R.

    1989-01-01

    A time domain numerical scheme is developed to solve for the unsteady flow about a flat plate airfoil due to imposed upstream, small amplitude, transverse velocity perturbations. The governing equation for the resulting unsteady potential is a homogeneous, constant coefficient, convective wave equation. Accurate solution of the problem requires the development of approximate boundary conditions which correctly model the physics of the unsteady flow in the far field. A uniformly valid far field boundary condition is developed, and numerical results are presented using this condition. The stability of the scheme is discussed, and the stability restriction for the scheme is established as a function of the Mach number. Finally, comparisons are made with the frequency domain calculation by Scott and Atassi, and the relative strengths and weaknesses of each approach are assessed.

  16. a Cell Vertex Algorithm for the Incompressible Navier-Stokes Equations on Non-Orthogonal Grids

    NASA Astrophysics Data System (ADS)

    Jessee, J. P.; Fiveland, W. A.

    1996-08-01

    The steady, incompressible Navier-Stokes (N-S) equations are discretized using a cell vertex, finite volume method. Quadrilateral and hexahedral meshes are used to represent two- and three-dimensional geometries respectively. The dependent variables include the Cartesian components of velocity and pressure. Advective fluxes are calculated using bounded, high-resolution schemes with a deferred correction procedure to maintain a compact stencil. This treatment ensures bounded, non-oscillatory solutions while maintaining low numerical diffusion. The mass and momentum equations are solved with the projection method on a non-staggered grid. The coupling of the pressure and velocity fields is achieved using the Rhie and Chow interpolation scheme modified to provide solutions independent of time steps or relaxation factors. An algebraic multigrid solver is used for the solution of the implicit, linearized equations. A number of test cases are analysed and presented. The standard benchmark cases include a lid-driven cavity, flow through a gradual expansion and laminar flow in a three-dimensional curved duct. Predictions are compared with data, results of other workers and with predictions from a structured, cell-centred, control volume algorithm whenever applicable. Sensitivity of results to the advection differencing scheme is investigated by applying a number of higher-order flux limiters: the MINMOD, MUSCL, OSHER, CLAM and SMART schemes. As expected, the studies indicate that higher-order schemes largely mitigate the diffusion effects of first-order schemes but also show no clear preference among the higher-order schemes themselves with respect to accuracy. The effect of the deferred correction procedure on global convergence is discussed.
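
    As a concrete example of the bounded, high-resolution treatment with deferred correction mentioned above, a limited face value can be assembled as a compact upwind value plus an explicitly lagged higher-order correction. The one-dimensional sketch below uses a minmod limiter and assumes a positive advection velocity; it is an illustration of the idea, not the schemes actually benchmarked.

      import numpy as np

      def minmod(a, b):
          """Minmod slope limiter."""
          return np.where(a * b > 0.0,
                          np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def face_values(u):
          """Limited face values u_{i+1/2} for a positive advection velocity.

          The first term is the compact upwind value kept in the implicit
          operator; the limited slope term is the deferred (explicit)
          higher-order correction."""
          du_minus = u[1:-1] - u[:-2]    # upwind slope
          du_plus = u[2:] - u[1:-1]      # downwind slope
          correction = 0.5 * minmod(du_minus, du_plus)
          return u[1:-1] + correction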

  17. An upwind space-time conservation element and solution element scheme for solving dusty gas flow model

    NASA Astrophysics Data System (ADS)

    Rehman, Asad; Ali, Ishtiaq; Qamar, Shamsul

    An upwind space-time conservation element and solution element (CE/SE) scheme is extended to numerically approximate the dusty gas flow model. Unlike central CE/SE schemes, the current method uses the upwind procedure to derive the numerical fluxes through the inner boundary of conservation elements. These upwind fluxes are utilized to calculate the gradients of flow variables. For comparison and validation, the central upwind scheme is also applied to solve the same dusty gas flow model. The suggested upwind CE/SE scheme resolves the contact discontinuities more effectively and preserves the positivity of flow variables in low density flows. Several case studies are considered and the results of upwind CE/SE are compared with the solutions of central upwind scheme. The numerical results show better performance of the upwind CE/SE method as compared to the central upwind scheme.

  18. First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Nishikawa, Hiroaki

    2014-01-01

    A time-dependent extension of the first-order hyperbolic system method for advection-diffusion problems is introduced. Diffusive/viscous terms are written and discretized as a hyperbolic system, which recovers the original equation in the steady state. The resulting scheme offers advantages over traditional schemes: a dramatic simplification in the discretization, high-order accuracy in the solution gradients, and orders-of-magnitude convergence acceleration. The hyperbolic advection-diffusion system is discretized by the second-order upwind residual-distribution scheme in a unified manner, and the system of implicit-residual-equations is solved by Newton's method over every physical time step. The numerical results are presented for linear and nonlinear advection-diffusion problems, demonstrating solutions and gradients produced to the same order of accuracy, with rapid convergence over each physical time step, typically less than five Newton iterations.
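
    Schematically, the reformulation described above replaces the scalar advection-diffusion equation by a first-order system with a relaxation variable that recovers the solution gradient; in one dimension it can be written as

      \[
      \frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial p}{\partial x},
      \qquad
      \frac{\partial p}{\partial t} = \frac{1}{T_r}\left(\frac{\partial u}{\partial x} - p\right),
      \]

    where the relaxation time $T_r$ is chosen so that $p \to \partial u/\partial x$ in the relaxed (steady) limit and the original advection-diffusion equation is recovered. The exact form and parameter choices used by the authors should be taken from the paper itself.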

  19. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.

  20. Theoretical investigations of quantum correlations in NMR multiple-pulse spin-locking experiments

    NASA Astrophysics Data System (ADS)

    Gerasev, S. A.; Fedorova, A. V.; Fel'dman, E. B.; Kuznetsova, E. I.

    2018-04-01

    Quantum correlations are investigated theoretically in a two-spin system with dipole-dipole interactions in NMR multiple-pulse spin-locking experiments. We consider two schemes of the multiple-pulse spin-locking. The first scheme consists of π/2-pulses only, and the delays between the pulses can differ. The second scheme contains φ-pulses (0 < φ < π) and has equal delays between them. We calculate entanglement for both schemes for an initial separable state. We show that entanglement is absent for the first scheme at equal delays between π/2-pulses at arbitrary temperatures. Entanglement emerges after several periods of the pulse sequence in the second scheme at φ = π/4 at millikelvin temperatures. The necessary number of periods increases with increasing temperature. We demonstrate the dependence of entanglement on the number of periods of the multiple-pulse sequence. Quantum discord is obtained for the first scheme of the multiple-pulse spin-locking experiment at different temperatures.

  1. Flowfield-Dependent Mixed Explicit-Implicit (FDMEI) Algorithm for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Garcia, S. M.; Chung, T. J.

    1997-01-01

    Despite significant achievements in computational fluid dynamics, there still remain many fluid flow phenomena that are not well understood. For example, the prediction of temperature distributions is inaccurate when temperature gradients are high, particularly in shock wave turbulent boundary layer interactions close to the wall. Complexities of fluid flow phenomena include transition to turbulence, relaminarization, separated flows, and transition between viscous and inviscid, incompressible and compressible flows, among others, in all speed regimes. The purpose of this paper is to introduce a new approach, called the Flowfield-Dependent Mixed Explicit-Implicit (FDMEI) method, in an attempt to resolve these difficult issues in Computational Fluid Dynamics (CFD). In this process, a total of six implicitness parameters characteristic of the current flowfield are introduced. They are calculated from the current flowfield or from changes of Mach numbers, Reynolds numbers, Peclet numbers, and Damkoehler numbers (if reacting) at each nodal point and time step. This implies that every nodal point or element is provided with a different or unique numerical scheme according to its current flowfield situation, whether compressible, incompressible, viscous, inviscid, laminar, turbulent, reacting, or nonreacting. In this procedure, discontinuities or fluctuations of any variables between adjacent nodal points are determined accurately. If these implicitness parameters are fixed to certain numbers instead of being calculated from the flowfield information, then practically all currently available finite difference or finite element schemes arise as special cases. Some benchmark problems presented in this paper will show the validity, accuracy, and efficiency of the proposed methodology.

  2. Bending and stretching finite element analysis of anisotropic viscoelastic composite plates

    NASA Technical Reports Server (NTRS)

    Hilton, Harry H.; Yi, Sung

    1990-01-01

    Finite element algorithms have been developed to analyze linear anisotropic viscoelastic plates, with or without holes, subjected to mechanical (bending, tension), temperature, and hygrothermal loadings. The analysis is based on Laplace transforms rather than direct time integrations in order to improve the accuracy of the results and save on extensive computational time and storage. The time dependent displacement fields in the transverse direction for the cross ply and angle ply laminates are calculated and the stacking sequence effects of the laminates are discussed in detail. Creep responses for the plates with or without a circular hole are also studied. The numerical results compare favorably with analytical solutions, i.e. within 1.8 percent for bending and 10(exp -3) percent for tension. The tension results of the present method are compared with those using the direct time integration scheme.

  3. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high performance calculation method to improve the computational efficiency and to expand the problem scale is of great significance for research on the microstructure of the material. A high performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of the three-dimensional phase-field model in a binary alloy under the condition of multi-physical process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced here, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI and GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can clearly improve the computational efficiency of the three-dimensional phase-field simulation, reaching 13 times that of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization gives the better performance, 1.7 times that of the basic multi-GPU model, when 21 GPUs are used.

  4. Scalar self-force on eccentric geodesics in Schwarzschild spacetime: A time-domain computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, Roland

    2007-06-15

    We calculate the self-force acting on a particle with scalar charge moving on a generic geodesic around a Schwarzschild black hole. This calculation requires an accurate computation of the retarded scalar field produced by the moving charge; this is done numerically with the help of a fourth-order convergent finite-difference scheme formulated in the time domain. The calculation also requires a regularization procedure, because the retarded field is singular on the particle's world line; this is handled mode-by-mode via the mode-sum regularization scheme first introduced by Barack and Ori. This paper presents the numerical method, various numerical tests, and a sample of results for mildly eccentric orbits as well as "zoom-whirl" orbits.

  5. Time-dependent multi-dimensional simulation studies of the electron output scheme for high power FELs

    NASA Astrophysics Data System (ADS)

    Hahn, S. J.; Fawley, W. M.; Kim, K.-J.; Edighoffer, J. A.

    1995-04-01

    We examine the performance of the so-called electron output scheme recently proposed by the Novosibirsk group [G.I. Erg et al., 15th Int. Free Electron Laser Conf., The Hague, The Netherlands, 1993, Book of Abstracts p. 50; Preprint Budker INP 93-75]. In this scheme, the key role of the FEL oscillator is to induce bunching, while an external undulator, called the radiator, then outcouples the bunched electron beam to optical energy via coherent emission. The level of the intracavity power in the oscillator is kept low by employing a transverse optical klystron (TOK) configuration, thus avoiding excessive thermal loading on the cavity mirrors. Time-dependent effects are important in the operation of the electron output scheme because high gain in the TOK oscillator leads to sideband instabilities and chaotic behavior. We have carried out an extensive simulation study by using 1D and 2D time-dependent codes and find that proper control of the oscillator cavity detuning and cavity loss results in high output bunching with a narrow spectral bandwidth. Large cavity detuning in the oscillator and tapering of the radiator undulator is necessary for the optimum output power.

  6. Local Fitting of the Kohn-Sham Density in a Gaussian and Plane Waves Scheme for Large-Scale Density Functional Theory Simulations.

    PubMed

    Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg

    2017-05-09

    A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost for the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems dependent on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, intramolecular and intermolecular structure parameters are well reproduced. LRIGPW yields also high quality results for extended condensed phase systems such as liquid water, ice XV, and molecular crystals.

  7. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational times. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axi-symmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of modified code due to the implicit algorithm was demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented creating a basis for a state-of-the-art aerodynamic analysis tool.

  8. I = 2 ππ scattering phase shift from the HAL QCD method with the LapH smearing

    NASA Astrophysics Data System (ADS)

    Kawai, Daisuke; Aoki, Sinya; Doi, Takumi; Ikeda, Yoichi; Inoue, Takashi; Iritani, Takumi; Ishii, Noriyoshi; Miyamoto, Takaya; Nemura, Hidekatsu; Sasaki, Kenji

    2018-04-01

    Physical observables, such as the scattering phase shifts and binding energy, calculated from the non-local HAL QCD potential do not depend on the sink operators used to define the potential. In practical applications, the derivative expansion of the non-local potential is employed, so that physical observables may receive some scheme dependence at a given order of the expansion. In this paper, we compare the I=2ππ scattering phase shifts obtained in the point-sink scheme (the standard scheme in the HAL QCD method) and the smeared-sink scheme (the LapH smearing newly introduced in the HAL QCD method). Although potentials in different schemes have different forms as expected, we find that, for reasonably small smearing size, the resultant scattering phase shifts agree with each other if the next-to-leading-order (NLO) term is taken into account. We also find that the HAL QCD potential in the point-sink scheme has a negligible NLO term for a wide range of energies, which implies good convergence of the derivative expansion, while the potential in the smeared-sink scheme has a non-negligible NLO contribution. The implications of this observation for future studies of resonance channels (such as the I=0 and 1ππ scatterings) with smeared all-to-all propagators are briefly discussed.

  9. Construction of Three Dimensional Solutions for the Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Yefet, A.; Turkel, E.

    1998-01-01

    We consider numerical solutions for the three dimensional time dependent Maxwell equations. We construct a fourth order accurate compact implicit scheme and compare it to the Yee scheme for free space in a box.

  10. Coupling WRF double-moment 6-class microphysics schemes to RRTMG radiation scheme in weather research forecasting model

    DOE PAGES

    Bae, Soo Ya; Hong, Song -You; Lim, Kyo-Sun Sunny

    2016-01-01

    A method to explicitly calculate the effective radius of hydrometeors in the Weather Research Forecasting (WRF) double-moment 6-class (WDM6) microphysics scheme is designed to tackle the physical inconsistency in cloud properties between the microphysics and radiation processes. At each model time step, the calculated effective radii of hydrometeors from the WDM6 scheme are linked to the Rapid Radiative Transfer Model for GCMs (RRTMG) scheme to consider the cloud effects in radiative flux calculation. This coupling effect of cloud properties between the WDM6 and RRTMG algorithms is examined for a heavy rainfall event in Korea during 25–27 July 2011, and it is compared to the results from the control simulation in which the effective radius is prescribed as a constant value. It is found that the derived radii of hydrometeors in the WDM6 scheme are generally larger than the prescribed values in the RRTMG scheme. Consequently, shortwave fluxes reaching the ground (SWDOWN) are increased over less cloudy regions, showing a better agreement with a satellite image. The overall distribution of the 24-hour accumulated rainfall is not affected but its amount is changed. In conclusion, a spurious rainfall peak over the Yellow Sea is alleviated, whereas the local maximum in the central part of the peninsula is increased.
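
    Generically, the effective radius passed from a microphysics scheme to a radiation scheme is the ratio of the third to the second moment of the hydrometeor size distribution; the sketch below computes that moment ratio by quadrature for a gamma-type distribution. The distribution parameters and function name are placeholders, not the WDM6 internals.

      import numpy as np

      def effective_radius(n0, mu, lam, d_max=1e-2, n_pts=2000):
          """Effective radius r_eff = 0.5 * M3 / M2 of a gamma-type size
          distribution N(D) = n0 * D**mu * exp(-lam * D), computed by
          numerical quadrature over diameter D (SI units assumed)."""
          d = np.linspace(1e-9, d_max, n_pts)
          n_d = n0 * d**mu * np.exp(-lam * d)
          m3 = np.trapz(n_d * d**3, d)   # third moment
          m2 = np.trapz(n_d * d**2, d)   # second moment
          return 0.5 * m3 / m2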

  13. ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallberg, J.; Stagg, A.

    2000-10-01

    A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
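
    A sketch of the hash-table bookkeeping implied above: marked elements are bisected on an edge, and the midpoint node is created once per edge by keying a dictionary on the unordered pair of end-node ids, so neighbouring elements can look up the same child node consistently. The names are illustrative; the closure of hanging nodes, the coarsening path, and the inter-processor exchange are omitted.

      def bisect_marked(elements, coords, marked):
          """Refine marked triangles by bisecting their first edge.

          elements : list of (n0, n1, n2) node-id tuples
          coords   : dict node_id -> (x, y)
          marked   : set of element indices to refine"""
          midpoint = {}                        # hash table keyed by an edge
          new_elements = []
          for idx, (n0, n1, n2) in enumerate(elements):
              if idx not in marked:
                  new_elements.append((n0, n1, n2))
                  continue
              key = frozenset((n0, n1))        # unordered edge key
              if key not in midpoint:          # create the mid-edge node once
                  nid = max(coords) + 1
                  coords[nid] = tuple(0.5 * (a + b)
                                      for a, b in zip(coords[n0], coords[n1]))
                  midpoint[key] = nid
              m = midpoint[key]
              new_elements.extend([(n0, m, n2), (m, n1, n2)])
          return new_elements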

  14. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
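
    A toy version of the blocked sparse multiplication idea: matrices are stored as dictionaries of dense multiatom blocks, and only blocks present in both operands contribute, so each contribution is a dense matrix product dispatched to optimized BLAS through numpy. The block sizes and storage layout are illustrative, not the scheme's actual data structures.

      import numpy as np

      def blocked_spmm(A, B):
          """Multiply two block-sparse matrices stored as {(I, J): dense block}.

          Each dense block product goes through numpy's matmul (and hence
          BLAS), which is where the speedup over element-by-element sparse
          kernels comes from."""
          C = {}
          b_by_row = {}                        # index B's blocks by row label
          for (J, K), blk in B.items():
              b_by_row.setdefault(J, []).append((K, blk))
          for (I, J), a_blk in A.items():
              for K, b_blk in b_by_row.get(J, []):
                  prod = a_blk @ b_blk
                  C[(I, K)] = prod if (I, K) not in C else C[(I, K)] + prod
          return C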

  15. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    PubMed

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  16. Multiscale modeling of current-induced switching in magnetic tunnel junctions using ab initio spin-transfer torques

    NASA Astrophysics Data System (ADS)

    Ellis, Matthew O. A.; Stamenova, Maria; Sanvito, Stefano

    2017-12-01

    There exists a significant challenge in developing efficient magnetic tunnel junctions with low write currents for nonvolatile memory devices. With the aim of analyzing potential materials for efficient current-operated magnetic junctions, we have developed a multi-scale methodology combining ab initio calculations of spin-transfer torque with large-scale time-dependent simulations using atomistic spin dynamics. In this work we introduce our multiscale approach, including a discussion on a number of possible schemes for mapping the ab initio spin torques into the spin dynamics. We demonstrate this methodology on a prototype Co/MgO/Co/Cu tunnel junction showing that the spin torques are primarily acting at the interface between the Co free layer and MgO. Using spin dynamics we then calculate the reversal switching times for the free layer and the critical voltages and currents required for such switching. Our work provides an efficient, accurate, and versatile framework for designing novel current-operated magnetic devices, where all the materials details are taken into account.

  17. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
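
    As a rough illustration of the voxel-specific Gaussian idea (not the authors' formulas), the sketch below contrasts a single lateral Gaussian shared by all voxels with per-voxel Gaussians of differing widths; the sigma values are hypothetical placeholders, and the re-initialization of the fluence deviation on the effective surface is not reproduced.

```python
# Illustrative sketch only: lateral primary fluence of a beamlet evaluated with a
# single Gaussian versus voxel-specific Gaussians (one sigma per lateral voxel).
# The sigma values below are placeholders, not values derived by the paper's scheme.
import numpy as np

def gaussian_fluence(x, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

x_voxels = np.linspace(-2.0, 2.0, 9)            # lateral voxel centers (cm)
sigma_single = 0.5                              # one sigma for all voxels (cm)
sigma_voxel = 0.5 + 0.1 * np.abs(x_voxels)      # hypothetical voxel-specific sigmas

f_single = gaussian_fluence(x_voxels, sigma_single)
f_voxel = gaussian_fluence(x_voxels, sigma_voxel)
print(np.round(f_single, 4))
print(np.round(f_voxel, 4))
```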

  18. Verification of kinetic schemes of hydrogen ignition and combustion in air

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; Fedorova, N. N.; Vankova, O. S.; Tropin, D. A.

    2018-03-01

    Three chemical kinetic models for hydrogen combustion in oxygen and three gas-dynamic models for reactive mixture flow behind the initiating SW front were analyzed. The calculated results were compared with experimental data on the dependences of the ignition delay on the temperature and the dilution of the mixture with argon or nitrogen. Based on detailed kinetic mechanisms of nonequilibrium chemical transformations, a mathematical technique for describing the ignition and combustion of hydrogen in air was developed using the ANSYS Fluent code. The problem of ignition of a hydrogen jet fed coaxially into supersonic flow was solved numerically. The calculations were carried out using the Favre-averaged Navier-Stokes equations for a multi-species gas taking into account chemical reactions combined with the k-ω SST turbulence model. The problem was solved in several steps. In the first step, verification of the calculated and experimental data for the three kinetic schemes was performed without considering the conicity of the flow. In the second step, parametric calculations were performed to determine the influence of the conicity of the flow on the mixing and ignition of hydrogen in air using a kinetic scheme consisting of 38 reactions. Three conical supersonic nozzles for a Mach number M = 2 with different expansion angles β = 4°, 4.5°, and 5° were considered.

  19. Benchmarking of calculation schemes in APOLLO2 and COBAYA3 for VVER lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheleva, N.; Ivanov, P.; Todorova, G.

    This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2 generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses transport corrected multi-group diffusion approximation with interface discontinuity factors of Generalized Equivalence Theory (GET) or Black Box Homogenization (BBH) type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors. (authors)

  20. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L-sq-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L-sq method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  1. Density-dependent liquid nitromethane decomposition: molecular dynamics simulations based on ReaxFF.

    PubMed

    Rom, Naomi; Zybin, Sergey V; van Duin, Adri C T; Goddard, William A; Zeiri, Yehuda; Katz, Gil; Kosloff, Ronnie

    2011-09-15

    The decomposition mechanism of hot liquid nitromethane at various compressions was studied using reactive force field (ReaxFF) molecular dynamics simulations. A competition between two different initial thermal decomposition schemes is observed, depending on compression. At low densities, unimolecular C-N bond cleavage is the dominant route, producing CH(3) and NO(2) fragments. As density and pressure rise approaching the Chapman-Jouguet detonation conditions (∼30% compression, >2500 K) the dominant mechanism switches to the formation of the CH(3)NO fragment via H-transfer and/or N-O bond rupture. The change in the decomposition mechanism of hot liquid NM leads to a different kinetic and energetic behavior, as well as a different product distribution. The calculated density dependence of the enthalpy change correlates with the change in initial decomposition reaction mechanism. It can be used as a convenient and useful global parameter for the detection of reaction dynamics. Atomic averaged local diffusion coefficients are shown to be sensitive to the reaction dynamics, and can be used to distinguish between time periods where chemical reactions occur and diffusion-dominated, nonreactive time periods. © 2011 American Chemical Society

  2. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  3. Suzuki-Trotter Formula for Real-Time Dependent LDA I: Electron Dynamics

    NASA Astrophysics Data System (ADS)

    Sugino, Osamu; Miyamoto, Yoshiyuki

    1998-03-01

    To investigate various physical and chemical processes where electron dynamics play a role (e.g., collisions or photochemical reactions), solving the real-time Schrödinger equation, iħ ∂φ/∂t = Hφ (1), is essentially important. Attempts to solve Eq. (1) from first principles have begun only recently [K. Yabana and G. F. Bertsch, Phys. Rev. B 54, 4484 (1996)], and the field is now at the stage of establishing efficient, stable, and accurate methods for its numerical solution. In this talk, we present several improvements in the method of solving Eq. (1) within density functional theory: (A) a higher-order Suzuki-Trotter formula [M. Suzuki, Phys. Lett. A 146, 319 (1990)] to integrate Eq. (1) while keeping the orthonormality of the wavefunctions, (B) a special interpolation scheme for the self-consistent potential to reduce the drift in the total energy, and (C) preconditioning techniques to increase the time step of the simulation. We will demonstrate numerical stability and efficiency using several cluster calculations, and will address the accuracy by comparing the computed cross sections for atom-electron collisions with experiment.
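
    For orientation only, here is a minimal second-order split-operator (Trotter) step for a one-dimensional Schrödinger equation on a grid in atomic units; the talk itself concerns higher-order Suzuki-Trotter formulas and a self-consistently updated Kohn-Sham potential, neither of which is reproduced here.

```python
# Minimal sketch of a second-order split-operator (Trotter) step for the 1D
# time-dependent Schroedinger equation i d(psi)/dt = (T + V) psi, atomic units.
import numpy as np

def trotter_step(psi, V, dx, dt):
    N = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)       # grid wavenumbers
    psi = np.exp(-0.5j * dt * V) * psi              # half step in the potential
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # kinetic step
    psi = np.exp(-0.5j * dt * V) * psi              # half step in the potential
    return psi

# example: Gaussian wave packet in a harmonic potential on a periodic grid
x = np.linspace(-10.0, 10.0, 512, endpoint=False)
dx = x[1] - x[0]
V = 0.5 * x**2
psi = np.exp(-(x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
for _ in range(1000):
    psi = trotter_step(psi, V, dx, dt=0.005)
print(np.sum(np.abs(psi) ** 2) * dx)   # norm stays ~1: the propagation is unitary
```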

  4. New Developments in the Method of Space-Time Conservation Element and Solution Element-Applications to Two-Dimensional Time-Marching Problems

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1994-01-01

    A new numerical discretization method for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is motivated by several important physical/numerical considerations and designed to avoid several key limitations of the above traditional methods. As a result of the above considerations, a set of key principles for the design of numerical schemes was put forth in a previous report. These principles were used to construct several numerical schemes that model a 1-D time-dependent convection-diffusion equation. These schemes were then extended to solve the time-dependent Euler and Navier-Stokes equations of a perfect gas. It was shown that the above schemes compared favorably with the traditional schemes in simplicity, generality, and accuracy. In this report, the 2-D versions of the above schemes, except the Navier-Stokes solver, are constructed using the same set of design principles. Their constructions are simplified greatly by the use of a nontraditional space-time mesh. Its use results in the simplest stencil possible, i.e., a tetrahedron in a 3-D space-time with one vertex at the upper time level and the other three at the lower time level. Because of the similarity in their design, each of the present 2-D solvers virtually shares with its 1-D counterpart the same fundamental characteristics. Moreover, it is shown that the present Euler solver is capable of generating highly accurate solutions for a famous 2-D shock reflection problem. Specifically, both the incident and the reflected shocks can be resolved by a single data point without the presence of numerical oscillations near the discontinuity.

  5. Analytical excited state forces for the time-dependent density-functional tight-binding method.

    PubMed

    Heringer, D; Niehaus, T A; Wanko, M; Frauenheim, Th

    2007-12-01

    An analytical formulation for the geometrical derivatives of excitation energies within the time-dependent density-functional tight-binding (TD-DFTB) method is presented. The derivation is based on the auxiliary functional approach proposed in [Furche and Ahlrichs, J Chem Phys 2002, 117, 7433]. To validate the quality of the potential energy surfaces provided by the method, adiabatic excitation energies, excited state geometries, and harmonic vibrational frequencies were calculated for a test set of molecules in excited states of different symmetry and multiplicity. According to the results, the TD-DFTB scheme surpasses the performance of configuration interaction singles and the random phase approximation but has a lower quality than ab initio time-dependent density-functional theory. As a consequence of the special form of the approximations made in TD-DFTB, the scaling exponent of the method can be reduced to three, similar to the ground state. The low scaling prefactor and the satisfactory accuracy of the method makes TD-DFTB especially suitable for molecular dynamics simulations of dozens of atoms as well as for the computation of luminescence spectra of systems containing hundreds of atoms. (c) 2007 Wiley Periodicals, Inc.

  6. Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow

    NASA Technical Reports Server (NTRS)

    Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.

    1977-01-01

    An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first-time derivative at each time step. Fourth-order accurate space-differencing is used.

  7. Hawking radiation from the holographic screen

    NASA Astrophysics Data System (ADS)

    Zhao, Ying-Jie

    2017-10-01

    In this paper, we generalize the Parikh-Wilczek scheme to a holographic screen in the framework of the ultraviolet self-complete quantum gravity. We calculate the tunneling probability and find that it depends on the energy of the particle and the mass of the holographic screen. The resulting radiation temperature is not the standard Hawking temperature.

  8. Intrinsic frame transport for a model of nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Cozzini, S.; Rull, L. F.; Ciccotti, G.; Paolini, G. V.

    1997-02-01

    We present a computer simulation study of the dynamical properties of a nematic liquid crystal model. The diffusional motion of the nematic director is taken into account in our calculations in order to give a proper estimate of the transport coefficients. Differently from other groups we do not attempt to stabilize the director through rigid constraints or applied external fields. We instead define an intrinsic frame which moves along with the director at each step of the simulation. The transport coefficients computed in the intrinsic frame are then compared against the ones calculated in the fixed laboratory frame, to show the inadequacy of the latter for systems with less than 500 molecules. Using this general scheme on the Gay-Berne liquid crystal model, we evidence the natural motion of the director and attempt to quantify its intrinsic time scale and size dependence. Through extended simulations of systems of different size we calculate the diffusion and viscosity coefficients of this model and compare our results with values previously obtained with fixed director.

  9. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    NASA Astrophysics Data System (ADS)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needed to be determined by using a combined Fourier analysis and gradient-based search algorithm.

  10. The a(3) Scheme--A Fourth-Order Space-Time Flux-Conserving and Neutrally Stable CESE Solver

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2008-01-01

    The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To initiate a systematic CESE development of high order schemes, in this paper we provide a thorough discussion on the structure, consistency, stability, phase error, and accuracy of a new 4th-order space-time flux-conserving and neutrally stable CESE solver of a 1D scalar advection equation. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables (the numerical analogues of the dependent variable and its 1st-order and 2nd-order spatial derivatives, respectively) and three equations per mesh point, the new scheme is referred to as the a(3) scheme. Through the von Neumann analysis, it is shown that the a(3) scheme is stable if and only if the Courant number is less than 0.5. Moreover, it is established numerically that the a(3) scheme is 4th-order accurate.
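
    The amplification matrix of the a(3) scheme cannot be reconstructed from the abstract, so the sketch below merely illustrates the kind of von Neumann analysis cited, using first-order upwind advection as a stand-in: its amplification factor G(θ) = 1 − ν + ν e^(−iθ) satisfies |G| ≤ 1 exactly when the Courant number ν ≤ 1.

```python
# Generic von Neumann stability check, shown for first-order upwind advection only;
# the a(3) scheme's amplification matrix is not reconstructed here.
import numpy as np

def max_amplification_upwind(nu, n_theta=721):
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    G = 1.0 - nu + nu * np.exp(-1j * theta)      # amplification factor G(theta)
    return np.abs(G).max()

for nu in (0.25, 0.5, 0.9, 1.0, 1.1):
    print(nu, round(max_amplification_upwind(nu), 4))   # exceeds 1 once nu > 1
```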

  11. Use of corrected centrifugal sudden approximations for the calculation of effective cross sections. II. The N2-He system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thachuk, M.; McCourt, F.R.W.

    1991-09-15

    A series of centrifugal sudden (CS) and infinite-order sudden (IOS) approximations together with their corrected versions, respectively, the corrected centrifugal sudden (CCS) and corrected infinite-order sudden (CIOS) approximations, originally introduced by McLenithan and Secrest (J. Chem. Phys. 80, 2480 (1987)), have been compared with the close-coupled (CC) method for the N2-He interaction. This extends previous work using the H2-He system (J. Chem. Phys. 93, 3931 (1990)) to an interaction which is more anisotropic and more classical in nature. A set of eleven energy-dependent cross sections, including both relaxation and production types, has been calculated using the LF- and LA-labeling schemes for the CS approximation, as well as the KI-, KF-, KA-, and KM-labeling schemes for the IOS approximation. The latter scheme is defined as KM = K = max(k_j, k_j_I). Further, a number of temperature-dependent cross sections formed from thermal averages of the above set have also been compared at 100 and 200 K. These comparisons have shown that the CS approximation produced accurate results for relaxation-type cross sections regardless of the L-labeling scheme chosen, but inaccurate results for production-type cross sections. Further, except for one particular cross section, the CCS approximation did not generally improve the accuracy of the CS results using either the LF- or LA-labeling schemes. The accuracy of the IOS results varies greatly between the cross sections, with the most accurate values given by the KM-labeling scheme. The CIOS approximation generally increases the accuracy of the corresponding IOS results but does not completely eliminate the errors associated with them.

  12. Quark-mass dependence of two-nucleon observables

    NASA Astrophysics Data System (ADS)

    Chen, Jiunn-Wei; Lee, Tze-Kei; Liu, C.-P.; Liu, Yu-Sheng

    2012-11-01

    We study the potential implications of lattice QCD determinations of the S-wave nucleon-nucleon scattering lengths with unphysical light quark masses. If the light quark masses are small enough such that nuclear effective field theory (NEFT) can be used to perform quark-mass extrapolations, then the leading quark-mass dependence of not only the effective range and the two-body current, but also all the low-energy deuteron matrix elements up to next-to-leading order in NEFT can be obtained. As a proof of principle, we compute the quark-mass dependence of the deuteron charge radius, magnetic moment, polarizability, and the deuteron photodisintegration cross section using the lattice calculation of the scattering lengths at 354 MeV pion mass by the "Nuclear Physics with Lattice QCD" (NPLQCD) collaboration and the NEFT power counting scheme of Beane, Kaplan, and Vuorinen (BKV), even though it is not yet established that the 354 MeV pion mass is within the radius of convergence of the BKV scheme. Once a lattice result with quark masses within the NEFT radius of convergence is obtained, our observation can be used to constrain the time variation of the isoscalar combination of the u and d quark masses m_q, to help anthropic-principle studies find the m_q range that allows the existence of life, and to provide a weak test of the multiverse conjecture.

  13. Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1992-01-01

    Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of the Runge-Kutta (RK) methods and compact difference schemes to calculate the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on the evaluations of dissipative error. The objectives are reducing the numerical damping and, at the same time, preserving numerical stability. While this approach has tremendous success solving steady flows, numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes which relate the phase velocity and wave number may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersive correlations of various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, the criteria are derived for the three and four-step RK methods to be third and fourth-order time accurate for the non-linear equations, e.g., flow equations. These criteria are then applied to commonly used RK methods such as Jameson's 3-step and 4-step schemes and Wray's algorithm to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for the Fourier analyses. The performance of the numerical methods is shown by numerical examples. These examples are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations coupled with the characteristic equations as the boundary condition is discussed in detail. Finally, the single vortex calculation is extended to simulate vortex pairing. For distances between the two vortices less than a threshold value, numerical results show crisp resolution of the vortex merging.
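
    As an illustration of the Fourier (dispersion) analysis mentioned above, the sketch below computes the modified wavenumber of the standard fourth-order tridiagonal (Padé) compact first-derivative scheme and compares it with explicit central differences; it is a textbook example, not a reconstruction of the paper's operator-form schemes.

```python
# Modified-wavenumber (dispersion) comparison: exact differentiation corresponds to
# k'h = kh; schemes whose curve stays close to that line resolve more of the spectrum.
import numpy as np

kh = np.linspace(0.0, np.pi, 181)
mw_central2 = np.sin(kh)                                          # 2nd-order central
mw_central4 = (4.0 * np.sin(kh) - 0.5 * np.sin(2.0 * kh)) / 3.0   # 4th-order explicit
mw_compact4 = 1.5 * np.sin(kh) / (1.0 + 0.5 * np.cos(kh))         # 4th-order Pade compact

for i in (30, 60, 90, 120):
    print(round(kh[i], 3), round(mw_central2[i], 3),
          round(mw_central4[i], 3), round(mw_compact4[i], 3))
```

    The compact scheme tracks the exact line to noticeably larger wavenumbers than either explicit stencil, which is the "spectral-like" resolution the abstract refers to.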

  14. The finite element model for the propagation of light in scattering media: a direct method for domains with nonscattering regions.

    PubMed

    Arridge, S R; Dehghani, H; Schweiger, M; Okada, E

    2000-01-01

    We present a method for handling nonscattering regions within diffusing domains. The method develops from an iterative radiosity-diffusion approach using Green's functions that was computationally slow. Here we present an improved implementation using a finite element method (FEM) that is direct. The fundamental idea is to introduce extra equations into the standard diffusion FEM to represent nondiffusive light propagation across a nonscattering region. By appropriate mesh node ordering the computational time is not much greater than for diffusion alone. We compare results from this method with those from a discrete ordinate transport code, and with Monte Carlo calculations. The agreement is very good, and, in addition, our scheme allows us to easily model time-dependent and frequency domain problems.

  15. Data distribution method of workflow in the cloud environment

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    Cloud computing for workflow applications provides the required high-efficiency computation and large storage capacity, but it also poses challenges for the protection of trade secrets and other private data. Because handling private data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage, improving the existing allocation strategy in which security depends on keeping confidential data in the private cloud while computation relies on the public cloud. In the initial stage, a static allocation method partitions only the non-confidential data, improving on the original allocation; in the operational phase, the data that continue to be generated are used to dynamically adjust the data distribution scheme. The experimental results show that the improved method is effective in reducing the data transmission time.

  16. Implementation of the high-order schemes QUICK and LECUSSO in the COMMIX-1C Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakai, K.; Sun, J.G.; Sha, W.T.

    Multidimensional analysis computer programs based on the finite volume method, such as COMMIX-1C, have been commonly used to simulate thermal-hydraulic phenomena in engineering systems such as nuclear reactors. In COMMIX-1C, first-order schemes with respect to both space and time are used. In many situations such as flow recirculations and stratifications with steep gradients of the velocity and temperature fields, however, high-order difference schemes are necessary for an accurate prediction of the fields. For these reasons, two second-order finite difference numerical schemes, QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and LECUSSO (Local Exact Consistent Upwind Scheme of Second Order), have been implemented in the COMMIX-1C computer code. The formulations were derived for general three-dimensional flows with nonuniform grid sizes. Numerical oscillation analyses for QUICK and LECUSSO were performed. To damp the unphysical oscillations which occur in calculations with high-order schemes at high mesh Reynolds numbers, a new FRAM (Filtering Remedy and Methodology) scheme was developed and implemented. To be consistent with the high-order schemes, the pressure equation and the boundary conditions for all the conservation equations were also modified to be of second order. The new capabilities in the code are listed. Test calculations were performed to validate the implementation of the high-order schemes. They include the test of the one-dimensional nonlinear Burgers equation, two-dimensional scalar transport in two impinging streams, von Kármán vortex shedding, shear driven cavity flow, Couette flow, and circular pipe flow. The calculated results were compared with available data; the agreement is good.
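
    As a reminder of what the QUICK interpolation does, here is the generic textbook form on a uniform one-dimensional grid with positive velocity; it is not the COMMIX-1C implementation, which handles nonuniform three-dimensional grids, LECUSSO, and the FRAM filter.

```python
# Textbook QUICK face interpolation on a uniform 1D grid with positive velocity:
# the face value between cells i and i+1 is built from two upstream cells and
# one downstream cell.
import numpy as np

def quick_face_values(phi):
    """Return face values between cells i and i+1 for i = 1 .. n-2."""
    phi = np.asarray(phi, dtype=float)
    far_up, up, down = phi[:-2], phi[1:-1], phi[2:]
    return 0.75 * up + 0.375 * down - 0.125 * far_up

# QUICK is exact for quadratic fields: with cell centers at 0,1,2,3,4 and phi = x**2,
# the face at x = 1.5 gets the exact value 2.25.
print(quick_face_values([0.0, 1.0, 4.0, 9.0, 16.0]))   # [2.25, 6.25, 12.25]
```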

  17. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    NASA Technical Reports Server (NTRS)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping, and 4th-order pentadiagonal compact spatial discretization with the maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.

  18. Should One Use the Ray-by-Ray Approximation in Core-Collapse Supernova Simulations?

    DOE PAGES

    Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C.

    2016-10-28

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12-, 15-, 20-, and 25-M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/preexplosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25-M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.

  19. Should One Use the Ray-by-Ray Approximation in Core-collapse Supernova Simulations?

    NASA Astrophysics Data System (ADS)

    Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C.

    2016-11-01

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12, 15, 20, and 25 M ⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25 M ⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions, the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.

  20. Spectrum efficient distance-adaptive paths for fixed and fixed-alternate routing in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Agrawal, Anuj; Bhatia, Vimal; Prakash, Shashi

    2018-01-01

    Efficient utilization of spectrum is a key concern in the soon-to-be-deployed elastic optical networks (EONs). To perform routing in EONs, various fixed routing (FR) and fixed-alternate routing (FAR) schemes are ubiquitously used. FR and FAR schemes calculate a fixed route and a prioritized list of a number of alternate routes, respectively, between different pairs of origin o and target t nodes in the network. The route calculation performed using FR and FAR schemes is predominantly based on either the physical distance, known as k-shortest paths (KSP), or on the hop count (HC). For survivable optical networks, FAR usually calculates link-disjoint (LD) paths. These conventional routing schemes have been efficiently used for decades in communication networks. However, in this paper, it has been demonstrated that these commonly used routing schemes cannot utilize the network spectral resources optimally in the newly introduced EONs. Thus, we propose a new routing scheme for EONs, namely, k-distance-adaptive paths (KDAP), that efficiently exploits the distance-adaptive modulation and bit-rate-adaptive superchannel capabilities inherent in EONs to improve spectrum utilization. In the proposed KDAP, routes are found and prioritized on the basis of bit rate, distance, spectrum granularity, and the number of links used for a particular route. To evaluate the performance of KSP, HC, LD, and the proposed KDAP, simulations have been performed for three networks of different sizes, namely, the 7-node test network (TEST7), NSFNET, and the 24-node US backbone network (UBN24). We comprehensively assess the performance of the various conventional and the proposed routing schemes by solving both the RSA and the dual RSA problems under homogeneous and heterogeneous traffic requirements. Simulation results demonstrate that there is a variation amongst the performance of KSP, HC, and LD, depending on the o-t pair and the network topology and its connectivity. However, the proposed KDAP always performs better for all the considered networks and traffic scenarios, as compared to the conventional routing schemes, namely, KSP, HC, and LD. The proposed KDAP achieves up to 60% and 10.46% improvement in terms of spectrum utilization and resource utilization ratio, respectively, over the conventional routing schemes.
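
    A toy sketch of the distance-adaptive spectrum calculation that KDAP-style routing exploits is given below: the densest modulation format whose transparent reach covers the path length is chosen, and the number of frequency slots for the requested bit rate is then counted. The reach table, per-slot capacities, and guard-band size are illustrative assumptions, not the paper's values.

```python
# Hypothetical distance-adaptive slot counting (illustrative parameter values only).
import math

# (format, transparent reach in km, capacity per 12.5-GHz slot in Gb/s) -- assumed
MODULATIONS = [("64QAM", 250, 75.0), ("16QAM", 500, 50.0),
               ("QPSK", 2000, 25.0), ("BPSK", 4000, 12.5)]

def slots_needed(path_km, bitrate_gbps, guard_band_slots=1):
    for fmt, reach, per_slot in MODULATIONS:
        if path_km <= reach:
            return fmt, math.ceil(bitrate_gbps / per_slot) + guard_band_slots
    raise ValueError("path exceeds the reach of all modulation formats")

print(slots_needed(400, 200))    # shorter path -> denser format -> fewer slots
print(slots_needed(1800, 200))   # longer path -> QPSK -> more slots
```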

  1. A high-order strong stability preserving Runge-Kutta method for three-dimensional full waveform modeling and inversion of anelastic models

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.

    2017-12-01

    Accurate and efficient forward modeling methods are important for high resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time, because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behaviors of the Earth by using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method for calculating travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three `backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of the strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
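
    For context, here is the classic three-stage, third-order strong stability preserving Runge-Kutta (Shu-Osher) step; the work above uses a fourth-order SSP scheme with an optimized CFL coefficient, which is not reproduced here, so this sketch only illustrates the convex-combination structure that gives the SSP property.

```python
# SSPRK(3,3) step for du/dt = L(u), shown with a periodic first-order upwind
# advection operator as a simple example of the spatial operator L.
import numpy as np

def ssprk3_step(u, L, dt):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

def upwind(u, dx=0.01):
    return -(u - np.roll(u, 1)) / dx     # u_t = -u_x, periodic upwind difference

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)
for _ in range(50):
    u = ssprk3_step(u, upwind, dt=0.005)  # CFL = dt/dx = 0.5
```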

  2. Numerical solutions of Navier-Stokes equations for compressible turbulent two/three dimensional flows in terminal shock region of an inlet/diffuser

    NASA Technical Reports Server (NTRS)

    Liu, N. S.; Shamroth, S. J.; Mcdonald, H.

    1983-01-01

    The multidimensional ensemble averaged compressible time dependent Navier Stokes equations in conjunction with mixing length turbulence model and shock capturing technique were used to study the terminal shock type of flows in various flight regimes occurring in a diffuser/inlet model. The numerical scheme for solving the governing equations is based on a linearized block implicit approach and the following high Reynolds number calculations were carried out: (1) 2 D, steady, subsonic; (2) 2 D, steady, transonic with normal shock; (3) 2 D, steady, supersonic with terminal shock; (4) 2 D, transient process of shock development and (5) 3 D, steady, transonic with normal shock. The numerical results obtained for the 2 D and 3 D transonic shocked flows were compared with corresponding experimental data; the calculated wall static pressure distributions agree well with the measured data.

  3. Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem

    NASA Astrophysics Data System (ADS)

    Auteri, F.; Quartapelle, L.; Vigevano, L.

    2002-08-01

    This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.

  4. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruberti, M.; Averbukh, V.; Decleva, P.

    2014-10-28

    We present the first implementation of the ab initio many-body Green's function method, algebraic diagrammatic construction (ADC), in the B-spline single-electron basis. B-spline versions of the first order [ADC(1)] and second order [ADC(2)] schemes for the polarization propagator are developed and applied to the ab initio calculation of static (photoionization cross-sections) and dynamic (high-order harmonic generation spectra) quantities. We show that the cross-section features that pose a challenge for the Gaussian basis calculations, such as Cooper minima and high-energy tails, are found to be reproduced by the B-spline ADC in a very good agreement with the experiment. We also present the first dynamic B-spline ADC results, showing that the effect of the Cooper minimum on the high-order harmonic generation spectrum of Ar is correctly predicted by the time-dependent ADC calculation in the B-spline basis. The present development paves the way for the application of the B-spline ADC to both energy- and time-resolved theoretical studies of many-electron phenomena in atoms, molecules, and clusters.

  5. A conservative scheme for electromagnetic simulation of magnetized plasmas with kinetic electrons

    NASA Astrophysics Data System (ADS)

    Bao, J.; Lin, Z.; Lu, Z. X.

    2018-02-01

    A conservative scheme has been formulated and verified for gyrokinetic particle simulations of electromagnetic waves and instabilities in magnetized plasmas. An electron continuity equation derived from the drift kinetic equation is used to time advance the electron density perturbation by using the perturbed mechanical flow calculated from the parallel vector potential, and the parallel vector potential is solved by using the perturbed canonical flow from the perturbed distribution function. In gyrokinetic particle simulations using this new scheme, the shear Alfvén wave dispersion relation in the shearless slab and continuum damping in the sheared cylinder have been recovered. The new scheme overcomes the stringent requirement in the conventional perturbative simulation method that perpendicular grid size needs to be as small as electron collisionless skin depth even for the long wavelength Alfvén waves. The new scheme also avoids the problem in the conventional method that an unphysically large parallel electric field arises due to the inconsistency between electrostatic potential calculated from the perturbed density and vector potential calculated from the perturbed canonical flow. Finally, the gyrokinetic particle simulations of the Alfvén waves in sheared cylinder have superior numerical properties compared with the fluid simulations, which suffer from numerical difficulties associated with singular mode structures.

  6. Newly-Developed 3D GRMHD Code and its Application to Jet Formation

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated constrained transport scheme is used to maintain a divergence-free magnetic field. We have performed various one-dimensional test problems in both special and general relativity by using several reconstruction methods and found that the new 3D GRMHD code shows substantial improvements over our previous model. The preliminary results show jet formation from a geometrically thin accretion disk near a non-rotating and a rotating black hole. We will discuss how the jet properties depend on the rotation of the black hole and the magnetic field strength.

  7. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Xia, Yidong; Luo, Hong

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: (1) the explicit first stage, singly diagonally implicit Runge-Kutta (ESDIRK3) scheme, and (2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of Index-2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Here, numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index-2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also require significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  8. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE PAGES

    Liu, Xiaodong; Xia, Yidong; Luo, Hong; ...

    2016-10-05

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: (1) the explicit first stage, singly diagonally implicit Runge-Kutta (ESDIRK3) scheme, and (2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of Index-2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation every time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. Here, numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index-2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also require significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  9. Comparison in Schemes for Simulating Depositional Growth of Ice Crystal between Theoretical and Laboratory Data

    NASA Astrophysics Data System (ADS)

    Zhai, Guoqing; Li, Xiaofan

    2015-04-01

    The Bergeron-Findeisen process has been simulated using parameterization schemes for the depositional growth of ice crystals with temperature-dependent, theoretically predicted parameters over the past decades. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. There are three schemes that parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995) and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using the three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case in this study. The analysis of root-mean-squared difference and correlation coefficient between the simulation and observation of surface rain rate shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observational rain rate. The calculations of 5-day and model domain mean rain rates reveal that the three schemes with Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the closest mean rain rate to the mean observational rain rate. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with the reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to the suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.

  10. Finite elements and finite differences for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Murman, E. M.; Wellford, L. C.

    1978-01-01

    The paper reviews the chief finite difference and finite element techniques used for numerical solution of nonlinear mixed elliptic-hyperbolic equations governing transonic flow. The forms of the governing equations for unsteady two-dimensional transonic flow considered are the Euler equation, the full potential equation in both conservative and nonconservative form, the transonic small-disturbance equation in both conservative and nonconservative form, and the hodograph equations for the small-disturbance case and the full-potential case. Finite difference methods considered include time-dependent methods, relaxation methods, semidirect methods, and hybrid methods. Finite element methods include finite element Lax-Wendroff schemes, implicit Galerkin method, mixed variational principles, dual iterative procedures, optimal control methods and least squares.

  11. ADER schemes for scalar non-linear hyperbolic conservation laws with source terms in three-space dimensions

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2005-01-01

    In this paper we develop non-linear ADER schemes for time-dependent scalar linear and non-linear conservation laws in one-, two- and three-space dimensions. Numerical results of schemes of up to fifth order of accuracy in both time and space illustrate that the designed order of accuracy is achieved in all space dimensions for a fixed Courant number and essentially non-oscillatory results are obtained for solutions with discontinuities. We also present preliminary results for two-dimensional non-linear systems.

  12. Conjugate-gradient optimization method for orbital-free density functional calculations.

    PubMed

    Jiang, Hong; Yang, Weitao

    2004-08-01

    Orbital-free density functional theory as an extension of traditional Thomas-Fermi theory has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we developed a conjugate-gradient method for the numerical solution of the spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredients of the method are an approximate line-search scheme and a collective treatment of the two spin densities in the case of the spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
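
    Since the spin-dependent extended Thomas-Fermi functional cannot be reconstructed from the abstract, the sketch below only illustrates the conjugate-gradient plus line-search machinery on a simple quadratic model energy E(x) = 0.5 xᵀAx − bᵀx, for which the line search is exact; the constrained-density treatment used in the paper is not reproduced.

```python
# Minimal conjugate-gradient sketch with an exact line search on a quadratic model
# energy (not the paper's orbital-free DFT algorithm).
import numpy as np

def cg_minimize(A, b, x0, tol=1e-10, max_iter=200):
    x = x0.copy()
    r = b - A @ x            # negative gradient of E at x
    d = r.copy()             # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)          # exact line-search step along d
        x += alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)    # conjugacy update (Fletcher-Reeves)
        d = r_new + beta * d
        r = r_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)             # symmetric positive definite test matrix
b = rng.standard_normal(50)
x = cg_minimize(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))            # small residual: gradient vanishes at the minimum
```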

  13. A biomechanical model for actively controlled snow ski bindings.

    PubMed

    Hull, M L; Ramming, J E

    1980-11-01

    Active control of snow ski bindings is a new design concept which potentially offers improved protection from lower extremity injury. Implementation of this concept entails measuring physical variables and calculating loading and/or deformation in injury prone musculoskeletal components. The subject of this paper is definition of a biomechanical model for calculating tibia torsion based on measurements of torsion loading between the boot and ski. Previous control schemes have used leg displacement only to indicate tibia torsion. The contributions of both inertial and velocity-dependent torques to tibia loading are explored and it is shown that both these moments must be included in addition to displacement-dependent moments. A new analog controller design which includes inertia, damping, and stiffness terms in the tibia load calculation is also presented.

  14. Random walk in degree space and the time-dependent Watts-Strogatz model

    NASA Astrophysics Data System (ADS)

    Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact for some regimes.

  15. Random walk in degree space and the time-dependent Watts-Strogatz model.

    PubMed

    Casa Grande, H L; Cotacallapa, M; Hase, M O

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact for some regimes.

  16. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems. It is called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models from Rayleigh flat and frequency selective fading channels. Probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input and multiple-output (MIMO) systems. PMID:24982925
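
    A toy numerical sketch of the code-time idea (N spreading codes carrying one symbol over N successive symbol intervals) is given below for an AWGN channel without fading or multipath; the code length, number of codes, and noise level are arbitrary choices, and the paper's two demodulator structures for Rayleigh flat and frequency-selective fading channels are not reproduced.

```python
# Toy illustration of code-time diversity over AWGN (no fading, no multipath):
# one BPSK symbol is spread by N different codes over N successive symbol intervals,
# and the receiver despreads each interval with its own code before combining.
import numpy as np

rng = np.random.default_rng(1)
G, N = 31, 4                                     # spreading gain, number of codes
codes = rng.choice([-1.0, 1.0], size=(N, G))     # hypothetical PN spreading codes
symbol = -1.0                                    # BPSK data symbol to protect

tx = symbol * codes                              # N spread copies of the same symbol
rx = tx + 1.5 * rng.standard_normal(tx.shape)    # additive white Gaussian noise
decision = np.sum(rx * codes) / (N * G)          # despread each interval and combine
print(np.sign(decision))                         # recovered symbol sign
```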

  17. Higher-order accurate space-time schemes for computational astrophysics—Part I: finite volume methods

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2017-12-01

    As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.

  18. Means and method of sampling flow related variables from a waterway in an accurate manner using a programmable calculator

    Treesearch

    Rand E. Eads; Mark R. Boolootian; Steven C. [Inventors] Hankin

    1987-01-01

    Abstract - A programmable calculator is connected to a pumping sampler by an interface circuit board. The calculator has a sediment sampling program stored therein and includes a timer to periodically wake up the calculator. Sediment collection is controlled by a Selection At List Time (SALT) scheme in which the probability of taking a sample is proportional to its...

  19. On the Interaction Between Gravity Waves and Atmospheric Thermal Tides

    NASA Astrophysics Data System (ADS)

    Agner, Ryan Matthew

    Gravity waves and thermal tides are two of the most important dynamical features of the atmosphere. They are both generated in the lower atmosphere and propagate upward, transporting energy and momentum to the upper atmosphere. This dissertation focuses on the interaction of these waves in the Mesosphere and Lower Thermosphere (MLT) region of the atmosphere using both observational data and global circulation models (GCMs). The first part of this work focuses on observations of gravity wave interactions with the tides using both LIDAR data at the Star Fire Optical Range (SOR, 35°N, 106.5°W) and meteor radar data at the Andes LIDAR Observatory (ALO, 30.3°S, 70.7°W). At SOR, the gravity waves are shown to enhance or damp the amplitude of the diurnal variations depending on altitude, while the phase is always delayed. The results compare well with previous mechanistic model results and with the Japanese Atmospheric General circulation model for Upper Atmosphere Research (JAGUAR) high-resolution global circulation model. The meteor radar observed the GWs to almost always enhance the tidal amplitudes and to either delay or advance the phase depending on the altitude. When compared to previous radar results from the same meteor radar when it was located in Maui, Hawaii, the Chile results are very similar, while the LIDAR results show significant differences. This is because of several instrument biases in the calculation of GW momentum fluxes that are not significant when determining the winds. The radar needs to perform large amounts of all-sky averaging across many weeks, while the LIDAR directly detects waves in a small section of sky. The second part of this work focuses on the effects of gravity wave parameterization schemes on the tides in GCMs. The Specified Dynamics Whole Atmosphere Community Climate Model (SD-WACCM) and the extended Canadian Middle Atmosphere Model (eCMAM) are used for this analysis. The gravity wave parameterization scheme in the eCMAM (Hines scheme) has been shown to enhance the tidal amplitudes compared to observations, while the parameterization scheme in SD-WACCM (Lindzen scheme) overdamps the tides. It is shown here that the Hines scheme's assumption that only small-scale gravity waves force the atmosphere does not create enough drag to properly constrain the tidal amplitudes. The Lindzen scheme produces too much drag because all wave scales are assumed to be saturated, thus continuing to provide forcing on the atmosphere above the breaking altitude. The final part of this work investigates GWs, tides, and their interactions on a local time scale instead of a global scale in the two GCMs. The local time GWs in eCMAM are found to have a strong seasonal dependence, with the majority of the forcing at the winter pole at latitudes where the diurnal variations are weak, limiting their interactions. In SD-WACCM, the largest local GW forcings are located at mid latitudes near where the diurnal variations peak, causing them to dampen the diurnal amplitudes. On a local time level the diurnal variations may be a summation of many tidal modes. The analysis reveals that in eCMAM the DW1 tidal mode is by far the dominant mode accounting for the local time variations. The high degree of modulation of GWs by the DW1 tidal winds does not allow it to be properly constrained, causing it to dominate the local time diurnal variations. Similarly, the DW1 projection of GW forcing is dominant over all other modes and contributes the most to the local time diurnal GW variations. The local time wind variations in SD-WACCM are influenced by several tidal modes because the DW1 tide is of comparable amplitude to the other modes, a consequence of the increased damping of the tide by the GWs. It is also found that the local GW diurnal variations have significant contributions from all tidal modes because the time and location of the forcing depend only on the tropospheric source regions and not on the tidal winds at altitude.

  20. Semi-empirical ansatz for Helmholtz free energy calculation: Thermal properties of silver along shock Hugoniot

    NASA Astrophysics Data System (ADS)

    Joshi, R. H.; Thakore, B. Y.; Bhatt, N. K.; Vyas, P. R.; Jani, A. R.

    2018-02-01

    Density functional theory, including the electronic contribution, is used to compute the quasiharmonic total energy for silver, while the explicit phonon anharmonic contribution is added through a perturbative term in temperature. Within the Mie-Grüneisen approach, we propose a consistent computational scheme for calculating various thermophysical properties of a substance, in which the required Grüneisen parameter γth is calculated from knowledge of the binding energy. The present study demonstrates that no separate relation for the volume dependence of γth is needed, and that the complete thermodynamics under simultaneous high-temperature and high-pressure conditions can be derived in a consistent manner. We have calculated static and dynamic equations of state and some important thermodynamic properties along the shock Hugoniot. A careful examination of the temperature dependence of the Grüneisen parameter reveals the importance of the temperature effect on various thermal properties.
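
    For orientation, the Mie-Grüneisen relation underlying such a scheme can be written as below. The Slater expression for the Grüneisen parameter is shown only as one standard way of obtaining γ from the cold (binding-energy) curve P_c(V); it is not necessarily the specific prescription used by the authors.

        % Mie-Gruneisen equation of state relating thermal pressure to thermal energy,
        % together with the Slater expression for gamma from the cold curve P_c(V)
        % (one common choice; the paper's specific prescription may differ).
        \begin{align}
          P(V,T) &= P_c(V) + \frac{\gamma_{\mathrm{th}}(V)}{V}\, E_{\mathrm{th}}(V,T), \\
          \gamma_{\mathrm{Slater}}(V) &= -\frac{2}{3}
              - \frac{V}{2}\,\frac{\mathrm{d}^2 P_c/\mathrm{d}V^2}{\mathrm{d}P_c/\mathrm{d}V}.
        \end{align}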

  1. Quench of non-Markovian coherence in the deep sub-Ohmic spin–boson model: A unitary equilibration scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Yao, E-mail: yaoyao@fudan.edu.cn

    The deep sub-Ohmic spin–boson model shows a longstanding non-Markovian coherence at low temperature. Motivated by the goal of quenching this robust coherence, the thermal effect is unitarily incorporated into the time evolution of the model, which is calculated by the adaptive time-dependent density matrix renormalization group algorithm combined with orthogonal polynomial theory. By introducing a unitary heating operator acting on the bosonic bath, the bath is heated up so that a majority of the bosonic excited states are occupied. It is found that in this situation the coherence of the spin is quickly quenched even in the coherent regime, in which the non-Markovian feature dominates. With this finding we arrive at a novel way to implement unitary equilibration, the essential ingredient of the eigenstate-thermalization hypothesis, through a short-time evolution of the model.

  2. Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.

    PubMed

    Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk

    2015-08-01

    The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and the highest elasticity among the grouping schemes examined. Our findings suggest that a grouping method based on ranking of job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
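
    The cross-validated comparison of grouping schemes can be illustrated with a much-simplified stand-in: predict each held-out measurement by the mean of its group in the training folds and compare root mean squared errors across candidate groupings. The sketch below uses synthetic data and plain group means; it deliberately omits the B-spline time trend and mixed-effects structure of the actual analysis.

        # Simplified 10-fold cross-validation comparison of exposure grouping schemes:
        # each held-out log-exposure is predicted by its group's training-fold mean.
        # Synthetic data; the published analysis uses mixed-effects models with a
        # B-spline time trend rather than plain group means.
        import numpy as np

        def cv_rmse(log_exposure, groups, n_folds=10, seed=0):
            rng = np.random.default_rng(seed)
            idx = rng.permutation(log_exposure.size)
            folds = np.array_split(idx, n_folds)
            sq_errors = []
            for k in range(n_folds):
                test = folds[k]
                train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
                overall = log_exposure[train].mean()
                means = {g: log_exposure[train][groups[train] == g].mean()
                         for g in np.unique(groups[train])}
                pred = np.array([means.get(g, overall) for g in groups[test]])
                sq_errors.append((log_exposure[test] - pred) ** 2)
            return np.sqrt(np.concatenate(sq_errors).mean())

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            true_group = rng.integers(0, 5, size=2000)              # 5 "job x factory" groups
            y = 0.4 * true_group + rng.normal(0.0, 0.5, size=2000)  # synthetic log exposures
            coarse = true_group // 2                                # a coarser grouping scheme
            print("RMSE fine grouping  :", cv_rmse(y, true_group))
            print("RMSE coarse grouping:", cv_rmse(y, coarse))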

  3. Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes

    NASA Astrophysics Data System (ADS)

    Goncharov, K. A.; Denisov, I. A.

    2018-03-01

    The problematic issues of the design of the bridge-type trolley crane, connected with ensuring uniform load distribution between the running wheels, are considered. The shortcomings of the existing methods of calculation of reference pressures are described. The results of the analytical calculation of the pressure of the support wheels are compared with the results of the numerical solution of this problem for various schemes of trolley supporting frames. Conclusions are given on the applicability of various methods for calculating vertical pressures, depending on the type of metal structures used in the trolley.

  4. Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation

    PubMed Central

    Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton

    2016-01-01

    Abstract A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model‐dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model‐dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low‐level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter both in 10 day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10 day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales. PMID:27668040

  5. Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation

    NASA Astrophysics Data System (ADS)

    Sandu, Irina; Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton

    2016-03-01

    A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model-dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model-dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low-level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter both in 10 day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10 day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales.

  6. Galilean invariant resummation schemes of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peloso, Marco; Pietroni, Massimo, E-mail: peloso@physics.umn.edu, E-mail: massimo.pietroni@unipr.it

    2017-01-01

    Many of the methods proposed so far to go beyond Standard Perturbation Theory break invariance under time-dependent boosts (denoted here as extended Galilean Invariance, or GI). This gives rise to spurious large scale effects which spoil the small scale predictions of these approximation schemes. By using consistency relations we derive fully non-perturbative constraints that GI imposes on correlation functions. We then introduce a method to quantify the amount of GI breaking of a given scheme, and to correct it by properly tailored counterterms. Finally, we formulate resummation schemes which are manifestly GI, discuss their general features, and implement them in the so-called Time-Flow, or TRG, equations.

  7. A multi-state fragment charge difference approach for diabatic states in electron transfer: Extension and automation

    NASA Astrophysics Data System (ADS)

    Yang, Chou-Hsun; Hsu, Chao-Ping

    2013-10-01

    Electron transfer (ET) rate prediction requires electronic coupling values. The Generalized Mulliken-Hush (GMH) and Fragment Charge Difference (FCD) schemes have been useful approaches for calculating the ET coupling from an excited-state calculation. In their typical form, both methods use two eigenstates in forming the target charge-localized diabatic states. For problems involving three or four states, a direct generalization is possible, but it is necessary to pick and assign the locally excited or charge-transfer states involved. In this work, we generalize the 3-state scheme to a multi-state FCD without the need to manually pick or assign the states. In this scheme, the diabatic states are obtained separately in the charge-transfer and neutral excited subspaces, defined by their eigenvalues in the fragment charge-difference matrix. In each subspace, the Hamiltonians are diagonalized, and there exist off-diagonal Hamiltonian matrix elements between different subspaces, particularly between the charge-transfer and neutral excited diabatic states. The ET coupling values are obtained as the corresponding off-diagonal Hamiltonian matrix elements. A similar multi-state GMH scheme can also be developed. We test the performance of the new multi-state schemes in systems that have been studied using more than two states with FCD or GMH. We found that the multi-state approach yields much better charge-localized states in these systems. We further test the dependence of the ET couplings on the number of states included; the final coupling values converge as the number of included states is increased. In one system where an experimental value is available, the multi-state FCD coupling value agrees better with the previous experimental result. We found that the multi-state GMH and FCD schemes are useful when the original two-state approach fails.
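
    A bare-bones numerical illustration of the underlying two-state FCD idea (not the multi-state generalization developed in the paper) is sketched below: the fragment charge difference matrix is diagonalized to define charge-localized diabatic states, and the electronic coupling is read off as the off-diagonal element of the transformed Hamiltonian. The matrix values are made up purely for illustration.

        # Two-state fragment-charge-difference (FCD) sketch: rotate the adiabatic
        # Hamiltonian into the basis that diagonalizes the fragment charge difference
        # matrix dq; the off-diagonal element of the rotated Hamiltonian is the
        # diabatic electronic coupling. Numbers below are illustrative only.
        import numpy as np

        def fcd_coupling(energies, dq):
            """energies: adiabatic excitation energies (len 2); dq: 2x2 FCD matrix."""
            H_adiab = np.diag(energies)
            # Eigenvectors of dq define the charge-localized (diabatic) states.
            _, U = np.linalg.eigh(dq)
            H_diab = U.T @ H_adiab @ U
            return H_diab[0, 1]

        if __name__ == "__main__":
            energies = np.array([3.10, 3.25])       # eV, made-up adiabatic energies
            dq = np.array([[0.9,  0.4],             # made-up donor-acceptor charge differences
                           [0.4, -0.8]])
            print("FCD coupling (eV):", fcd_coupling(energies, dq))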

  8. Assessment of numerical techniques for unsteady flow calculations

    NASA Technical Reports Server (NTRS)

    Hsieh, Kwang-Chung

    1989-01-01

    The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected on the basis of the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagation, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the numerical approaches tested in this paper, the best-performing combination is the Runge-Kutta method for time integration with sixth-order central differencing for spatial discretization.
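
    For concreteness, the sketch below pairs classical fourth-order Runge-Kutta time integration with a sixth-order central difference in space for linear advection on a periodic domain. This mirrors the type of combination favored in the assessment, but the test problem and parameters here are our own simple choices.

        # Fourth-order Runge-Kutta in time with a sixth-order central difference in
        # space for periodic linear advection u_t + a u_x = 0 (illustrative test only).
        import numpy as np

        def dudx_6th(u, dx):
            # Standard sixth-order central stencil for the first derivative.
            return (-np.roll(u, 3) + 9 * np.roll(u, 2) - 45 * np.roll(u, 1)
                    + 45 * np.roll(u, -1) - 9 * np.roll(u, -2) + np.roll(u, -3)) / (60.0 * dx)

        def rk4_step(u, dt, a, dx):
            f = lambda v: -a * dudx_6th(v, dx)
            k1 = f(u)
            k2 = f(u + 0.5 * dt * k1)
            k3 = f(u + 0.5 * dt * k2)
            k4 = f(u + dt * k3)
            return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        if __name__ == "__main__":
            nx, a = 128, 1.0
            x = np.linspace(0.0, 1.0, nx, endpoint=False)
            dx = x[1] - x[0]
            u0 = np.sin(2 * np.pi * x)
            u, dt = u0.copy(), 0.4 * dx / a
            for _ in range(int(round(1.0 / dt))):      # advect for one full period
                u = rk4_step(u, dt, a, dx)
            print("max error after one period:", np.abs(u - u0).max())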

  9. Technical Note: Adjoint formulation of the TOMCAT atmospheric transport scheme in the Eulerian backtracking framework (RETRO-TOM)

    NASA Astrophysics Data System (ADS)

    Haines, P. E.; Esler, J. G.; Carver, G. D.

    2014-06-01

    A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.

  10. Technical Note: Adjoint formulation of the TOMCAT atmospheric transport scheme in the Eulerian backtracking framework (RETRO-TOM)

    NASA Astrophysics Data System (ADS)

    Haines, P. E.; Esler, J. G.; Carver, G. D.

    2014-01-01

    A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time-symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux-limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.

  11. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10⁻³ seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
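
    The essential trick of the modified scheme, linearly interpolating the threshold-crossing time within a step and restarting the post-spike integration from that estimate, can be illustrated for a single leaky integrate-and-fire neuron as below. The membrane parameters and drive are arbitrary choices, the post-spike remainder is integrated with plain Euler, and the published method additionally recalibrates the post-spike potential to retain full order.

        # Second-order Runge-Kutta (Heun) step for a leaky integrate-and-fire neuron
        # with linear interpolation of the spike time inside the step, in the spirit
        # of the modified time-stepping schemes discussed above. Parameters are toy.
        import numpy as np

        V_TH, V_RESET, TAU, I_EXT = 1.0, 0.0, 0.02, 60.0   # threshold, reset, tau (s), drive

        def dvdt(v):
            return (-v + TAU * I_EXT) / TAU

        def heun_step_with_spike(v, t, dt):
            k1 = dvdt(v)
            k2 = dvdt(v + dt * k1)
            v_new = v + 0.5 * dt * (k1 + k2)
            if v_new < V_TH:
                return v_new, None
            # Linear interpolation of the threshold crossing time within the step.
            t_spike = t + dt * (V_TH - v) / (v_new - v)
            # Restart from reset and integrate the remainder of the step (Euler here;
            # the published scheme recalibrates this part to keep full order).
            v_after = V_RESET + (t + dt - t_spike) * dvdt(V_RESET)
            return v_after, t_spike

        if __name__ == "__main__":
            v, t, dt, spikes = 0.0, 0.0, 5e-4, []
            while t < 0.1:
                v, ts = heun_step_with_spike(v, t, dt)
                if ts is not None:
                    spikes.append(ts)
                t += dt
            print("spike times (s):", np.round(spikes, 4))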

  12. Parallel Computation and Visualization of Three-dimensional, Time-dependent, Thermal Convective Flows

    NASA Technical Reports Server (NTRS)

    Wang, P.; Li, P.

    1998-01-01

    A high-resolution numerical study of three-dimensional, time-dependent, thermal convective flows on parallel systems is reported. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.

  13. Unifying time evolution and optimization with matrix product states

    NASA Astrophysics Data System (ADS)

    Haegeman, Jutho; Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart; Verstraete, Frank

    2016-10-01

    We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we introduce a new integration scheme for studying time evolution, which can cope with arbitrary Hamiltonians, including those with long-range interactions. Rather than a Suzuki-Trotter splitting of the Hamiltonian, which is the idea behind the adaptive time-dependent density matrix renormalization group method or time-evolving block decimation, our method is based on splitting the projector onto the matrix product state tangent space as it appears in the Dirac-Frenkel time-dependent variational principle. We discuss how the resulting algorithm resembles the density matrix renormalization group (DMRG) algorithm for finding ground states so closely that it can be implemented by changing just a few lines of code and it inherits the same stability and efficiency. In particular, our method is compatible with any Hamiltonian for which ground-state DMRG can be implemented efficiently. In fact, DMRG is obtained as a special case of our scheme for imaginary time evolution with infinite time step.

  14. Classifying and quantifying basins of attraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprott, J. C.; Xiong, Anda

    2015-08-15

    A scheme is proposed to classify the basins for attractors of dynamical systems in arbitrary dimensions. There are four basic classes depending on their size and extent, and each class can be further quantified to facilitate comparisons. The calculation uses a Monte Carlo method and is applied to numerous common dissipative chaotic maps and flows in various dimensions.
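
    In the same Monte Carlo spirit, the toy sketch below estimates what fraction of initial conditions drawn from disks of increasing radius stays bounded under the Hénon map, i.e. falls into the attractor's basin. How that fraction scales with radius is the kind of quantity used to classify and quantify basins; the class definitions and thresholds themselves follow the paper and are not reproduced here.

        # Monte Carlo estimate of the fraction of random initial conditions that stay
        # bounded (i.e., fall into the attractor's basin) for the Henon map, sampled
        # from disks of growing radius. Illustrative sketch only.
        import numpy as np

        def henon_bounded(x, y, a=1.4, b=0.3, n_iter=1000, escape=1e3):
            for _ in range(n_iter):
                x, y = 1.0 - a * x * x + y, b * x
                if abs(x) > escape or abs(y) > escape:
                    return False
            return True

        def basin_fraction(radius, n_samples=2000, seed=0):
            rng = np.random.default_rng(seed)
            r = radius * np.sqrt(rng.random(n_samples))    # uniform over the disk
            phi = 2.0 * np.pi * rng.random(n_samples)
            hits = sum(henon_bounded(ri * np.cos(p), ri * np.sin(p))
                       for ri, p in zip(r, phi))
            return hits / n_samples

        if __name__ == "__main__":
            for radius in (0.5, 1.0, 2.0, 4.0):
                print(f"r = {radius:4.1f}  basin fraction ~ {basin_fraction(radius):.3f}")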

  15. The Selection of Computed Tomography Scanning Schemes for Lengthy Symmetric Objects

    NASA Astrophysics Data System (ADS)

    Trinh, V. B.; Zhong, Y.; Osipov, S. P.

    2017-04-01

    . The article describes the basic computed tomography scan schemes for lengthy symmetric objects: continuous (discrete) rotation with a discrete linear movement; continuous (discrete) rotation with discrete linear movement to acquire 2D projection; continuous (discrete) linear movement with discrete rotation to acquire one-dimensional projection and continuous (discrete) rotation to acquire of 2D projection. The general method to calculate the scanning time is discussed in detail. It should be extracted the comparison principle to select a scanning scheme. This is because data are the same for all scanning schemes: the maximum energy of the X-ray radiation; the power of X-ray radiation source; the angle of the X-ray cone beam; the transverse dimension of a single detector; specified resolution and the maximum time, which is need to form one point of the original image and complies the number of registered photons). It demonstrates the possibilities of the above proposed method to compare the scanning schemes. Scanning object was a cylindrical object with the mass thickness is 4 g/cm2, the effective atomic number is 15 and length is 1300 mm. It analyzes data of scanning time and concludes about the efficiency of scanning schemes. It examines the productivity of all schemes and selects the effective one.

  16. Real-time adaptive finite element solution of time-dependent Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Liu, Di

    2015-01-01

    In our previous paper (Bao et al., 2012 [1]), a general framework of using adaptive finite element methods to solve the Kohn-Sham equation has been presented. This work is concerned with solving the time-dependent Kohn-Sham equations. The numerical methods are studied in the time domain, which can be employed to explain both the linear and the nonlinear effects. A Crank-Nicolson scheme and linear finite element space are employed for the temporal and spatial discretizations, respectively. To resolve the trouble regions in the time-dependent simulations, a heuristic error indicator is introduced for the mesh adaptive methods. An algebraic multigrid solver is developed to efficiently solve the complex-valued system derived from the semi-implicit scheme. A mask function is employed to remove or reduce the boundary reflection of the wavefunction. The effectiveness of our method is verified by numerical simulations for both linear and nonlinear phenomena, in which the effectiveness of the mesh adaptive methods is clearly demonstrated.
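
    The core of such a semi-implicit propagation step can be sketched for a one-dimensional single-particle Schrödinger equation: each step solves the complex linear system (I + iΔtH/2)ψ(t+Δt) = (I − iΔtH/2)ψ(t). The uniform finite-difference Hamiltonian, dense direct solver, and simple quartic mask used below are stand-ins for the adaptive finite element discretization, algebraic multigrid solver, and mask function described in the paper.

        # Crank-Nicolson propagation of a 1D single-particle wavefunction with a
        # simple finite-difference Hamiltonian and an absorbing mask near the box
        # edges. A dense solve stands in for the multigrid solver of the paper.
        import numpy as np

        nx, L, dt = 400, 40.0, 0.01
        x = np.linspace(-L / 2, L / 2, nx)
        dx = x[1] - x[0]

        # Kinetic + weak harmonic potential Hamiltonian (atomic units assumed).
        lap = (np.diag(np.full(nx - 1, 1.0), -1) - 2.0 * np.eye(nx)
               + np.diag(np.full(nx - 1, 1.0), 1)) / dx**2
        H = -0.5 * lap + np.diag(0.5 * 0.1 * x**2)

        A = np.eye(nx) + 0.5j * dt * H          # left-hand operator
        B = np.eye(nx) - 0.5j * dt * H          # right-hand operator

        # Quartic mask that damps the wavefunction near the boundaries.
        mask = 1.0 - (np.abs(x) / (L / 2))**4 * (np.abs(x) > 0.8 * L / 2)

        psi = np.exp(-(x - 2.0)**2).astype(complex)
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

        for _ in range(500):
            psi = np.linalg.solve(A, B @ psi) * mask

        print("norm after propagation:", np.sum(np.abs(psi)**2) * dx)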

  17. Algorithms for elasto-plastic-creep postbuckling

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1984-01-01

    This paper considers the development of an improved constrained time stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subject to high-temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time-dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.

  18. LPTA: Location Predictive and Time Adaptive Data Gathering Scheme with Mobile Sink for Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. We introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. Using their local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets toward it in a timely manner by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with lower data transmission delay and balances energy consumption among nodes. PMID:25302327

  19. Event-triggered attitude control of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Baolin; Shen, Qiang; Cao, Xibin

    2018-02-01

    The problem of spacecraft attitude stabilization under limited communication and external disturbances is investigated based on an event-triggered control scheme. In the proposed scheme, attitude and control torque information only needs to be transmitted at discrete triggering times, namely when a defined measurement error exceeds a state-dependent threshold. The proposed control scheme not only guarantees that the spacecraft attitude control errors converge to a small invariant set containing the origin, but also ensures that there is no accumulation of triggering instants. The performance of the proposed control scheme is demonstrated through numerical simulation.
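
    The triggering logic itself is simple to state: the control is recomputed and transmitted only when the error between the current state and the state at the last triggering instant exceeds a state-dependent threshold. The sketch below illustrates this on a scalar double integrator standing in for one attitude axis; the PD gains, threshold parameters, and disturbance model are made-up values, not those of the paper's controller.

        # Event-triggered control sketch on a scalar double integrator
        # (theta_ddot = u + disturbance) standing in for one attitude axis.
        # The control is only updated when the error between the current state and
        # the state at the last triggering instant exceeds a state-dependent threshold.
        import numpy as np

        def simulate(t_end=20.0, dt=1e-3, kp=2.0, kd=3.0, c0=0.02, c1=0.05, seed=0):
            rng = np.random.default_rng(seed)
            theta, omega = 0.5, 0.0                  # initial attitude error and rate
            x_trig = np.array([theta, omega])        # state sampled at last trigger
            u, triggers = 0.0, 0
            for _ in range(int(t_end / dt)):
                x = np.array([theta, omega])
                err = np.linalg.norm(x - x_trig)
                threshold = c0 + c1 * np.linalg.norm(x)   # state-dependent threshold
                if err >= threshold:
                    u = -kp * theta - kd * omega          # recompute & "transmit" control
                    x_trig, triggers = x.copy(), triggers + 1
                disturbance = 0.01 * rng.standard_normal()
                omega += (u + disturbance) * dt
                theta += omega * dt
            return theta, omega, triggers

        if __name__ == "__main__":
            theta, omega, n_trig = simulate()
            print(f"final error {theta:.4f}, rate {omega:.4f}, control updates {n_trig}")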

  20. Lagrangian descriptors in dissipative systems.

    PubMed

    Junginger, Andrej; Hernandez, Rigoberto

    2016-11-09

    The reaction dynamics of time-dependent systems can be resolved through a recrossing-free dividing surface associated with the transition state trajectory-that is, the unique trajectory which is bound to the barrier region for all time in response to a given time-dependent potential. A general procedure based on the minimization of Lagrangian descriptors has recently been developed by Craven and Hernandez [Phys. Rev. Lett., 2015, 115, 148301] to construct this particular trajectory without requiring perturbative expansions relative to the naive transition state point at the top of the barrier. The extension of the method to account for dissipation in the equations of motion requires additional considerations established in this paper because the calculation of the Lagrangian descriptor involves the integration of trajectories in forward and backward time. The two contributions are in general very different because the friction term can act as a source (in backward time) or sink (in forward time) of energy, leading to the possibility that information about the phase space structure may be lost due to the dominance of only one of the terms. To compensate for this effect, we introduce a weighting scheme within the Lagrangian descriptor and demonstrate that for thermal Langevin dynamics it preserves the essential phase space structures, while they are lost in the nonweighted case.

  1. The Function of Credit Scheme to Improve Family Income among Beef Cattle Farmers in Central Java Province

    NASA Astrophysics Data System (ADS)

    Prasetyo, E.; Ekowati, T.; Roessali, W.; Gayatri, S.

    2018-02-01

    The aims of the study were to: (i) identify beef cattle fattening credit schemes, (ii) calculate and analyze beef cattle farmers' income, and (iii) analyze the factors through which the credit scheme influences farmers' income. The research was held in five regencies in Central Java Province, with the beef cattle fattening farm as the elementary unit. A survey method was used, and Two Stage Cluster Purposive Sampling was used to determine the sample. Data were analyzed using quantitative descriptive and inferential statistics, in terms of income analysis and multiple linear regression models. The results showed that farmers used their own capital to run the farm, with an average amount of IDR 10,769,871. Kredit Ketahanan Pangan dan Energi was the credit scheme most commonly accessed by farmers. The average credit was IDR 23,312,200 per farmer, with a credit rate of 6.46%, a repayment period of 24.60 months, and a predicted average collateral of IDR 35,800,00. The average farmers' income was IDR 4,361,611.60 per 2.96 head of beef cattle per fattening period. If labour costs are not counted as production costs, the farmers' income was IDR 7,608,630.41, an increase of 74.44%. The credit-scheme factors with a partially significant influence on farmers' income were the amount of own capital used and the value of the credit collateral. Meanwhile, the name of the credit scheme, the financing institution acting as creditor, the amount of credit, the credit rate, and the repayment period did not significantly influence farmers' income.

  2. Hamiltonian adaptive resolution molecular dynamics simulation of infrared dielectric functions of liquids

    NASA Astrophysics Data System (ADS)

    Wang, C. C.; Tan, J. Y.; Liu, L. H.

    2018-05-01

    The Hamiltonian adaptive resolution scheme (H-AdResS), which allows one to simulate materials by treating different domains of the system at different levels of resolution, is a recently proposed atomistic/coarse-grained multiscale model. In this work, a scheme to calculate the dielectric functions of liquids within H-AdResS is presented. In the proposed H-AdResS dielectric-function calculation scheme (DielectFunctCalS), corrected molecular dipole moments are obtained by multiplying each molecular dipole moment by the weighting fraction of its molecular mapping point. Because the widths of the all-atom and hybrid regions influence the dielectric functions to different degrees, a prefactor is introduced to eliminate the effects of the all-atom and hybrid region widths. Since one goal of the H-AdResS method is to reduce computational cost, the widths of the all-atom and hybrid regions can be reduced, given that coarse-grained simulation is far less expensive than atomistic simulation. Liquid water and ethanol are taken as test cases to validate the DielectFunctCalS. The H-AdResS DielectFunctCalS results are in good agreement with all-atom molecular dynamics simulations. The accuracy of the H-AdResS results, as for all-atom molecular dynamics results, depends heavily on the choice of the force field and force field parameters. The H-AdResS DielectFunctCalS allows the dielectric functions of macromolecular systems to be calculated with high efficiency and makes dielectric function calculations of large biomolecular systems possible.

  3. Hospital financing: calculating inpatient capital costs in Germany with a comparative view on operating costs and the English costing scheme.

    PubMed

    Vogl, Matthias

    2014-04-01

    The paper analyzes the German inpatient capital costing scheme by assessing its cost module calculation. The costing scheme represents the first separated national calculation of performance-oriented capital cost lump sums per DRG. The three steps in the costing scheme are reviewed and assessed: (1) accrual of capital costs; (2) cost-center and cost category accounting; (3) data processing for capital cost modules. The assessment of each step is based on its level of transparency and efficiency. A comparative view on operating costing and the English costing scheme is given. Advantages of the scheme are low participation hurdles, low calculation effort for G-DRG calculation participants, highly differentiated cost-center/cost category separation, and advanced patient-based resource allocation. The exclusion of relevant capital costs, nontransparent resource allocation, and unclear capital cost modules, limit the managerial relevance and transparency of the capital costing scheme. The scheme generates the technical premises for a change from dual financing by insurances (operating costs) and state (capital costs) to a single financing source. The new capital costing scheme will intensify the discussion on how to solve the current investment backlog in Germany and can assist regulators in other countries with the introduction of accurate capital costing. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  5. Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.; Wedan, B. W.

    1988-01-01

    A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.

  6. A CellML simulation compiler and code generator using ODE solving schemes

    PubMed Central

    2012-01-01

    Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where the system generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt in the first stage. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065

  7. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

    Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods with all-positive symplectic coefficients are then constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
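
    A minimal structure-preserving time advance of this general type is the classical second-order Störmer-Verlet (leapfrog) scheme, which is itself a symplectic partitioned Runge-Kutta method. The sketch below applies it to the 1D acoustic wave equation written as a Hamiltonian system in (u, v = u_t); a plain three-point Laplacian stands in for the nearly-analytic discrete operators of the paper, and the bounded long-time energy drift is the hallmark property being illustrated.

        # Stormer-Verlet (a second-order symplectic partitioned Runge-Kutta scheme)
        # for the 1D acoustic wave equation u_tt = c^2 u_xx written as u_t = v,
        # v_t = c^2 u_xx. A plain 3-point Laplacian stands in for the NAD operators.
        import numpy as np

        def laplacian(u, dx):
            return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

        def verlet_step(u, v, dt, c, dx):
            v_half = v + 0.5 * dt * c**2 * laplacian(u, dx)
            u_new = u + dt * v_half
            v_new = v_half + 0.5 * dt * c**2 * laplacian(u_new, dx)
            return u_new, v_new

        if __name__ == "__main__":
            nx, c = 400, 1.0
            x = np.linspace(0.0, 1.0, nx, endpoint=False)
            dx = x[1] - x[0]
            u = np.exp(-300.0 * (x - 0.5) ** 2)
            v = np.zeros_like(u)
            dt = 0.5 * dx / c
            energy = lambda u, v: 0.5 * np.sum(v**2 + c**2 * ((np.roll(u, -1) - u) / dx) ** 2) * dx
            e0 = energy(u, v)
            for _ in range(4000):
                u, v = verlet_step(u, v, dt, c, dx)
            print("relative energy drift:", abs(energy(u, v) - e0) / e0)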

  8. Turn Your Key--Reducing Truck Idling

    ERIC Educational Resources Information Center

    MacRae, Gareth; Stockport, Tina

    2008-01-01

    As Australia enters the era of emissions trading schemes, strategies to further curb emissions will grow in importance. At the same time, a national emissions trading scheme is set to be introduced whilst the country is set to increase its dependency and volume of road transport in years to come. This raises a doubly important question for…

  9. SHOULD ONE USE THE RAY-BY-RAY APPROXIMATION IN CORE-COLLAPSE SUPERNOVA SIMULATIONS?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C., E-mail: burrows@astro.princeton.edu, E-mail: askinner@astro.princeton.edu, E-mail: jdolence@lanl.gov

    2016-11-01

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12, 15, 20, and 25 M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25 M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions, the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.

  10. Comparison among Magnus/Floquet/Fer expansion schemes in solid-state NMR.

    PubMed

    Takegoshi, K; Miyazawa, Norihiro; Sharma, Kshama; Madhu, P K

    2015-04-07

    We here revisit expansion schemes used in nuclear magnetic resonance (NMR) for the calculation of effective Hamiltonians and propagators, namely, Magnus, Floquet, and Fer expansions. While all the expansion schemes are powerful methods there are subtle differences among them. To understand the differences, we performed explicit calculation for heteronuclear dipolar decoupling, cross-polarization, and rotary-resonance experiments in solid-state NMR. As the propagator from the Fer expansion takes the form of a product of sub-propagators, it enables us to appreciate effects of time-evolution under Hamiltonians with different orders separately. While 0th-order average Hamiltonian is the same for the three expansion schemes with the three cases examined, there is a case that the 2nd-order term for the Magnus/Floquet expansion is different from that obtained with the Fer expansion. The difference arises due to the separation of the 0th-order term in the Fer expansion. The separation enables us to appreciate time-evolution under the 0th-order average Hamiltonian, however, for that purpose, we use a so-called left-running Fer expansion. Comparison between the left-running Fer expansion and the Magnus expansion indicates that the sign of the odd orders in Magnus may better be reversed if one would like to consider its effect in order.
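
    For reference, the lowest-order terms of the Magnus expansion over one period T of a time-periodic Hamiltonian, written in the average Hamiltonian form common in NMR, read as follows. These are standard textbook expressions (ħ = 1) included only to fix conventions; the sign discussion in the abstract concerns the odd-order terms of this series.

        % Zeroth- and first-order average Hamiltonians from the Magnus expansion of
        % the propagator over one period T (standard conventions, hbar = 1).
        \begin{align}
          U(T) &= \exp\!\bigl[-iT\bigl(\bar{H}^{(0)} + \bar{H}^{(1)} + \cdots\bigr)\bigr], \\
          \bar{H}^{(0)} &= \frac{1}{T}\int_0^{T} H(t)\,\mathrm{d}t, \\
          \bar{H}^{(1)} &= \frac{-i}{2T}\int_0^{T}\mathrm{d}t_2 \int_0^{t_2}\mathrm{d}t_1\,
                           \bigl[H(t_2),\,H(t_1)\bigr].
        \end{align}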

  11. Comparison among Magnus/Floquet/Fer expansion schemes in solid-state NMR

    NASA Astrophysics Data System (ADS)

    Takegoshi, K.; Miyazawa, Norihiro; Sharma, Kshama; Madhu, P. K.

    2015-04-01

    We here revisit expansion schemes used in nuclear magnetic resonance (NMR) for the calculation of effective Hamiltonians and propagators, namely, Magnus, Floquet, and Fer expansions. While all the expansion schemes are powerful methods there are subtle differences among them. To understand the differences, we performed explicit calculation for heteronuclear dipolar decoupling, cross-polarization, and rotary-resonance experiments in solid-state NMR. As the propagator from the Fer expansion takes the form of a product of sub-propagators, it enables us to appreciate effects of time-evolution under Hamiltonians with different orders separately. While 0th-order average Hamiltonian is the same for the three expansion schemes with the three cases examined, there is a case that the 2nd-order term for the Magnus/Floquet expansion is different from that obtained with the Fer expansion. The difference arises due to the separation of the 0th-order term in the Fer expansion. The separation enables us to appreciate time-evolution under the 0th-order average Hamiltonian, however, for that purpose, we use a so-called left-running Fer expansion. Comparison between the left-running Fer expansion and the Magnus expansion indicates that the sign of the odd orders in Magnus may better be reversed if one would like to consider its effect in order.

  12. Comparison among Magnus/Floquet/Fer expansion schemes in solid-state NMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takegoshi, K., E-mail: takeyan@kuchem.kyoto-u.ac.jp; Miyazawa, Norihiro; Sharma, Kshama

    2015-04-07

    We here revisit expansion schemes used in nuclear magnetic resonance (NMR) for the calculation of effective Hamiltonians and propagators, namely, Magnus, Floquet, and Fer expansions. While all the expansion schemes are powerful methods there are subtle differences among them. To understand the differences, we performed explicit calculation for heteronuclear dipolar decoupling, cross-polarization, and rotary-resonance experiments in solid-state NMR. As the propagator from the Fer expansion takes the form of a product of sub-propagators, it enables us to appreciate effects of time-evolution under Hamiltonians with different orders separately. While 0th-order average Hamiltonian is the same for the three expansion schemes with the three cases examined, there is a case that the 2nd-order term for the Magnus/Floquet expansion is different from that obtained with the Fer expansion. The difference arises due to the separation of the 0th-order term in the Fer expansion. The separation enables us to appreciate time-evolution under the 0th-order average Hamiltonian, however, for that purpose, we use a so-called left-running Fer expansion. Comparison between the left-running Fer expansion and the Magnus expansion indicates that the sign of the odd orders in Magnus may better be reversed if one would like to consider its effect in order.

  13. Time cycle analysis and simulation of material flow in MOX process layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Saraswat, A.; Danny, K.M.

    (U,Pu)O2 MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO2. The presence of high percentages of reprocessed PuO2 necessitates the design of an optimized fuel fabrication process line that addresses both production needs and regulatory norms regarding radiological safety. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of software that simulates the material movement through the optimized process layout. Different material processing schemes have been devised, and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove box location are considered invalid. A valid scheme ensures adequate spacing between the production batches while still meeting the production target. The software can be further improved by accurately calculating the material movement time through the glove box train. One important factor is accounting for material handling time with automation systems in place.

  14. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems, task 1: Ducted propfan analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1990-01-01

    The time-dependent three-dimensional Euler equations of gas dynamics were solved numerically to study the steady compressible transonic flow about ducted propfan propulsion systems. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. An implicit residual smoothing operator was used to aid convergence. Two calculation grids were employed in this study. The first grid utilized an H-type mesh network with a branch cut opening to represent the axisymmetric cowl. The second grid utilized a multiple-block mesh system with a C-type grid about the cowl. The individual blocks were numerically coupled in the Euler solver. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were initially performed for unducted propfans to verify the accuracy of the three-dimensional Euler formulation. The Euler analyses were then applied to the calculation of ducted propfan flows, and predicted results were compared with experimental data for two cases. The three-dimensional Euler analyses displayed exceptional accuracy, although certain parameters were observed to be very sensitive to geometric deflections. Both solution schemes were found to be very robust and demonstrated nearly equal efficiency and accuracy, although it was observed that the multi-block C-grid formulation provided somewhat better resolution of the cowl leading edge region.

  15. On a phase field approach for martensitic transformations in a crystal plastic material at a loaded surface

    NASA Astrophysics Data System (ADS)

    Schmitt, Regina; Kuhn, Charlotte; Müller, Ralf

    2017-07-01

    A continuum phase field model for martensitic transformations is introduced, including crystal plasticity with different slip systems for the different phases. In a 2D setting, the transformation-induced eigenstrain is taken into account for two martensitic orientation variants. With the aid of the model, the phase transition and its dependence on the volume change, crystal plastic material behavior, and the inheritance of plastic deformations from austenite to martensite are studied in detail. The numerical setup is motivated by the process of cryogenic turning. The resulting microstructure qualitatively coincides with an experimentally obtained martensite structure. For the numerical calculations, finite elements together with global and local implicit time integration schemes are employed.

  16. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    NASA Astrophysics Data System (ADS)

    Brandt, C.; Thakur, S. C.; Tynan, G. R.

    2016-04-01

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
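
    The geometric core of such a scheme, recovering a 2D phase velocity from two independent time delays measured between one reference point and two non-collinear neighbors, can be sketched as below. Here the delays are obtained from cross-correlation peaks rather than from the cross-phase spectra used in the paper, and the probe layout, wave frequency, and sampling rate are invented for the example.

        # Recover a 2D phase velocity from time delays between a reference point and
        # two non-collinear points: for a locally plane wave, tau_i = r_i . s with
        # slowness vector s, so two delays give a 2x2 linear system. Delays here come
        # from cross-correlation peaks (the paper uses cross-phase spectra instead).
        import numpy as np

        def delay_xcorr(sig_ref, sig, dt):
            lags = np.arange(-sig.size + 1, sig_ref.size)
            xc = np.correlate(sig, sig_ref, mode="full")
            return lags[np.argmax(xc)] * dt

        def velocity_from_three_points(signals, positions, dt):
            """signals: (3, nt) time series; positions: (3, 2) probe coordinates [m]."""
            r = positions[1:] - positions[0]                  # separation vectors (2x2)
            tau = np.array([delay_xcorr(signals[0], signals[k], dt) for k in (1, 2)])
            s = np.linalg.solve(r, tau)                       # slowness vector [s/m]
            return s / np.dot(s, s)                           # velocity = s / |s|^2

        if __name__ == "__main__":
            # Synthetic plane-wave test with a known velocity.
            dt, nt = 1e-6, 4000
            t = np.arange(nt) * dt
            v_true = np.array([800.0, -300.0])                # m/s
            s_true = v_true / np.dot(v_true, v_true)
            pos = np.array([[0.0, 0.0], [0.01, 0.0], [0.0, 0.01]])   # 1 cm spacing
            sig = np.array([np.sin(2e4 * 2 * np.pi * (t - pos[k] @ s_true)) for k in range(3)])
            print("recovered velocity:", velocity_from_three_points(sig, pos, dt))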

  17. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.

  18. FBILI method for multi-level line transfer

    NASA Astrophysics Data System (ADS)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.

  19. Performance analyses and improvements for the IEEE 802.15.4 CSMA/CA scheme with heterogeneous buffered conditions.

    PubMed

    Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng

    2012-01-01

    Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes-OSTS/BSTS (One Service a Time Scheme/Bulk Service a Time Scheme)-are proposed in this paper to improve the behaviors of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model which contains two modified semi-Markov chains and a macro-Markov chain combined with the theory of M/G/1/K queues to evaluate the characteristics of these two improved CSMA/CA schemes, in which traffic arrivals and accessing packets are bestowed with non-preemptive priority over each other, instead of prioritization. Then, throughput, packet delay and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted based on the overall point of view which takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are also proposed. Analysis and simulation results show that delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior to others in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match well with the simulation results.

  20. Optimization research of railway passenger transfer scheme based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Ni, Xiang

    2018-05-01

    The optimization research of railway passenger transfer schemes can provide strong support for the railway passenger transport system, and its essence is path search. This paper realizes the calculation of passenger transfer schemes for high-speed railway when the times and stations of departure and arrival are given. The specific method consists of generating a passenger transfer service network for the high-speed railway, establishing an optimization model, and searching it with an ant colony algorithm. Finally, an analysis is made of the scheme from LanZhouxi to BeiJingXi, based on the Chinese high-speed railway network of 2017. The results show that the transfer network and model have relatively high practical value and operational efficiency.

  1. XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar

    2017-04-01

    Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. 2005 is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes respectively. A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions (Chevallier et al. 2007).
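
    A toy sketch of the PCA/correction idea under stated assumptions: expensive_rt and cheap_rt are invented stand-ins for a line-by-line multiple-scattering solver and a cheap approximation, the optical properties are random numbers, and only a first-order correction in the leading principal-component scores is applied, so this is not the operational implementation of Natraj et al.

```python
import numpy as np

# Hypothetical stand-ins for the radiative transfer models: an accurate but
# expensive solver and a cheap approximation, both mapping a layer
# optical-property vector to a top-of-atmosphere radiance.
def expensive_rt(props):
    return np.exp(-props.sum()) * (1.0 + 0.3 * np.tanh(props.mean()))

def cheap_rt(props):
    return np.exp(-props.sum())

# optical properties for every spectral point (n_lambda x n_layer), here random
rng = np.random.default_rng(0)
optical = np.abs(rng.normal(0.1, 0.03, size=(2000, 20)))

# PCA of the log optical properties across the band
logX = np.log(optical)
mean = logX.mean(axis=0)
U, S, Vt = np.linalg.svd(logX - mean, full_matrices=False)
n_eof = 2
scores = U[:, :n_eof] * S[:n_eof]        # PC scores of every spectral point
eofs = Vt[:n_eof]                        # leading empirical orthogonal functions

# representative calculations: the mean profile and +/- one EOF perturbation;
# a first-order correction in the PC scores is mapped onto all spectral points
def correction(profile):
    return expensive_rt(profile) / cheap_rt(profile)

c0 = correction(np.exp(mean))
grad = np.array([(correction(np.exp(mean + eof)) -
                  correction(np.exp(mean - eof))) / 2.0 for eof in eofs])

approx = np.array([cheap_rt(p) for p in optical]) * (c0 + scores @ grad)
exact = np.array([expensive_rt(p) for p in optical])
print("max relative error:", np.max(np.abs(approx / exact - 1.0)))
```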

  2. Donor acceptor electronic couplings in π-stacks: How many states must be accounted for?

    NASA Astrophysics Data System (ADS)

    Voityuk, Alexander A.

    2006-04-01

    The two-state model is commonly used to estimate the donor-acceptor electronic coupling Vda for electron transfer. However, in some important cases, e.g. for DNA π-stacks, this scheme fails to provide accurate values of Vda because of multistate effects. The Generalized Mulliken-Hush method enables a multistate treatment of Vda. In this Letter, we analyze the dependence of calculated electronic couplings on the number of adiabatic states included in the model. We suggest a simple scheme to determine this number. The superexchange correction of the two-state approximation is shown to provide good estimates of the electronic coupling.
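
    For reference, a small sketch of the standard two-state generalized Mulliken-Hush expression that the multistate treatment generalizes; the input numbers are purely illustrative.

```python
import numpy as np

def gmh_two_state(delta_e, mu12, dmu):
    """Two-state generalized Mulliken-Hush coupling (all quantities projected
    on the charge-transfer direction, atomic units):
        V = |mu12| * delta_e / sqrt(dmu**2 + 4*mu12**2)
    delta_e : adiabatic vertical energy gap between the two states
    mu12    : adiabatic transition dipole moment
    dmu     : difference of the adiabatic state dipole moments"""
    return abs(mu12) * delta_e / np.sqrt(dmu**2 + 4.0 * mu12**2)

# illustrative, made-up numbers: a 0.12 Hartree gap, a transition dipole of
# 0.8 a.u. and a dipole-moment difference of 6 a.u.
print(gmh_two_state(0.12, 0.8, 6.0), "Hartree")
```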

  3. A photonic transistor device based on photons and phonons in a cavity electromechanical system

    NASA Astrophysics Data System (ADS)

    Jiang, Cheng; Zhu, Ka-Di

    2013-01-01

    We present a scheme for photonic transistors based on photons and phonons in a cavity electromechanical system, which is composed of a superconducting microwave cavity coupled to a nanomechanical resonator. Control of the propagation of photons is achieved through the interaction of microwave field (photons) and nanomechanical vibrations (phonons). By calculating the transmission spectrum of the signal field, we show that the signal field can be efficiently attenuated or amplified, depending on the power of a second ‘gating’ (pump) field. This scheme may be a promising candidate for single-photon transistors and pave the way for numerous applications in telecommunication and quantum information technologies.

  4. Energy use in repairs by cover concrete replacement or silane treatment for extending service life of chloride-exposed concrete structures

    NASA Astrophysics Data System (ADS)

    Petcherdchoo, A.

    2018-05-01

    In this study, the service life of repaired concrete structures under a chloride environment is predicted. The prediction considers the mechanism of chloride ion diffusion using the partial differential equation (PDE) of Fick's second law. The one-dimensional PDE cannot simply be solved when concrete structures are cyclically repaired with cover concrete replacement or silane treatment; the difficulty lies in handling the position-dependent chloride profile and diffusion coefficient after repairs. In order to remedy the difficulty, the finite difference method is used. By virtue of numerical computation, the position-dependent chloride profile can be treated position by position, and, based on the Crank-Nicolson scheme, a proper formulation embedding the position-dependent diffusion coefficient can be derived. Using this idea, position- and time-dependent chloride ion concentration profiles for concrete structures with repairs can be calculated, and their service life can be predicted. Moreover, the energy used in different repair actions is also considered for comparison. From the study, it is found that repairs can control rebar corrosion and/or concrete cracking, depending on the repair action.
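
    A minimal sketch of a Crank-Nicolson step with a node-dependent diffusion coefficient, assuming simple Dirichlet/zero-flux boundaries and face-averaged diffusivities; the grid, time step, diffusivities, and surface concentration are illustrative and not taken from the paper.

```python
import numpy as np

def crank_nicolson_step(c, D, dx, dt, c_surface):
    """One Crank-Nicolson step of dC/dt = d/dx( D(x) dC/dx ) on a uniform grid,
    with a fixed surface concentration at x=0 and a zero-flux inner boundary.
    D is given at the nodes; face values are node averages, one common way to
    handle a position-dependent diffusion coefficient."""
    n = len(c)
    Dw = np.zeros(n); De = np.zeros(n)
    Dw[1:] = 0.5 * (D[1:] + D[:-1])      # west-face diffusivities
    De[:-1] = 0.5 * (D[1:] + D[:-1])     # east-face diffusivities
    r = dt / (2.0 * dx * dx)

    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = c_surface      # Dirichlet: exposed/repaired surface
    for i in range(1, n - 1):
        A[i, i - 1] = -r * Dw[i]
        A[i, i]     = 1.0 + r * (Dw[i] + De[i])
        A[i, i + 1] = -r * De[i]
        b[i] = (r * Dw[i] * c[i - 1]
                + (1.0 - r * (Dw[i] + De[i])) * c[i]
                + r * De[i] * c[i + 1])
    A[n - 1, n - 1] = 1.0 + r * Dw[n - 1]    # zero-flux inner boundary
    A[n - 1, n - 2] = -r * Dw[n - 1]
    b[n - 1] = (1.0 - r * Dw[n - 1]) * c[n - 1] + r * Dw[n - 1] * c[n - 2]
    return np.linalg.solve(A, b)

# toy run: 50 mm cover, higher diffusivity in a hypothetical repair layer
x = np.linspace(0.0, 0.05, 51)
D = np.where(x < 0.02, 2.0e-12, 1.0e-12)     # m^2/s, illustrative values
c = np.zeros_like(x)
for _ in range(365 * 10):                    # ten years of daily steps
    c = crank_nicolson_step(c, D, x[1] - x[0], 86400.0, c_surface=0.5)
print(c[:5])
```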

  5. Adaptive finite-volume WENO schemes on dynamically redistributed grids for compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Pathak, Harshavardhana S.; Shukla, Ratnesh K.

    2016-08-01

    A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method including the moving mesh equations and the compressible flow solver is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth and especially the ninth-order WENO reconstruction allows remarkably sharp capture of discontinuous propagating shocks with simultaneous resolution of smooth yet complex small scale unsteady flow features to an exceptional detail.
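
    For orientation, a sketch of the classical fifth-order WENO (Jiang-Shu) reconstruction of a left-biased interface value, which is the building block referred to above; it is not the adaptive finite-volume solver of the paper, and the sample data are made up.

```python
import numpy as np

def weno5_left(v):
    """Classical fifth-order WENO reconstruction of the left-biased interface
    value u_{i+1/2} from the five cell averages v = (u_{i-2}, ..., u_{i+2})
    (Jiang-Shu weights, eps = 1e-6)."""
    eps = 1e-6
    # candidate third-order stencil reconstructions
    p0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6.0
    p1 = (-v[1] + 5*v[2] + 2*v[3]) / 6.0
    p2 = (2*v[2] + 5*v[3] - v[4]) / 6.0
    # smoothness indicators
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 0.25*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 0.25*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 0.25*(3*v[2]-4*v[3]+v[4])**2
    # nonlinear weights built from the ideal linear weights (1/10, 6/10, 3/10)
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# smooth samples: the reconstruction blends all three stencils
print(weno5_left(np.sin(2 * np.pi * np.array([0.0, 0.2, 0.4, 0.6, 0.8]))))
# a discontinuity: the weights shift to the smooth stencil, avoiding oscillations
print(weno5_left(np.array([1.0, 1.0, 1.0, 0.0, 0.0])))
```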

  6. Verlet scheme non-conservativeness for simulation of spherical particles collisional dynamics and method of its compensation

    NASA Astrophysics Data System (ADS)

    Savin, Andrei V.; Smirnov, Petr G.

    2018-05-01

    Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. A non-conservativeness of the finite-difference scheme, dependent on the time step, is discovered; it is equivalent to the appearance of a purely numerical energy source during a collision. A compensation method for this source is proposed and tested.
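
    A minimal sketch, under simplified assumptions, of the effect being compensated: a velocity Verlet integration of a single particle bouncing off a linear-spring wall, where the relative kinetic-energy change across the collision depends on the time step. The stiffness, mass, and time steps are illustrative, and the paper's compensation method is not reproduced here.

```python
import numpy as np

def bounce_energy_error(dt, k=1.0e4, m=1.0, v0=1.0):
    """Integrate a head-on collision of a particle against a linear-spring
    wall (active for x < 0) with the velocity Verlet scheme and return the
    relative change of kinetic energy across the collision."""
    x, v = 1.0, -v0                  # start to the right of the wall, moving toward it
    force = lambda x: -k * x if x < 0.0 else 0.0
    a = force(x) / m
    e0 = 0.5 * m * v0**2
    for _ in range(int(5.0 / dt)):
        x += v * dt + 0.5 * a * dt * dt
        a_new = force(x) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return 0.5 * m * v * v / e0 - 1.0

# the purely numerical energy source shrinks as the time step is reduced
for dt in (1e-2, 1e-3, 1e-4):
    print(dt, bounce_energy_error(dt))
```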

  7. Ab initio characterization of electron transfer coupling in photoinduced systems: generalized Mulliken-Hush with configuration-interaction singles.

    PubMed

    Chen, Hung-Cheng; Hsu, Chao-Ping

    2005-12-29

    To calculate electronic couplings for photoinduced electron transfer (ET) reactions, we propose and test the use of ab initio quantum chemistry calculation for excited states with the generalized Mulliken-Hush (GMH) method. Configuration-interaction singles (CIS) is proposed to model the locally excited (LE) and charge-transfer (CT) states. When the CT state couples with other high lying LE states, affecting coupling values, the image charge approximation (ICA), as a simple solvent model, can lower the energy of the CT state and decouple the undesired high-lying local excitations. We found that coupling strength is weakly dependent on many details of the solvent model, indicating the validity of the Condon approximation. Therefore, a trustworthy value can be obtained via this CIS-GMH scheme, with ICA used as a tool to improve and monitor the quality of the results. Systems we tested included a series of rigid, sigma-linked donor-bridge-acceptor compounds where "through-bond" coupling has been previously investigated, and a pair of molecules where "through-space" coupling was experimentally demonstrated. The calculated results agree well with experimentally inferred values in the coupling magnitudes (for both systems studied) and in the exponential distance dependence (for the through-bond series). Our results indicate that this new scheme can properly account for ET coupling arising from both through-bond and through-space mechanisms.

  8. Maximum caliber inference of nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Otten, Moritz; Stock, Gerhard

    2010-07-01

    Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
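
    A deliberately simple two-state illustration of the MaxCal logic, assuming the only constrained observable is the mean number of switches per trajectory; the trajectory length and observed switch count are invented, and the few-state folding models of the paper are not reproduced.

```python
import numpy as np

# Maximum-caliber toy: a symmetric two-state system observed to switch, on
# average, n_obs times per trajectory of T steps.  Constraining only this
# mean number of switches, the caliber-maximizing trajectory ensemble is
# P(traj) ~ exp(-lam * N_switch), which factorizes into Markov dynamics with
# a per-step switching probability q = exp(-lam) / (1 + exp(-lam)).
T, n_obs = 1000, 20.0
q = n_obs / T                        # enforcing <N_switch> = T * q = n_obs
lam = np.log((1.0 - q) / q)          # corresponding Lagrange multiplier

# with q fixed, dynamical quantities can be predicted, e.g. the waiting-time
# distribution between switches (geometric) and its mean
mean_wait = 1.0 / q
print("lambda =", lam, " mean waiting time =", mean_wait, "steps")

# sample trajectories from the inferred ensemble and verify the constraint
rng = np.random.default_rng(1)
switches = (rng.random((5000, T)) < q).sum(axis=1)
print("sampled <N_switch> =", switches.mean())
```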

  9. Calculating the costs of work-based training: the case of NHS Cadet Schemes.

    PubMed

    Norman, Ian; Normand, Charles; Watson, Roger; Draper, Jan; Jowett, Sandra; Coster, Samantha

    2008-09-01

    The worldwide shortage of registered nurses [Buchan, J., Calman, L., 2004. The Global Shortage of Registered Nurses: An Overview of Issues And Actions. International Council of Nurses, Geneva] points to the need for initiatives which increase access to the profession, in particular, to those sections of the population who traditionally do not enter nursing. This paper reports findings on the costs associated with one such initiative, the British National Health Service (NHS) Cadet Scheme, designed to provide a mechanism for entry into nurse training for young people without conventional academic qualifications. The paper illustrates an approach to costing work-based learning interventions which offsets the value attributed to trainees' work against their training costs. To provide a preliminary evaluation of the cost of the NHS Cadet Scheme initiative, a questionnaire survey of the leaders of all cadet schemes in England (n=62, 100% response) was conducted in December 2002 to collect financial information and data on progression of cadets through the scheme, followed by a questionnaire survey of the same scheme leaders to improve the quality of information, which was completed in January 2004 (n=56, 59% response). The mean cost of producing a cadet who progresses successfully through the scheme and onto a pre-registration nursing programme depends substantially on the value of their contribution to healthcare work during training and the progression rate of students through the scheme. The findings from this evaluation suggest that these factors varied very widely across the 62 schemes. Established schemes have, on average, lower attrition and higher progression rates than more recently established schemes. Using these rates, we estimate that on maturity, a cadet scheme will progress approximately 60% of students into pre-registration nurse training. As comparative information was not available from similar initiatives that provide access to nurse training, it was not possible to calculate the cost effectiveness of NHS Cadet Schemes. However, this study does show that the cadet schemes with the potential to offer better value for money are those where the progression rates are good and where the practical training of cadets is organised such that cadets meet the needs of patients which might otherwise have to be met by non-professionally qualified staff.

  10. A third-order computational method for numerical fluxes to guarantee nonnegative difference coefficients for advection-diffusion equations in a semi-conservative form

    NASA Astrophysics Data System (ADS)

    Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.

    2012-10-01

    According to the Godunov theorem for numerical calculations of advection equations, there exist no higher-order schemes with constant positive difference coefficients in the family of polynomial schemes with an accuracy exceeding first order. We propose a third-order computational scheme for numerical fluxes that guarantees non-negative difference coefficients of the resulting finite difference equations for advection-diffusion equations in a semi-conservative form, in which there exist two kinds of numerical fluxes at a cell surface and these two fluxes are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize truncation errors for the numerical fluxes while fulfilling the positivity condition of the difference coefficients, which vary depending on the local Courant number and diffusion number. The feature of the present optimized scheme is that it keeps third-order accuracy everywhere without any numerical flux limiter. We also extend the present method to multi-dimensional equations. Numerical experiments for advection-diffusion equations showed nonoscillatory solutions.
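
    To make the positivity condition concrete, a sketch of the first-order explicit upwind building block, where non-negative difference coefficients require c + 2d <= 1 in terms of the local Courant and diffusion numbers; this illustrates only the constraint, not the paper's optimized third-order flux.

```python
import numpy as np

def upwind_diffusion_step(u, a, D, dx, dt):
    """Explicit first-order upwind advection + central diffusion step for
    u_t + a u_x = D u_xx (a > 0).  The update
        u_i^{n+1} = (c + d) u_{i-1} + (1 - c - 2d) u_i + d u_{i+1}
    has non-negative difference coefficients iff  c + 2d <= 1,
    with Courant number c = a dt/dx and diffusion number d = D dt/dx^2."""
    c = a * dt / dx
    d = D * dt / dx**2
    if c + 2.0 * d > 1.0:
        raise ValueError(f"positivity violated: c + 2d = {c + 2*d:.3f} > 1")
    un = np.empty_like(u)
    un[1:-1] = (c + d) * u[:-2] + (1.0 - c - 2.0*d) * u[1:-1] + d * u[2:]
    un[0], un[-1] = u[0], u[-1]          # frozen boundary values for the demo
    return un

x = np.linspace(0.0, 1.0, 201)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # a square pulse
for _ in range(200):
    u = upwind_diffusion_step(u, a=1.0, D=1e-3, dx=x[1] - x[0], dt=2e-3)
print(u.min(), u.max())                          # stays within [0, 1]
```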

  11. Charge and spin diffusion on the metallic side of the metal-insulator transition: A self-consistent approach

    NASA Astrophysics Data System (ADS)

    Wellens, Thomas; Jalabert, Rodolfo A.

    2016-10-01

    We develop a self-consistent theory describing the spin and spatial electron diffusion in the impurity band of doped semiconductors under the effect of a weak spin-orbit coupling. The resulting low-temperature spin-relaxation time and diffusion coefficient are calculated within different schemes of the self-consistent framework. The simplest of these schemes qualitatively reproduces previous phenomenological developments, while more elaborate calculations provide corrections that approach the values obtained in numerical simulations. The results are universal for zinc-blende semiconductors with electron conductance in the impurity band, and thus they are able to account for the measured spin-relaxation times of materials with very different physical parameters. From a general point of view, our theory opens a new perspective for describing the hopping dynamics in random quantum networks.

  12. Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.

    PubMed

    Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim

    2017-12-12

    In this work, we present a new pair natural orbitals (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in case of reactions or interactions, while some larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases of the wall times (i.e., factors >10^2) due to the parallelization of the increment calculations as well as of the total times due to the application of PNOs (i.e., compared to the normal incremental scheme) but also smaller total times with respect to the standard PNO method. That way, our new method features a perfect applicability by combining an excellent accuracy with a very high efficiency as well as the accessibility to larger systems due to the separation of the full computation into several small increments.
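
    A minimal sketch of the focal-point arithmetic mentioned above, with made-up energies; the actual scheme applies this on top of incremental PNO-CCSD(T) calculations.

```python
# MP2-based focal-point idea: the expensive correlation treatment is evaluated
# in a small basis and corrected by the MP2 basis-set-limit shift.  All numbers
# below are invented and only illustrate the arithmetic.
e_mp2_small   = -0.4123   # MP2 correlation energy, small basis (Hartree)
e_mp2_cbs     = -0.4551   # MP2 correlation energy, CBS extrapolation
e_ccsdt_small = -0.4389   # CCSD(T) correlation energy, small basis

e_focal_point = e_ccsdt_small + (e_mp2_cbs - e_mp2_small)
print(f"estimated CCSD(T)/CBS correlation energy: {e_focal_point:.4f} Hartree")
```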

  13. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, using a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.

  14. Modified unified kinetic scheme for all flow regimes.

    PubMed

    Liu, Sha; Zhong, Chengwen

    2012-06-01

    A modified unified kinetic scheme for the prediction of fluid flow behaviors in all flow regimes is described. The time evolution of macrovariables at the cell interface is calculated with the idea that both free transport and collision mechanisms should be considered. The time evolution of macrovariables is obtained through the conservation constraints. The time evolution of local Maxwellian distribution is obtained directly through the one-to-one mapping from the evolution of macrovariables. These improvements provide more physical realities in flow behaviors and more accurate numerical results in all flow regimes especially in the complex transition flow regime. In addition, the improvement steps introduce no extra computational complexity.

  15. Investigation of supersonic chemically reacting and radiating channel flow

    NASA Technical Reports Server (NTRS)

    Mani, Mortaza; Tiwari, Surendra N.

    1988-01-01

    The 2-D time-dependent Navier-Stokes equations are used to investigate supersonic flows undergoing finite rate chemical reaction and radiation interaction for a hydrogen-air system. The explicit multistage finite volume technique of Jameson is used to advance the governing equations in time until convergence is achieved. The chemistry source term in the species equation is treated implicitly to alleviate the stiffness associated with fast reactions. The multidimensional radiative transfer equations for a nongray model are provided for a general configuration and then reduced for a planar geometry. Both pseudo-gray and nongray models are used to represent the absorption-emission characteristics of the participating species. The supersonic inviscid and viscous, nonreacting flows are solved by employing the finite volume technique of Jameson and the unsplit finite difference scheme of MacCormack. The specific problem considered is the flow in a channel with a 10 deg compression-expansion ramp. The calculated results are compared with those of an upwind scheme. The problems of chemically reacting and radiating flows are solved for the flow of premixed hydrogen-air through a channel with parallel boundaries, and a channel with a compression corner. Results obtained for specific conditions indicate that the radiative interaction can have a significant influence on the entire flow field.

  16. Very Efficient High-order Hyperbolic Schemes for Time-dependent Advection Diffusion Problems: Third-, Fourth-, and Sixth-order

    DTIC Science & Technology

    2014-07-07

    [Only table fragments survive in this record. The recoverable content concerns unsteady linear advection-diffusion test problems with periodic and oscillatory boundary conditions on uniform nodes, solved with the third-, fourth-, and sixth-order hyperbolic schemes using the RD-GT spatial discretization and BDF3 time discretization.]

  17. Coalescent: an open-science framework for importance sampling in coalescent theory.

    PubMed

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
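
    A generic importance-sampling sketch of the kind of estimator such frameworks evaluate, using an invented Gaussian target/proposal pair rather than coalescent genealogies; the effective-sample-size diagnostic mentioned in the conclusions is included.

```python
import numpy as np

# Importance sampling of a likelihood L = E_p[ f(G) ] over latent variables G:
# rewrite it as E_q[ f(G) p(G)/q(G) ] for a proposal q that is easy to sample.
# The target, proposal, and integrand below are made-up toy choices.
rng = np.random.default_rng(2)

def log_p(g):        # log density of the "true" latent distribution, N(0, 1)
    return -0.5 * g**2 - 0.5 * np.log(2 * np.pi)

def log_q(g):        # proposal density, a wider normal N(0, 2), sampled below
    return -0.5 * (g / 2.0)**2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

def f(g):            # contribution of a latent draw to the likelihood
    return np.exp(-0.5 * (1.0 - g)**2)

g = rng.normal(0.0, 2.0, size=100_000)
w = np.exp(log_p(g) - log_q(g))              # importance weights
estimate = np.mean(w * f(g))
ess = w.sum()**2 / np.sum(w**2)              # effective sample size
print(estimate, "effective sample size:", ess)
```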

  18. The a(4) Scheme-A High Order Neutrally Stable CESE Solver

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2009-01-01

    The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a nondissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To provide a solid foundation for a systematic CESE development of high order schemes, in this paper we describe a new high order (4-5th order) and neutrally stable CESE solver of a 1D advection equation with a constant advection speed a. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and two points at the lower time level. Because it is associated with four independent mesh variables (the numerical analogues of the dependent variable and its first, second, and third-order spatial derivatives) and four equations per mesh point, the new scheme is referred to as the a(4) scheme. As in the case of other similar CESE neutrally stable solvers, the a(4) scheme enforces conservation laws in space-time locally and globally, and it has the basic, forward marching, and backward marching forms. Except for a singular case, these forms are equivalent and satisfy a space-time inversion (STI) invariant property which is shared by the advection equation. Based on the concept of STI invariance, a set of algebraic relations is developed and used to prove the a(4) scheme must be neutrally stable when it is stable. Numerically, it has been established that the scheme is stable if the value of the Courant number is less than 1/3

  19. Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation

    NASA Astrophysics Data System (ADS)

    Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.

    2012-09-01

    The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative within fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, it drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem uxxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. By the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem ut = -uxxxx. In addition, we study the eigenvalue problem uxxxx = νuxx. This is related to the stability of the linear time-dependent equation uxxt = νuxxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.
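
    A small numerical check of the positivity argument, assuming the standard five-point discrete biharmonic stencil with a simple Dirichlet-type truncation at the ends rather than the compact scheme of the paper.

```python
import numpy as np

# Verify that the eigenvalues of a discrete 1D biharmonic operator are all
# positive, which is what underlies the stability of u_t = -u_xxxx.  The
# operator here is the standard stencil (1, -4, 6, -4, 1)/h^4 with
# out-of-range neighbours simply dropped.
n = 100                      # interior points on (0, 1)
h = 1.0 / (n + 1)
A = np.zeros((n, n))
for i in range(n):
    for j, c in zip(range(i - 2, i + 3), (1.0, -4.0, 6.0, -4.0, 1.0)):
        if 0 <= j < n:
            A[i, j] = c / h**4

eigvals = np.linalg.eigvalsh(A)
print("smallest eigenvalues:", eigvals[:3])
print("all positive:", bool(np.all(eigvals > 0)))
```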

  20. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, and it is therefore difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. Use of a description model file is helpful for the first point and partly for the second, but the third point is difficult to handle because of the variety of calculation schemes required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system which uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is presented.

  1. The Adler D-function for N = 1 SQCD regularized by higher covariant derivatives in the three-loop approximation

    NASA Astrophysics Data System (ADS)

    Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.

    2018-01-01

    We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to the order O(α_s^2). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.

  2. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  3. On multigrid solution of the implicit equations of hydrodynamics. Experiments for the compressible Euler equations in general coordinates

    NASA Astrophysics Data System (ADS)

    Kifonidis, K.; Müller, E.

    2012-08-01

    Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations, are presented to evaluate the convergence behavior, and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage-smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.

  4. A Fourier spectral-discontinuous Galerkin method for time-dependent 3-D Schrödinger-Poisson equations with discontinuous potentials

    NASA Astrophysics Data System (ADS)

    Lu, Tiao; Cai, Wei

    2008-10-01

    In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.

  5. The Solution of Large Time-Dependent Problems Using Reduced Coordinates.

    DTIC Science & Technology

    1987-06-01

    [Only fragments survive in this record. The recoverable content concerns numerical integration schemes for dynamic problems, in particular Newmark's method, and solution histories of the horizontal displacements at the mid-height and the bottom of a building (Figure 4.13).]

  6. Performance Analyses and Improvements for the IEEE 802.15.4 CSMA/CA Scheme with Heterogeneous Buffered Conditions

    PubMed Central

    Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng

    2012-01-01

    Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes-OSTS/BSTS (One Service a Time Scheme/Bulk Service a Time Scheme)-are proposed in this paper to improve the behaviors of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model which contains two modified semi-Markov chains and a macro-Markov chain combined with the theory of M/G/1/K queues to evaluate the characteristics of these two improved CSMA/CA schemes, in which traffic arrivals and accessing packets are bestowed with non-preemptive priority over each other, instead of prioritization. Then, throughput, packet delay and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted based on the overall point of view which takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are also proposed. Analysis and simulation results show that delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior to others in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match well with the simulation results. PMID:22666076

  7. Efficient full decay inversion of MRS data with a stretched-exponential approximation of the ? distribution

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.

    2012-08-01

    We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface nuclear magnetic resonance (surface NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and the multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter and helps to determine the correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space, providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with the information from an on-site borehole.
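
    A minimal sketch of fitting the stretched-exponential approximation V(t) = V0 exp(-(t/T*)^c) to a decay curve, using synthetic two-component data with invented amplitudes, relaxation times, and noise; it is not the full inversion scheme of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, v0, t_star, c):
    """Stretched-exponential decay V(t) = V0 * exp(-(t/T*)^c) used as a
    compact approximation of a multi-exponential NMR decay."""
    return v0 * np.exp(-(t / t_star) ** c)

# synthetic multi-exponential decay (amplitudes and relaxation times made up)
t = np.linspace(1e-3, 0.5, 200)                      # seconds
signal = (120.0 * np.exp(-t / 0.08) + 80.0 * np.exp(-t / 0.25)
          + np.random.default_rng(3).normal(0.0, 2.0, t.size))   # nV

popt, _ = curve_fit(stretched_exp, t, signal,
                    p0=(200.0, 0.1, 1.0), bounds=(0.0, np.inf))
v0, t_star, c = popt
print(f"V0 = {v0:.1f} nV, T* = {t_star*1e3:.1f} ms, stretching exponent c = {c:.2f}")
```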

  8. Application of Cross-Correlation Greens Function Along With FDTD for Fast Computation of Envelope Correlation Coefficient Over Wideband for MIMO Antennas

    NASA Astrophysics Data System (ADS)

    Sarkar, Debdeep; Srivastava, Kumar Vaibhav

    2017-02-01

    In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique for calculation of envelope correlation coefficient (ECC) of any arbitrary MIMO antenna system over wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits significant reduction in ECC computation time as compared to the FDTD-CGF frequency domain (FDTD-CGF-FD) scheme, for high frequency-resolution requirements. The proposed FDTD-CGF based schemes can be applied for accurate and fast prediction of wideband ECC response, instead of the conventional scattering parameter based techniques which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from commercial electromagnetic solver Ansys HFSS, to verify the validity of proposed approach.
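
    For comparison, a sketch of the conventional scattering-parameter ECC formula whose limitations are mentioned above; the S-parameter values are made up, and the FDTD-CGF far-field computation itself is not reproduced here.

```python
import numpy as np

def ecc_from_s_parameters(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port MIMO antenna from its
    scattering parameters (the conventional lossless-antenna approximation):
        ECC = |S11* S12 + S21* S22|^2 /
              ((1 - |S11|^2 - |S21|^2)(1 - |S22|^2 - |S12|^2))"""
    num = np.abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - np.abs(s11)**2 - np.abs(s21)**2)
           * (1 - np.abs(s22)**2 - np.abs(s12)**2))
    return num / den

# illustrative values for two weakly coupled, well-matched elements
print(ecc_from_s_parameters(0.1 + 0.05j, 0.2 - 0.1j, 0.2 - 0.1j, 0.1 + 0.05j))
```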

  9. Application of a symmetric total variation diminishing scheme to aerodynamics of rotors

    NASA Astrophysics Data System (ADS)

    Usta, Ebru

    2002-09-01

    The aerodynamic characteristics of rotors in hover have been studied on stretched non-orthogonal grids using spatially high order symmetric total variation diminishing (STVD) schemes. Several companion numerical viscosity terms have been tested. The effects of higher order metrics, higher order load integrations and turbulence effects on the rotor performance have been studied. Where possible, calculations for 1-D and 2-D benchmark problems have been done on uniform grids, and comparisons with exact solutions have been made to understand the dispersion and dissipation characteristics of these algorithms. A baseline finite volume methodology termed TURNS (Transonic Unsteady Rotor Navier-Stokes) is the starting point for this effort. The original TURNS solver solves the 3-D compressible Navier-Stokes equations in an integral form using a third order upwind scheme. It is first or second order accurate in time. In the modified solver, the inviscid flux at a cell face is decomposed into two parts. The first part of the flux is symmetric in space, while the second part consists of an upwind-biased numerical viscosity term. The symmetric part of the flux at the cell face is computed to fourth-, sixth- or eighth order accuracy in space. The numerical viscosity portion of the flux is computed using either a third order accurate MUSCL scheme or a fifth order WENO scheme. A number of results are presented for the two-bladed Caradonna-Tung rotor and for a four-bladed UH-60A rotor in hover. Comparisons with the original TURNS code and with experiments are given. Results are also presented on the effects of metrics calculations, load integration algorithms, and turbulence models on the solution accuracy. A total of 64 combinations were studied in this thesis work. For brevity, only a small subset of results highlighting the most important conclusions is presented. It should be noted that use of higher order formulations did not affect the temporal stability of the algorithm and did not require any reduction in the time step. The calculations show that the solution accuracy increases when the 3rd order upwind scheme in the baseline algorithm is replaced with 4th and 6th order accurate symmetric flux calculations. A point of diminishing returns is reached as increasingly larger stencils are used on highly stretched grids. The numerical viscosity term, when computed with the third order MUSCL scheme, is very dissipative and does not resolve the tip vortex well. The WENO5 scheme, on the other hand, significantly improves the tip vortex capturing. The STVD6+WENO5 scheme, in particular, gave the best combination of solution accuracy and efficiency on stretched grids. Spatially fourth order accurate metric calculations were found to be beneficial, but should be used in conjunction with a limiter that drops the metric calculation to a second order accuracy in the vicinity of grid discontinuities. High order integration of loads was found to have a beneficial, but small effect on the computed loads. Replacing the Baldwin-Lomax turbulence model with a one equation Spalart-Allmaras model resulted in higher than expected profile power contributions. Nevertheless the one-equation model is recommended for its robustness, its ability to model separated flows at high thrust settings, and the natural manner in which turbulence in the rotor wake may be treated.

  10. A simple scheme for magnetic balance in four-component relativistic Kohn-Sham calculations of nuclear magnetic resonance shielding constants in a Gaussian basis.

    PubMed

    Olejniczak, Małgorzata; Bast, Radovan; Saue, Trond; Pecul, Magdalena

    2012-01-07

    We report the implementation of nuclear magnetic resonance (NMR) shielding tensors within the four-component relativistic Kohn-Sham density functional theory including non-collinear spin magnetization and employing London atomic orbitals to ensure gauge origin independent results, together with a new and efficient scheme for assuring correct balance between the large and small components of a molecular four-component spinor in the presence of an external magnetic field (simple magnetic balance). To test our formalism we have carried out calculations of NMR shielding tensors for the HX series (X = F, Cl, Br, I, At), the Xe atom, and the Xe dimer. The advantage of simple magnetic balance scheme combined with the use of London atomic orbitals is the fast convergence of results (when compared with restricted kinetic balance) and elimination of linear dependencies in the basis set (when compared to unrestricted kinetic balance). The effect of including spin magnetization in the description of NMR shielding tensor has been found important for hydrogen atoms in heavy HX molecules, causing an increase of isotropic values of 10%, but negligible for heavy atoms.

  11. A Linearized Prognostic Cloud Scheme in NASA's Goddard Earth Observing System Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Errico, Ronald M.; Gelaro, Ronald; Kim, Jong G.; Mahajan, Rahul

    2015-01-01

    A linearized prognostic cloud scheme has been developed to accompany the linearized convection scheme recently implemented in NASA's Goddard Earth Observing System data assimilation tools. The linearization, developed from the nonlinear cloud scheme, treats cloud variables prognostically so they are subject to linearized advection, diffusion, generation, and evaporation. Four linearized cloud variables are modeled, the ice and water phases of clouds generated by large-scale condensation and, separately, by detraining convection. For each species the scheme models their sources, sublimation, evaporation, and autoconversion. Large-scale, anvil and convective species of precipitation are modeled and evaporated. The cloud scheme exhibits linearity and realistic perturbation growth, except around the generation of clouds through large-scale condensation. Discontinuities and steep gradients are widely used here and severe problems occur in the calculation of cloud fraction. For data assimilation applications this poor behavior is controlled by replacing this part of the scheme with a perturbation model. For observation impacts, where efficiency is less of a concern, a filtering is developed that examines the Jacobian. The replacement scheme is only invoked if Jacobian elements or eigenvalues violate a series of tuned constants. The linearized prognostic cloud scheme is tested by comparing the linear and nonlinear perturbation trajectories for 6-, 12-, and 24-h forecast times. The tangent linear model performs well and perturbations of clouds are well captured for the lead times of interest.

  12. Traversing the folding pathway of proteins using temperature-aided cascade molecular dynamics with conformation-dependent charges.

    PubMed

    Jani, Vinod; Sonavane, Uddhavesh; Joshi, Rajendra

    2016-07-01

    Protein folding is a multi-microsecond time scale event and involves many conformational transitions. Crucial conformational transitions responsible for the biological functions of biomolecules are difficult to capture using current state-of-the-art molecular dynamics (MD) simulations. Protein folding, being a stochastic process, witnesses these transitions as rare events. Many new methodologies have been proposed for observing these rare events. In this work, a temperature-aided cascade MD is proposed as a technique for studying the conformational transitions. Folding studies for the Engrailed homeodomain and Immunoglobulin domain B of protein A have been carried out. Using this methodology, unfolded structures with an RMSD of 20 Å were folded to a structure with an RMSD of 2 Å. Three sets of cascade MD runs were carried out using implicit solvation, explicit solvation, and a charge updation scheme. In the charge updation scheme, charges based on the obtained conformation are calculated and updated in the topology file. In all the simulations, the 2 Å structure was reached within a few nanoseconds using these methods. Umbrella sampling has been performed using snapshots from the temperature-aided cascade MD simulation trajectory to build an entire conformational transition pathway. The advantage of the method is that the possible pathways for a particular reaction can be explored within a short duration of simulation time, and the disadvantage is that knowledge of the start and end states is required. The charge updation scheme adds polarization effects to the force field. This improves the electrostatic interaction among the atoms, which may help the protein to fold faster.

  13. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  14. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  15. Temperature dependences of saturated vapor pressure and the enthalpy of vaporization of n-pentyl esters of dicarboxylic acids

    NASA Astrophysics Data System (ADS)

    Portnova, S. V.; Krasnykh, E. L.; Levanova, S. V.

    2016-05-01

    The saturated vapor pressures and enthalpies of vaporization of n-pentyl esters of linear C2-C6 dicarboxylic acids are determined by the transpiration method in the temperature range of 309.2-361.2 K. The dependences of the enthalpies of vaporization on the number of carbon atoms in the molecule and on the retention indices have been determined. The predictive capabilities of existing calculation schemes for estimating the enthalpy of vaporization of the studied compounds have been analyzed.
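
    For illustration only (the numbers below are synthetic, not the measured data), vaporization enthalpies are commonly extracted from transpiration-method vapor pressures via a Clausius-Clapeyron fit, ln p = a - b/T with the vaporization enthalpy approximately equal to b·R; a minimal sketch:

        import numpy as np

        R = 8.314  # J mol^-1 K^-1

        # synthetic vapor pressures (Pa) over the paper's temperature window (K)
        T = np.array([309.2, 320.0, 335.0, 350.0, 361.2])
        p = np.array([0.8, 2.1, 6.5, 17.0, 33.0])

        slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)  # ln p = intercept + slope/T
        dH_vap = -slope * R                                   # J/mol, since slope = -dH/R
        print(f"estimated vaporization enthalpy: {dH_vap / 1000:.0f} kJ/mol")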

  16. A pHorseshoe

    NASA Astrophysics Data System (ADS)

    Plumsky, Roger

    1999-07-01

    Students often find pH calculations boring and irrelevant, and many memorize calculation schemes without understanding. It has been suggested that such topics should be dropped from the first-year chemistry course. I have found an approach that involves memorization but enables students of average ability to achieve understanding through its use and subsequent development of the topic. Providing students with a memorization scheme at the outset eliminates initial frustration and discouragement and leads to better understanding by the time the unit has been developed. Understanding is developed after the students have successfully calculated conversions between pH and hydrogen ion concentration. In this article the pHorseshoe is described not only for the purpose of sharing with other teachers who might find it useful but also to explain why, in this case at least, memorization is the servant of understanding and not a substitute for it.
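
    The conversions the article drills are plain base-10 logarithms; a minimal check of that arithmetic (this is generic chemistry, not the pHorseshoe diagram itself):

        import math

        def pH_from_concentration(h3o):          # hydronium concentration in mol/L
            return -math.log10(h3o)

        def concentration_from_pH(pH):
            return 10.0 ** (-pH)

        assert abs(pH_from_concentration(1.0e-3) - 3.0) < 1e-12    # 0.001 M -> pH 3
        assert abs(concentration_from_pH(11.0) - 1.0e-11) < 1e-24  # pH 11 -> 1e-11 M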

  17. Two-dimensional Euler and Navier-Stokes Time accurate simulations of fan rotor flows

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent, Euler and the compressible, turbulent, time-dependent, Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low Reynolds number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness to the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2,5. The numerical integration algorithm allows the reduction of the computational cost of an unsteady simulation involving high frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time is required to advance the Euler equations on a computational grid of about 2000 grid points during 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.

  18. Application of an efficient hybrid scheme for aeroelastic analysis of advanced propellers

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Sankar, N. L.; Reddy, T. S. R.; Huff, D. L.

    1989-01-01

    An efficient 3-D hybrid scheme is applied for solving Euler equations to analyze advanced propellers. The scheme treats the spanwise direction semi-explicitly and the other two directions implicitly, without affecting the accuracy, as compared to a fully implicit scheme. This leads to a reduction in computer time and memory requirement. The calculated power coefficients for two advanced propellers, SR3 and SR7L, at various advance ratios showed good correlation with experiment. Spanwise distributions of elemental power coefficient and steady pressure coefficient differences also showed good agreement with experiment. A study of the effect of structural flexibility on the performance of the advanced propellers showed that structural deformation due to centrifugal and aero loading should be included for better correlation.

  19. Modulation limit of semiconductor lasers by some parametric modulation schemes

    NASA Astrophysics Data System (ADS)

    Iga, K.

    1985-07-01

    Using simple rate equations and small-signal analysis, the modulation speed limit of semiconductor lasers under modulation schemes such as gain switching, modulation of the nonradiative recombination lifetime of minority carriers, and cavity Q modulation is calculated and compared with the injection modulation scheme of Ikegami and Suematsu (1968). It is found that the maximum modulation frequency for the gain and Q modulation can exceed the resonance-like frequency by a factor equal to the coefficient of the time derivative of the modulation parameter, though the nonradiative lifetime modulation is not shown to differ from injection modulation. A solution for the carrier lifetime modulation of LEDs is obtained, and the possibility of wideband modulation in this scheme is demonstrated.

  20. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  1. Transverse Momentum-Dependent Parton Distributions from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Engelhardt, M.; Musch, B.; Hägler, P.; Negele, J.; Schäfer, A.

    Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.

  2. Transverse Momentum-Dependent Parton Distributions From Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Engelhardt, Bernhard Musch, Philipp Haegler, Andreas Schaefer

    Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.

  3. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes of the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide high time-dependent spectral resolution.
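
    A minimal sketch of the underlying idea, under simplifying assumptions (ordinary least squares over an arbitrary user-supplied basis; the paper's multi-scale wavelet basis and FOLS/mutual-information term selection are not reproduced here): expand each time-varying AR coefficient on basis functions and estimate the expansion weights from the data.

        import numpy as np

        def tvar_basis_fit(y, order, basis):
            """Fit y(t) = sum_i a_i(t) y(t-i) + e(t) with a_i(t) = sum_j c_ij basis_j(t).
            `basis` has shape (len(y), n_basis); returns the c_ij matrix."""
            rows, targets = [], []
            for t in range(order, len(y)):
                past = y[t - order:t][::-1]            # y(t-1), ..., y(t-order)
                rows.append(np.kron(past, basis[t]))   # regressors for all c_ij
                targets.append(y[t])
            c, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
            return c.reshape(order, basis.shape[1])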

  4. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  5. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  6. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  7. Correlated electron-nuclear dynamics with conditional wave functions.

    PubMed

    Albareda, Guillermo; Appel, Heiko; Franco, Ignacio; Abedi, Ali; Rubio, Angel

    2014-08-22

    The molecular Schrödinger equation is rewritten in terms of nonunitary equations of motion for the nuclei (or electrons) that depend parametrically on the configuration of an ensemble of generally defined electronic (or nuclear) trajectories. This scheme is exact and does not rely on the tracing out of degrees of freedom. Hence, the use of trajectory-based statistical techniques can be exploited to circumvent the calculation of the computationally demanding Born-Oppenheimer potential-energy surfaces and nonadiabatic coupling elements. The concept of the potential-energy surface is restored by establishing a formal connection with the exact factorization of the full wave function. This connection is used to gain insight from a simplified form of the exact propagation scheme.

  8. Event-Based H∞ State Estimation for Time-Varying Stochastic Dynamical Networks With State- and Disturbance-Dependent Noises.

    PubMed

    Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E

    2017-10-01

    In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.

  9. Diffusion of Conserved Charges in Relativistic Heavy Ion Collisions

    NASA Astrophysics Data System (ADS)

    Greif, Moritz; Fotakis, Jan. A.; Denicol, Gabriel S.; Greiner, Carsten

    2018-06-01

    We demonstrate that the diffusion currents do not depend only on gradients of their corresponding charge density, but that the different diffusion charge currents are coupled. This happens in such a way that it is possible for density gradients of a given charge to generate dissipative currents of another charge. Within this scheme, the charge diffusion coefficient is best viewed as a matrix, in which the diagonal terms correspond to the usual charge diffusion coefficients, while the off-diagonal terms describe the coupling between the different currents. In this Letter, we calculate for the first time the complete diffusion matrix for hot and dense nuclear matter, including baryon, electric, and strangeness charges. We find that the baryon diffusion current is strongly affected by baryon charge gradients but also by its coupling to gradients in strangeness. The electric charge diffusion current is found to be strongly affected by electric and strangeness gradients, whereas strangeness currents depend mostly on strange and baryon gradients.

  10. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
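
    The splitting idea can be sketched with a generic reversible two-level multiple-time-step (r-RESPA style) integrator; the fast and slow force callables stand in for the fragment or range-separated ab initio splittings discussed above, and the paper's actual implementation is not reproduced:

        import numpy as np

        def mts_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
            """One reversible two-level multiple-time-step update for m*x'' = f."""
            dt_inner = dt_outer / n_inner
            v = v + 0.5 * dt_outer * f_slow(x) / m          # slow half-kick
            for _ in range(n_inner):                        # fast velocity-Verlet sub-steps
                v = v + 0.5 * dt_inner * f_fast(x) / m
                x = x + dt_inner * v
                v = v + 0.5 * dt_inner * f_fast(x) / m
            v = v + 0.5 * dt_outer * f_slow(x) / m          # slow half-kick
            return x, v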

  11. First-principles supercell calculations of small polarons with proper account for long-range polarization effects

    NASA Astrophysics Data System (ADS)

    Kokott, Sebastian; Levchenko, Sergey V.; Rinke, Patrick; Scheffler, Matthias

    2018-03-01

    We present a density functional theory (DFT) based supercell approach for modeling small polarons with proper account for the long-range elastic response of the material. Our analysis of the supercell dependence of the polaron properties (e.g., atomic structure, binding energy, and the polaron level) reveals long-range electrostatic effects and the electron–phonon (el–ph) interaction as the two main contributors. We develop a correction scheme for DFT polaron calculations that significantly reduces the dependence of polaron properties on the DFT exchange-correlation functional and the size of the supercell in the limit of strong el–ph coupling. Using our correction approach, we present accurate all-electron full-potential DFT results for small polarons in rocksalt MgO and rutile TiO2.

  12. Analysis/forecast experiments with a multivariate statistical analysis scheme using FGGE data

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    A three-dimensional, multivariate, statistical analysis method, optimal interpolation (OI) is described for modeling meteorological data from widely dispersed sites. The model was developed to analyze FGGE data at the NASA-Goddard Laboratory of Atmospherics. The model features a multivariate surface analysis over the oceans, including maintenance of the Ekman balance and a geographically dependent correlation function. Preliminary comparisons are made between the OI model and similar schemes employed at the European Center for Medium Range Weather Forecasts and the National Meteorological Center. The OI scheme is used to provide input to a GCM, and model error correlations are calculated for forecasts of 500 mb vertical water mixing ratios and the wind profiles. Comparisons are made between the predictions and measured data. The model is shown to be as accurate as a successive corrections model out to 4.5 days.

  13. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed, achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero in finite time. A reconstruction performance with both precise and fast properties can thus be provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the actuator dynamic model, the fault type, or the fault time profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  14. Horizontal vectorization of electron repulsion integrals.

    PubMed

    Pritchard, Benjamin P; Chow, Edmond

    2016-10-30

    We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (lA lB|lC lD) quartets when lD = 0 or lB = lD = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed. © 2016 Wiley Periodicals, Inc.

  15. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
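
    The "double-sweep method" for the small tridiagonal systems mentioned above is commonly the Thomas algorithm; a generic sketch (not the model's source code) for a system with sub-diagonal a, diagonal b, super-diagonal c, and right-hand side d:

        import numpy as np

        def thomas_solve(a, b, c, d):
            """Forward-elimination / back-substitution (double sweep) for a
            tridiagonal system; a[0] and c[-1] are unused."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                              # forward sweep
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):                     # backward sweep
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x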

  16. Signatures of ecological processes in microbial community time series.

    PubMed

    Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie

    2018-06-28

    Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.

  17. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
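
    A least-squares polynomial surface fit that replaces double interpolation can be sketched with NumPy's 2-D Vandermonde helpers; plain monomials are used here instead of the report's orthogonal polynomials, so this is only an illustration of the idea:

        import numpy as np
        from numpy.polynomial import polynomial as P

        def fit_surface(x, y, z, deg=(3, 3)):
            """Least-squares fit z ~ sum_ij c_ij x^i y^j over scattered (x, y, z) data."""
            V = P.polyvander2d(x, y, deg)                 # design matrix
            c, *_ = np.linalg.lstsq(V, z, rcond=None)
            return c.reshape(deg[0] + 1, deg[1] + 1)

        def eval_surface(c, x, y):
            return P.polyval2d(x, y, c)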

  18. Task 7: ADPAC User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple- block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modem turbomachinery configurations.

  19. Time-Efficient High-Rate Data Flooding in One-Dimensional Acoustic Underwater Sensor Networks

    PubMed Central

    Kwon, Jae Kyun; Seo, Bo-Min; Yun, Kyungsu; Cho, Ho-Shin

    2015-01-01

    Because underwater communication environments have poor characteristics, such as severe attenuation, large propagation delays and narrow bandwidths, data is normally transmitted at low rates through acoustic waves. On the other hand, as high traffic has recently been required in diverse areas, high rate transmission has become necessary. In this paper, transmission/reception timing schemes that maximize the time axis use efficiency to improve the resource efficiency for high rate transmission are proposed. The excellence of the proposed scheme is identified by examining the power distributions by node, rate bounds, power levels depending on the rates and number of nodes, and network split gains through mathematical analysis and numerical results. In addition, the simulation results show that the proposed scheme outperforms the existing packet train method. PMID:26528983

  20. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
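
    The central finite-difference baseline those sensitivity schemes are compared against is, for each design variable, dF/dx_i ≈ [F(x + h e_i) - F(x - h e_i)] / (2h); a generic sketch in which the flow-solver callable F and the step size are placeholders:

        import numpy as np

        def central_fd_gradient(F, x, h=1e-6):
            """First-order sensitivities of a scalar output F (e.g. lift or drag
            returned by a converged flow solve) with respect to design variables x."""
            x = np.asarray(x, dtype=float)
            grad = np.empty_like(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e[i] = h
                grad[i] = (F(x + e) - F(x - e)) / (2.0 * h)   # two solves per variable
            return grad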

  1. Transverse-momentum-dependent quark distribution functions of spin-one targets: Formalism and covariant calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ninomiya, Yu; Bentz, Wolfgang; Cloet, Ian C.

    In this paper, we present a covariant formulation and model calculations of the leading-twist time-reversal even transverse-momentum-dependent quark distribution functions (TMDs) for a spin-one target. Emphasis is placed on a description of these three-dimensional distribution functions which is independent of any constraints on the spin quantization axis. We apply our covariant spin description to all nine leading-twist time-reversal even ρ meson TMDs in the framework provided by the Nambu–Jona-Lasinio model, incorporating important aspects of quark confinement via the infrared cutoff in the proper-time regularization scheme. In particular, the behaviors of the three-dimensional TMDs in a tensor polarized spin-one hadron are illustrated. Sum rules and positivity constraints are discussed in detail. Our results do not exhibit the familiar Gaussian behavior in the transverse momentum, and other results of interest include the finding that the tensor polarized TMDs—associated with spin-one hadrons—are very sensitive to quark orbital angular momentum, and that the TMDs associated with the quark operator γ⁺γTγ₅ would vanish were it not for dynamical chiral symmetry breaking. In addition, we find that 44% of the ρ meson's spin is carried by the orbital angular momentum of the quarks, and that the magnitude of the tensor polarized quark distribution function is about 30% of the unpolarized quark distribution. Finally, a qualitative comparison between our results for the tensor structure of a quark-antiquark bound state is made to existing experimental and theoretical results for the two-nucleon (deuteron) bound state.

  2. Transverse-momentum-dependent quark distribution functions of spin-one targets: Formalism and covariant calculations

    DOE PAGES

    Ninomiya, Yu; Bentz, Wolfgang; Cloet, Ian C.

    2017-10-24

    In this paper, we present a covariant formulation and model calculations of the leading-twist time-reversal even transverse-momentum-dependent quark distribution functions (TMDs) for a spin-one target. Emphasis is placed on a description of these three-dimensional distribution functions which is independent of any constraints on the spin quantization axis. We apply our covariant spin description to all nine leading-twist time-reversal even ρ meson TMDs in the framework provided by the Nambu–Jona-Lasinio model, incorporating important aspects of quark confinement via the infrared cutoff in the proper-time regularization scheme. In particular, the behaviors of the three-dimensional TMDs in a tensor polarized spin-one hadron are illustrated. Sum rules and positivity constraints are discussed in detail. Our results do not exhibit the familiar Gaussian behavior in the transverse momentum, and other results of interest include the finding that the tensor polarized TMDs—associated with spin-one hadrons—are very sensitive to quark orbital angular momentum, and that the TMDs associated with the quark operator γ⁺γTγ₅ would vanish were it not for dynamical chiral symmetry breaking. In addition, we find that 44% of the ρ meson's spin is carried by the orbital angular momentum of the quarks, and that the magnitude of the tensor polarized quark distribution function is about 30% of the unpolarized quark distribution. Finally, a qualitative comparison between our results for the tensor structure of a quark-antiquark bound state is made to existing experimental and theoretical results for the two-nucleon (deuteron) bound state.

  3. Viscosity Relaxation in Molten HgZnTe

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua; Lehoczky, S. L.; Kim, Yeong Woo; Baird, James K.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Rotating cup measurements of the viscosity of the pseudo-binary melt HgZnTe have shown that the isothermal liquid with zinc mole fraction 0.16 requires tens of hours of equilibration time before a steady viscous state can be achieved. Over this relaxation period, the viscosity at 790 C increases by a factor of two, while the viscosity at 810 C increases by 40%. Noting that the Group VI elements tend to polymerize when molten, we suggest that the viscosity of the melt is enhanced by the slow formation of Te atom chains. To explain the build-up of linear Te n-mers, we propose a scheme that contains formation reactions with second order kinetics, which increase the molecular weight, and decomposition reactions with first order kinetics, which inactivate the chains. The resulting rate equations can be solved for the time dependence of each molecular weight fraction. Using these molecular weight fractions, we calculate the time dependence of the average molecular weight. Using the standard semi-empirical relation between polymer average molecular weight and viscosity, we then calculate the viscosity relaxation curve. By curve fitting, we find that the data imply that the rate constant for n-mer formation is much smaller than the rate constant for n-mer deactivation, suggesting that Te atoms only weakly polymerize in molten HgZnTe. The steady state toward which the melt relaxes occurs as the rate of formation of an n-mer becomes exactly balanced by the sum of the rate for its deactivation and the rate for its polymerization to form an (n+1)-mer.
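
    The proposed rate-equation structure can be sketched as follows (a toy version with placeholder rate constants and a finite chain-length cutoff, not the fitted model): chains grow by second-order addition of monomer and are deactivated by a first-order step, and a proxy for the average molecular weight follows from the resulting concentrations.

        import numpy as np
        from scipy.integrate import solve_ivp

        def chain_kinetics(t_span, c0, kf=1e-3, kd=1e-2):
            """Toy n-mer scheme: A_n + A_1 -> A_(n+1) at rate kf (second order),
            A_n -> inactive at rate kd (first order).  c0 = [A_1, ..., A_nmax]."""
            def rhs(t, c):
                growth = kf * c * c[0]        # rate of A_n + A_1 -> A_(n+1), n = 1..nmax
                dc = -growth - kd * c         # consumed by own growth and deactivation
                dc[1:] += growth[:-1]         # produced from the (n-1)-mer
                dc[0] -= growth.sum()         # monomer consumed in every addition
                return dc
            return solve_ivp(rhs, t_span, c0, dense_output=True)

        sol = chain_kinetics((0.0, 3600.0), np.r_[1.0, np.zeros(19)])
        n = np.arange(1, 21)
        mean_length = n @ sol.y[:, -1] / sol.y[:, -1].sum()   # mean chain length at final time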

  4. Towards a Low-Cost Remote Memory Attestation for the Smart Grid

    PubMed Central

    Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing

    2015-01-01

    In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes detecting compromised devices depend on the incremental response time in the attestation process, which are sensitive to data transmission delay and lead to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes. PMID:26307998

  5. Towards a Low-Cost Remote Memory Attestation for the Smart Grid.

    PubMed

    Yang, Xinyu; He, Xiaofei; Yu, Wei; Lin, Jie; Li, Rui; Yang, Qingyu; Song, Houbing

    2015-08-21

    In the smart grid, measurement devices may be compromised by adversaries, and their operations could be disrupted by attacks. A number of schemes to efficiently and accurately detect these compromised devices remotely have been proposed. Nonetheless, most of the existing schemes detecting compromised devices depend on the incremental response time in the attestation process, which are sensitive to data transmission delay and lead to high computation and network overhead. To address the issue, in this paper, we propose a low-cost remote memory attestation scheme (LRMA), which can efficiently and accurately detect compromised smart meters considering real-time network delay and achieve low computation and network overhead. In LRMA, the impact of real-time network delay on detecting compromised nodes can be eliminated via investigating the time differences reported from relay nodes. Furthermore, the attestation frequency in LRMA is dynamically adjusted with the compromised probability of each node, and then, the total number of attestations could be reduced while low computation and network overhead can be achieved. Through a combination of extensive theoretical analysis and evaluations, our data demonstrate that our proposed scheme can achieve better detection capacity and lower computation and network overhead in comparison to existing schemes.

  6. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

    Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. In combination with basic concepts in rough set theory, the importance of the attributes in alerts is calculated first. With the resulting attribute importances, we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated or not. The time interval is also taken into consideration: the allowed time interval is computed individually for each alert type, since different types of alerts may have different time gaps between two alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset, and the results of the experiment show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of the security system can avoid wasting time on useless alerts.
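
    A minimal sketch of the aggregation test described above (the attribute weights would come from the rough-set importance computation, and the threshold and per-type time window are placeholders, not the paper's tuned values):

        def can_aggregate(alert_a, alert_b, weights, sim_threshold=0.8, max_gap=None):
            """Merge decision for two alerts given as dicts of attribute -> value,
            each with a 'timestamp' field (seconds)."""
            if max_gap is not None and abs(alert_a["timestamp"] - alert_b["timestamp"]) > max_gap:
                return False                       # outside the allowed time interval
            matched = sum(w for attr, w in weights.items()
                          if alert_a.get(attr) == alert_b.get(attr))
            similarity = matched / sum(weights.values())
            return similarity >= sim_threshold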

  7. Multiple burn fuel-optimal orbit transfers: Numerical trajectory computation and neighboring optimal feedback guidance

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.

    1995-01-01

    This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.

  8. A far-field non-reflecting boundary condition for two-dimensional wake flows

    NASA Technical Reports Server (NTRS)

    Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli

    1995-01-01

    Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far field solution. The boundary improves convergence to steady state in single-grid temporal integration schemes using both regular-time-stepping and local-time-stepping. The far-field boundary may be near the trailing edge of the body which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition the solution produced is smoother in the far-field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.

  9. In vivo measurement of the longitudinal relaxation time of arterial blood (T1a) in the mouse using a pulsed arterial spin labeling approach.

    PubMed

    Thomas, David L; Lythgoe, Mark F; Gadian, David G; Ordidge, Roger J

    2006-04-01

    A novel method for measuring the longitudinal relaxation time of arterial blood (T1a) is presented. Knowledge of T1a is essential for accurately quantifying cerebral perfusion using arterial spin labeling (ASL) techniques. The method is based on the flow-sensitive alternating inversion recovery (FAIR) pulsed ASL (PASL) approach. We modified the standard FAIR acquisition scheme by incorporating a global saturation pulse at the beginning of the recovery period. With this approach the FAIR tissue signal difference has a simple monoexponential dependence on the recovery time, with T1a as the time constant. Therefore, FAIR measurements performed over a range of recovery times can be fitted to a monoexponential recovery curve and T1a can be calculated directly. This eliminates many of the difficulties associated with the measurement of T1a. Experiments performed in vivo in the mouse at 2.35T produced a mean value of 1.51 s for T1a, consistent with previously published values. (c) 2006 Wiley-Liss, Inc.
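
    The fitting step can be sketched as a nonlinear least-squares fit of the FAIR signal difference to a monoexponential in the recovery time; the signal model and the numbers below are simplified and illustrative, not the measured data or the paper's exact expression:

        import numpy as np
        from scipy.optimize import curve_fit

        def fair_difference(t_rec, amplitude, T1a):
            """Simplified model: signal difference with time constant T1a (s)."""
            return amplitude * np.exp(-t_rec / T1a)

        t_rec = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0])       # recovery times (s)
        dS = np.array([0.88, 0.72, 0.52, 0.37, 0.26, 0.14])    # illustrative differences (a.u.)

        popt, _ = curve_fit(fair_difference, t_rec, dS, p0=(1.0, 1.5))
        print(f"T1a ~ {popt[1]:.2f} s")   # close to the reported 1.51 s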

  10. Two moment dust and water ice in the MarsWRF GCM

    NASA Astrophysics Data System (ADS)

    Lee, Christopher; Richardson, Mark I.; Newman, Claire E.; Mischna, Michael A.

    2016-10-01

    A new two moment dust and water ice microphysics scheme has been developed for the MarsWRF General Circulation Model based on the Morrison and Gettelman (2008) scheme, and includes temperature dependent nucleation processes and energetically constrained condensation and evaporation. Dust consumed in the formation of water ice is also tracked by the model. The two moment dust scheme simulates dust particles in the Martian atmosphere using a Gamma distribution with fixed radius for lifted particles. Within the atmosphere the particle distribution is advected and sedimented within the two moment framework, obviating the requirement for lossy conversion between the continuous Gamma distribution and the discretized bins found in some Mars microphysics schemes. Water ice is simulated using the same Gamma distribution and advected and sedimented in the same way. Water ice nucleation occurs heterogeneously onto dust particles with temperature dependent contact parameters (e.g. Trainer et al., 2009), and condensation and evaporation follow energetic constraints (e.g. Pruppacher and Klett, 1980; Montmessin et al., 2002), allowing water ice particles to grow in size where necessary. Dust particles are tracked within the ice cores as nucleation occurs, and dust cores advect and sediment along with their parent ice particle distributions. Radiative properties of dust and water particles are calculated as a function of the effective radius of the particles and the distribution width. The new microphysics scheme requires 5 tracers to be tracked as the moments of the dust, water ice, and ice core. All microphysical processes are simulated entirely within the two moment framework without any discretization of particle sizes. The effect of this new microphysics scheme on dust and water ice cloud distribution will be discussed and compared with observations from TES and MCS.
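
    A standard two-moment diagnostic consistent with the description above can be sketched as follows (the particle density and shape parameter are placeholders, not the MarsWRF settings): given the number and mass moments and a fixed shape, recover the Gamma-distribution slope and the effective radius used for the radiative properties.

        from math import gamma, pi

        def gamma_distribution_diagnostics(n, q, rho_p=2500.0, mu=0.0):
            """For n(r) = N0 r^mu exp(-lam r): slope lam and effective radius
            r_eff = (mu + 3) / lam from number concentration n (#/kg) and mass
            mixing ratio q (kg/kg); rho_p is the particle density (kg/m^3)."""
            lam = ((4.0 / 3.0) * pi * rho_p * n * gamma(mu + 4.0)
                   / (q * gamma(mu + 1.0))) ** (1.0 / 3.0)
            return lam, (mu + 3.0) / lam

        lam, r_eff = gamma_distribution_diagnostics(n=1.0e8, q=1.0e-5)   # illustrative values
        print(f"effective radius ~ {r_eff * 1e6:.1f} micrometres")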

  11. Applying the Explicit Time Central Difference Method for Numerical Simulation of the Dynamic Behavior of Elastoplastic Flexible Reinforced Plates

    NASA Astrophysics Data System (ADS)

    Yankovskii, A. P.

    2017-12-01

    Based on a stepwise algorithm involving central finite differences for the approximation in time, a mathematical model is developed for the elastoplastic deformation of cross-reinforced plates with isotropically hardening component materials. The model allows obtaining the solution of elastoplastic problems at discrete points in time by an explicit scheme. The initial boundary value problem of the dynamic behavior of flexible plates reinforced in their own plane is formulated in the von Kármán approximation with allowance for their weakened resistance to transverse shear. With a common approach, the resolving equations corresponding to two variants of the Timoshenko theory are obtained. An explicit "cross" scheme for numerical integration of the posed initial boundary value problem has been constructed. The scheme is consistent with the incremental algorithm used for simulating the elastoplastic behavior of a reinforced medium. Calculations of the dynamic behavior have been performed for the elastoplastic cylindrical bending of differently reinforced fiberglass rectangular elongated plates. It is shown that the reinforcement structure significantly affects their elastoplastic dynamic behavior. It has been found that the classical theory of plates is, as a rule, unacceptable for carrying out the required calculations (except for very thin plates), and the first version of the Timoshenko theory yields reasonable results only in cases of relatively thin constructions reinforced by low-modulus fibers. Proceeding from the results of the work, it is recommended to use the second variant of the Timoshenko theory (as a more accurate one) for calculations of the elastoplastic behavior of reinforced plates.
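
    The explicit central ("cross") time marching follows the standard second-order pattern; a generic sketch for a semi-discrete system m·u'' = f(u), not the reinforced-plate equations themselves:

        import numpy as np

        def central_difference_march(f, m, u0, v0, dt, n_steps):
            """u_{n+1} = 2 u_n - u_{n-1} + dt^2 f(u_n)/m, with a Taylor start-up step.
            dt must respect the usual Courant-type stability limit of explicit schemes."""
            u_prev = u0 - dt * v0 + 0.5 * dt**2 * f(u0) / m
            u = u0.copy()
            history = [u0.copy()]
            for _ in range(n_steps):
                u_next = 2.0 * u - u_prev + dt**2 * f(u) / m
                u_prev, u = u, u_next
                history.append(u.copy())
            return np.array(history)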

  12. Fast local-MP2 method with density-fitting for crystals. II. Test calculations and application to the carbon dioxide crystal

    NASA Astrophysics Data System (ADS)

    Usvyat, Denis; Maschio, Lorenzo; Manby, Frederick R.; Casassa, Silvia; Schütz, Martin; Pisani, Cesare

    2007-08-01

    A density fitting scheme for calculating electron repulsion integrals used in local second order Møller-Plesset perturbation theory for periodic systems (DFP) is presented. Reciprocal space techniques are systematically adopted, for which the use of Poisson fitting functions turned out to be instrumental. The role of the various parameters (truncation thresholds, density of the k net, Coulomb versus overlap metric, etc.) on computational times and accuracy is explored, using as test cases primitive-cell and conventional-cell diamond, proton-ordered ice, crystalline carbon dioxide, and a three-layer slab of magnesium oxide. Timings and results obtained when the electron repulsion integrals are calculated without invoking the DFP approximation are taken as the reference. It is shown that our DFP scheme is both accurate and very efficient once properly calibrated. The lattice constant and cohesion energy of the CO2 crystal are computed to illustrate the capability of providing a physically correct description also for weakly bound crystals, in strong contrast to present density functional approaches.

  13. Further analytical study of hybrid rocket combustion

    NASA Technical Reports Server (NTRS)

    Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.

    1972-01-01

    Analytical studies of the transient and steady-state combustion processes in a hybrid rocket system are discussed. The particular system chosen consists of a gaseous oxidizer flowing within a tube of solid fuel, resulting in heterogeneous combustion. Finite rate chemical kinetics with appropriate reaction mechanisms were incorporated in the model. A temperature dependent Arrhenius type fuel surface regression rate equation was chosen for the current study. The governing mathematical equations employed for the reacting gas phase and for the solid phase are the general, two-dimensional, time-dependent conservation equations in a cylindrical coordinate system. Keeping the simplifying assumptions to a minimum, these basic equations were programmed for numerical computation, using two finite-difference schemes: the Lax-Wendroff scheme for the gas phase and the Crank-Nicolson scheme for the solid phase.

  14. An adaptive moving finite volume scheme for modeling flood inundation over dry and complex topography

    NASA Astrophysics Data System (ADS)

    Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui

    2013-04-01

    A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme moves a smaller number of meshes adaptively in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertices' positions, a geometrical conservative interpolation to remap the flow variables by summing the total mass over old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for a new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme preserved still water equilibrium and positivity of water depth within both the mesh movement and PDE discretization steps; (ii) it improved the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it was able to solve the shallow water equations with relatively higher accuracy and spatial resolution at a lower computational cost.

  15. In-beam γ-ray spectroscopy of ⁶³Mn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baugher, T.; Gade, A.; Janssens, R. V. F.

    2016-01-01

    Background: Neutron-rich, even-mass chromium and iron isotopes approaching neutron number N = 40 have been important benchmarks in the development of shell-model effective interactions incorporating the effects of shell evolution in the exotic regime. Odd-mass manganese nuclei have received less attention, but provide important and complementary sensitivity to these interactions. Purpose: We report the observation of two new γ-ray transitions in ⁶³Mn, which establish the (9/2⁻) and (11/2⁻) levels on top of the previously known (7/2⁻) first-excited state. The lifetimes of the (7/2⁻) and (9/2⁻) excited states were determined for the first time, while an upper limit could be established for the (11/2⁻) level. Method: Excited states in ⁶³Mn have been populated in inelastic scattering from a ⁹Be target and in the fragmentation of ⁶⁵Fe. γγ coincidence relationships were used to establish the decay level scheme. A Doppler line-shape analysis for the Doppler-broadened (7/2⁻) → 5/2⁻, (9/2⁻) → (7/2⁻), and (11/2⁻) → (9/2⁻) transitions was used to determine (limits for) the corresponding excited-state lifetimes. Results: The low-lying level scheme and the excited-state lifetimes were compared with large-scale shell-model calculations using different model spaces and effective interactions in order to isolate important aspects of shell evolution in this region of structural change. Conclusions: While the theoretical (7/2⁻) and (9/2⁻) excitation energies show little dependence on the model space, the calculated lifetime of the (7/2⁻) level and the calculated energy of the (11/2⁻) level reveal the importance of including the neutron g9/2 and d5/2 orbitals in the model space. The LNPS effective shell-model interaction provides the best overall agreement with the new data.

  16. On the impact of topography and building mask on time varying gravity due to local hydrology

    NASA Astrophysics Data System (ADS)

    Deville, S.; Jacob, T.; Chéry, J.; Champollion, C.

    2013-01-01

    We use 3 yr of surface absolute gravity measurements at three sites on the Larzac plateau (France) to quantify the changes induced by topography and the building on gravity time-series, with respect to an idealized infinite slab approximation. Indeed, local topography and buildings housing ground-based gravity measurements have an effect on the distribution of water storage changes, therefore affecting the associated gravity signal. We first calculate the effects of surrounding topography and building dimensions on the gravity attraction for a uniform layer of water. We show that a gravimetric interpretation of water storage change using an infinite slab, the so-called Bouguer approximation, is generally not suitable. We propose to split the time varying gravity signal into two parts: (1) a surface component including topographic and building effects; (2) a deep component associated with underground water transfer. A reservoir modelling scheme is herein presented to remove the local site effects and to invert for the effective hydrological properties of the unsaturated zone. We show that the effective time constants associated with water transfer vary greatly from site to site. We propose that our modelling scheme can be used to correct for the local site effects on gravity at any site presenting a departure from a flat topography. Depending on the site, the corrected signal can exceed measured values by 5-15 μGal, corresponding to 120-380 mm of water using the Bouguer slab formula. Our approach only requires the knowledge of daily precipitation corrected for evapotranspiration. Therefore, it can be a useful tool to correct any kind of gravimetric time-series data.
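
    The infinite-slab (Bouguer) conversion quoted in the closing sentence is Δg = 2πGρ_w·h; a quick check reproduces the 120-380 mm of water equivalent for 5-15 μGal:

        from math import pi

        G = 6.674e-11       # m^3 kg^-1 s^-2
        RHO_WATER = 1000.0  # kg/m^3

        def bouguer_water_mm(delta_g_microgal):
            """Water-layer thickness (mm) producing delta_g for an infinite slab."""
            delta_g = delta_g_microgal * 1e-8            # 1 microGal = 1e-8 m/s^2
            return delta_g / (2.0 * pi * G * RHO_WATER) * 1e3

        print(bouguer_water_mm(5.0), bouguer_water_mm(15.0))   # ~119 mm and ~358 mm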

  17. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the renormalization scheme and scale dependence of the strong coupling and of the perturbative coefficients does not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {βm ≥2}-terms. We adopt two four-loop examples, e+e-→hadrons and τ decays into hadrons, for detailed analysis. Our results show that, under conventional scale setting, including more and more loop terms does not reduce the scheme dependence of the pQCD prediction as efficiently as it reduces the scale dependence. Thus a proper scale-setting approach is important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, as it provides a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of the "unphysical" theoretical conventions.

  18. A robust cooperative spectrum sensing scheme based on Dempster-Shafer theory and trustworthiness degree calculation in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru

    2014-12-01

    Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also retain the current difference of each SU to achieve better real-time performance. Abundant simulation results have validated that the proposed scheme outperforms the existing ones under the impact of different attack patterns and different numbers of malicious SUs.
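
    For readers unfamiliar with the final combination step, the Python sketch below implements Dempster's rule of combination for two hypothetical SU reports over an idle/occupied frame of discernment; the BPA construction, trustworthiness weighting, and 'soft update' of the proposed scheme are not reproduced here, and the numbers are made up.

      # Toy sketch of Dempster's rule of combination for two secondary users' reports.
      # Focal sets: "H0" (channel idle), "H1" (channel occupied), "ANY" = {H0, H1} (ignorance).

      def dempster_combine(m1: dict, m2: dict) -> dict:
          """Combine two basic probability assignments over the focal sets 'H0', 'H1', 'ANY'."""
          def intersect(a, b):
              if a == "ANY":
                  return b
              if b == "ANY":
                  return a
              return a if a == b else None      # None means empty intersection (conflict)

          combined = {"H0": 0.0, "H1": 0.0, "ANY": 0.0}
          conflict = 0.0
          for a, ma in m1.items():
              for b, mb in m2.items():
                  inter = intersect(a, b)
                  if inter is None:
                      conflict += ma * mb
                  else:
                      combined[inter] += ma * mb
          # Normalise by the non-conflicting mass (Dempster's rule).
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      su1 = {"H0": 0.1, "H1": 0.7, "ANY": 0.2}   # hypothetical BPA reports
      su2 = {"H0": 0.2, "H1": 0.6, "ANY": 0.2}
      print(dempster_combine(su1, su2))          # mass on H1 grows to 0.85 after combination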

  19. DYNAMICAL ACCRETION OF PRIMORDIAL ATMOSPHERES AROUND PLANETS WITH MASSES BETWEEN 0.1 AND 5 M⊕ IN THE HABITABLE ZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stökl, Alexander; Dorfi, Ernst A.; Johnstone, Colin P.

    2016-07-10

    In the early, disk-embedded phase of evolution of terrestrial planets, a protoplanetary core can accumulate gas from the circumstellar disk into a planetary envelope. In order to relate the accumulation and structure of this primordial atmosphere to the thermal evolution of the planetary core, we calculated atmosphere models characterized by the surface temperature of the core. We considered cores with masses between 0.1 and 5 M⊕ situated in the habitable zone around a solar-like star. The time-dependent simulations in 1D-spherical symmetry include the hydrodynamics equations, gray radiative transport, and convective energy transport. Using an implicit time integration scheme, we can take large time steps and thus efficiently cover evolutionary timescales. Our results show that planetary atmospheres, when considered with reference to a fixed core temperature, are not necessarily stable, and multiple solutions may exist for one core temperature. As the structure and properties of nebula-embedded planetary atmospheres are an inherently time-dependent problem, we calculated estimates for the amount of primordial atmosphere by simulating the accretion process of disk gas onto planetary cores and the subsequent evolution of the embedded atmospheres. The temperature of the planetary core is thereby determined from the computation of the internal energy budget of the core. For cores more massive than about one Earth mass, we find that a comparatively short duration of the disk-embedded phase (∼10^5 years) is sufficient for the accumulation of significant amounts of hydrogen atmosphere that are unlikely to be removed by later atmospheric escape processes.

  20. Communication: Estimating the initial biasing potential for λ-local-elevation umbrella-sampling (λ-LEUS) simulations via slow growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieler, Noah S.; Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch

    2014-11-28

    In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006–3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method requires “filling up” all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent, and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS reduces the preoptimization time by about a factor of four.

  1. Communication: Estimating the initial biasing potential for λ-local-elevation umbrella-sampling (λ-LEUS) simulations via slow growth

    NASA Astrophysics Data System (ADS)

    Bieler, Noah S.; Hünenberger, Philippe H.

    2014-11-01

    In a recent article [Bieler et al., J. Chem. Theory Comput. 10, 3006-3022 (2014)], we introduced a combination of the λ-dynamics (λD) approach for calculating alchemical free-energy differences and of the local-elevation umbrella-sampling (LEUS) memory-based biasing method to enhance the sampling along the alchemical coordinate. The combined scheme, referred to as λ-LEUS, was applied to the perturbation of hydroquinone to benzene in water as a test system, and found to represent an improvement over thermodynamic integration (TI) in terms of sampling efficiency at equivalent accuracy. However, the preoptimization of the biasing potential required in the λ-LEUS method requires "filling up" all the basins in the potential of mean force. This introduces a non-productive pre-sampling time that is system-dependent, and generally exceeds the corresponding equilibration time in a TI calculation. In this letter, a remedy is proposed to this problem, termed the slow growth memory guessing (SGMG) approach. Instead of initializing the biasing potential to zero at the start of the preoptimization, an approximate potential of mean force is estimated from a short slow growth calculation, and its negative is used to construct the initial memory. Considering the same test system as in the preceding article, it is shown that the application of SGMG in λ-LEUS reduces the preoptimization time by about a factor of four.
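
    A minimal Python sketch of the SGMG idea is given below, under the assumption that one slow-growth sweep provides sampled values of dU/dλ along a monotonically increasing λ path; the function name and interface are hypothetical, not those of the authors' code.

      # Sketch: build an initial LEUS memory as the negative of a slow-growth PMF estimate.
      import numpy as np

      def initial_memory_from_slow_growth(lam_samples, dU_dlam_samples, lam_grid):
          """Trapezoidal integration of dU/dlambda samples from one slow-growth sweep
          (lambda assumed increasing), interpolated onto the bias grid; the negative of
          this PMF estimate seeds the biasing memory instead of starting from zero."""
          lam = np.asarray(lam_samples, dtype=float)
          dudl = np.asarray(dU_dlam_samples, dtype=float)
          pmf = np.concatenate(([0.0],
                                np.cumsum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam))))
          return -np.interp(lam_grid, lam, pmf)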

  2. Computing UV/vis spectra using a combined molecular dynamics and quantum chemistry approach: bis-triazin-pyridine (BTP) ligands studied in solution.

    PubMed

    Höfener, Sebastian; Trumm, Michael; Koke, Carsten; Heuser, Johannes; Ekström, Ulf; Skerencak-Frech, Andrej; Schimmelpfennig, Bernd; Panak, Petra J

    2016-03-21

    We report a combined computational and experimental study to investigate the UV/vis spectra of 2,6-bis(5,6-dialkyl-1,2,4-triazin-3-yl)pyridine (BTP) ligands in solution. In order to study molecules in solution using theoretical methods, force-field parameters for the ligand-water interaction are adjusted to ab initio quantum chemical calculations. Based on these parameters, molecular dynamics (MD) simulations are carried out, from which snapshots are extracted as input to quantum chemical excitation-energy calculations to obtain UV/vis spectra of BTP ligands in solution using time-dependent density functional theory (TDDFT) employing the Tamm-Dancoff approximation (TDA). The range-separated CAM-B3LYP functional is used to avoid large errors for charge-transfer states occurring in the electronic spectra. In order to study environment effects with theoretical methods, the frozen-density embedding scheme is applied. This computational procedure allows electronic spectra to be obtained at the (range-separated) DFT level of theory in solution, revealing solvatochromic shifts upon solvation of up to about 0.6 eV. Comparison with experimental data shows significantly improved agreement relative to vacuum calculations and enables the analysis of the relevant excitations for the line shape in solution.

  3. A General and Efficient Method for Incorporating Precise Spike Times in Globally Time-Driven Simulations

    PubMed Central

    Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus

    2010-01-01

    Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
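
    The retrospective detection idea can be illustrated with a toy leaky integrate-and-fire model in Python: after each global time step, a threshold crossing in the immediately preceding interval is detected and the spike time is recovered by linear interpolation. This is only a schematic of the principle, not the implementation discussed in the paper; the model parameters are arbitrary.

      # Minimal sketch of retrospective spike-time detection in a time-driven scheme.
      import numpy as np

      def lif_spike_times(inputs, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
          """Toy leaky integrate-and-fire neuron (tau dv/dt = -v + tau*i) stepped with the
          exact per-step propagator for piecewise-constant input; spike times are found
          retrospectively by linear interpolation inside the step that crossed threshold."""
          decay = np.exp(-dt / tau)
          v, t, spikes = 0.0, 0.0, []
          for i_ext in inputs:
              v_old = v
              v = v_old * decay + (1.0 - decay) * tau * i_ext   # exact step for constant input
              if v >= v_th:                                     # crossing happened in (t, t + dt]
                  frac = (v_th - v_old) / (v - v_old)           # linear interpolation of crossing
                  spikes.append(t + frac * dt)
                  v = v_reset
              t += dt
          return spikes

      print(lif_spike_times([0.15] * 200))   # regular spiking for a constant suprathreshold drive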

  4. Second- and third-order upwind difference schemes for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Yang, J. Y.

    1984-01-01

    Second- and third-order two time-level five-point explicit upwind-difference schemes are described for the numerical solution of hyperbolic systems of conservation laws and applied to the Euler equations of inviscid gas dynamics. Nonlinear smoothing techniques are used to make the schemes total variation diminishing. In this method, both the hyperbolicity and the conservation properties of the hyperbolic conservation laws are combined in a very natural way by introducing a normalized Jacobian matrix of the hyperbolic system. Entropy-satisfying shock transition operators which are consistent with the upwind differencing are locally introduced when transonic shock transition is detected. Schemes thus constructed are suitable for shock-capturing calculations. The stability and the global order of accuracy of the proposed schemes are examined. Numerical experiments for the inviscid Burgers equation and the compressible Euler equations in one and two space dimensions involving various situations of aerodynamic interest are included and compared.

  5. Frequency response control of semiconductor laser by using hybrid modulation scheme.

    PubMed

    Mieda, Shigeru; Yokota, Nobuhide; Isshiki, Ryuto; Kobayashi, Wataru; Yasaka, Hiroshi

    2016-10-31

    A hybrid modulation scheme that simultaneously applies direct current modulation and intra-cavity loss modulation to a semiconductor laser is proposed. Both numerical calculations using rate equations and experiments using a fabricated laser show that the hybrid modulation scheme can control the frequency response of the laser by changing the modulation ratio and the time delay between the two modulations. The modulation ratio and time delay set the degree of signal mixing of the two modulations, and an optimum condition is found when the non-flat frequency response of the intra-cavity loss modulation is compensated by that of the direct current modulation. We experimentally confirm an 8.64-dB improvement of the modulation sensitivity at 20 GHz compared with pure direct current modulation with a 0.7-dB relaxation oscillation peak.

  6. Higher-order quantum-chromodynamic corrections to the longitudinal coefficient function in deep-inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, G.A.

    1982-01-01

    A calculation of the nonsinglet longitudinal coefficient function of deep-inelastic scattering through order g^4 is presented, using the operator-product expansion and the renormalization group. Both ultraviolet and infrared divergences are regulated with dimensional regularization. The renormalization scheme dependence of the result is discussed along with its phenomenological application in the determination of R = σ_L/σ_T.

  7. Performance tuning of N-body codes on modern microprocessors: I. Direct integration with a hermite scheme on x86_64 architecture

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro; Hut, Piet

    2006-12-01

    The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N^2 type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops, for double-precision accuracy. In subsequent papers, we will discuss other variations, including the combinations of N log N codes, single-precision implementations, and performance on other microprocessors.
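
    For reference, the plain (unoptimised) pairwise kernel that such tuned codes accelerate looks roughly like the following Python sketch, which returns the softened Newtonian acceleration and its time derivative (the "jerk") needed by a Hermite integrator; the softening value is arbitrary.

      # Reference O(N^2) pairwise acceleration and jerk for a direct N-body Hermite scheme.
      import numpy as np

      def acc_jerk(pos, vel, mass, eps2=1e-4):
          """pos, vel: (N, 3) arrays; mass: length-N array. Softened Newtonian gravity (G = 1)."""
          n = len(mass)
          acc = np.zeros((n, 3))
          jerk = np.zeros((n, 3))
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  dr = pos[j] - pos[i]
                  dv = vel[j] - vel[i]
                  r2 = dr @ dr + eps2
                  r3 = r2 * np.sqrt(r2)
                  rv = (dr @ dv) / r2
                  acc[i] += mass[j] * dr / r3                       # a_i = sum_j m_j r_ij / r^3
                  jerk[i] += mass[j] * (dv - 3.0 * rv * dr) / r3    # da_i/dt for the Hermite predictor
          return acc, jerk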

  8. Projector Augmented Wave formulation of orbital-dependent exchange-correlation functionals

    NASA Astrophysics Data System (ADS)

    Xu, Xiao; Holzwarth, N. A. W.

    2012-02-01

    The use of orbital-dependent exchange-correlation functionals within electronic structure calculations has recently received renewed attention for improving the accuracy of the calculations, especially correcting self-interaction errors. Since the Projector Augmented Wave (PAW) method [P. Blöchl, Phys. Rev. B 50, 17953 (1994)] is an efficient pseudopotential-like scheme which ensures accurate evaluation of all multipole moments of direct and exchange Coulomb integrals, it is a natural choice for implementing orbital-dependent formalisms. Using Fock exchange as an example of an orbital-dependent functional, we developed the formulation and numerical implementation of the approximate optimized effective potential formalism of Krieger, Li, and Iafrate (KLI) [J. B. Krieger, Y. Li, and G. J. Iafrate, Phys. Rev. A 45, 101 (1992)] within the PAW method, comparing results with the analogous Hartree-Fock treatment [Xiao Xu and N. A. W. Holzwarth, Phys. Rev. B 81, 245105 (2010); 84, 155113 (2011)]. Test results are presented for ground state properties of two well-known materials -- diamond and LiF. This formalism can be extended to treat orbital-dependent functionals more generally.

  9. A fast iterative scheme for the linearized Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Wu, Lei; Zhang, Jun; Liu, Haihu; Zhang, Yonghao; Reese, Jason M.

    2017-06-01

    Iterative schemes to find steady-state solutions to the Boltzmann equation are efficient for highly rarefied gas flows, but can be very slow to converge in the near-continuum flow regime. In this paper, a synthetic iterative scheme is developed to speed up the solution of the linearized Boltzmann equation by penalizing the collision operator L into the form L = (L + Nδh) - Nδh, where δ is the gas rarefaction parameter, h is the velocity distribution function, and N is a tuning parameter controlling the convergence rate. The velocity distribution function is first solved by the conventional iterative scheme, then it is corrected such that the macroscopic flow velocity is governed by a diffusion-type equation that is asymptotic-preserving into the Navier-Stokes limit. The efficiency of this new scheme is assessed by calculating the eigenvalue of the iteration, as well as solving for Poiseuille and thermal transpiration flows. We find that the fastest convergence of our synthetic scheme for the linearized Boltzmann equation is achieved when Nδ is close to the average collision frequency. The synthetic iterative scheme is significantly faster than the conventional iterative scheme in both the transition and the near-continuum gas flow regimes. Moreover, due to its asymptotic-preserving properties, the synthetic iterative scheme does not need high spatial resolution in the near-continuum flow regime, which makes it even faster than the conventional iterative scheme. Using this synthetic scheme, with the fast spectral approximation of the linearized Boltzmann collision operator, Poiseuille and thermal transpiration flows between two parallel plates, through channels of circular/rectangular cross sections and various porous media are calculated over the whole range of gas rarefaction. Finally, the flow of a Ne-Ar gas mixture is solved based on the linearized Boltzmann equation with the Lennard-Jones intermolecular potential for the first time, and the difference between these results and those using the hard-sphere potential is discussed.
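
    The penalisation idea alone (not the macroscopic diffusion correction that makes the scheme "synthetic") can be illustrated on a toy linear problem: split an operator A as (A + N I) − N I and iterate with the shifted operator treated implicitly, as in the Python sketch below, where N plays the role of the tuning parameter. This is only a schematic of the splitting, not the scheme of the paper.

      # Toy illustration of operator penalisation: solve A x = b via
      # x_{k+1} = (A + N*I)^{-1} (b + N x_k); for symmetric positive definite A
      # the iteration converges for any N > 0, with N controlling the rate.
      import numpy as np

      def penalized_iteration(A, b, N=1.0, n_iter=200):
          n = len(b)
          shifted = A + N * np.eye(n)
          x = np.zeros(n)
          for _ in range(n_iter):
              x = np.linalg.solve(shifted, b + N * x)
          return x

      A = np.array([[2.0, -1.0], [-1.0, 2.0]])
      b = np.array([1.0, 0.0])
      print(penalized_iteration(A, b), np.linalg.solve(A, b))   # both give [2/3, 1/3]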

  10. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user's trajectory data; secondly, it forms a polygon from each protected point together with the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroid; finally, noise is added to the polygon centroids by the differential privacy method, the noisy centroids replace the protected points, and the algorithm then constructs and issues the new trajectory data. The experiments show that the running time of the proposed algorithms is short, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
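
    A minimal sketch of the centroid-perturbation step is given below, assuming the polygon has already been built and using a placeholder sensitivity value; it simply averages the polygon vertices and adds Laplace noise with scale sensitivity/ε, the standard ε-differential-privacy mechanism. The point-selection and polygon-construction logic of the paper is not reproduced.

      # Sketch of the noise-addition step: replace a protected point by a polygon centroid
      # perturbed with Laplace noise. Sensitivity and epsilon are placeholders.
      import numpy as np

      def laplace_centroid(polygon_points, epsilon=0.5, sensitivity=0.01):
          """Vertex centroid of the polygon plus Laplace noise of scale sensitivity/epsilon."""
          pts = np.asarray(polygon_points, dtype=float)
          centroid = pts.mean(axis=0)                  # simple average of the vertices
          scale = sensitivity / epsilon
          return centroid + np.random.laplace(0.0, scale, size=centroid.shape)

      poly = [(30.251, 120.112), (30.253, 120.115), (30.249, 120.117)]  # hypothetical coordinates
      print(laplace_centroid(poly))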

  11. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messud, J.; Dinh, P. M.; Suraud, Eric

    2009-10-15

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent 'generalized SIC-OEP'. A straightforward approximation, using the spatial localization of one set of orbitals, leads to the 'generalized SIC-Slater' formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  12. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    NASA Astrophysics Data System (ADS)

    Messud, J.; Dinh, P. M.; Reinhard, P.-G.; Suraud, Eric

    2009-10-01

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent “generalized SIC-OEP.” A straightforward approximation, using the spatial localization of one set of orbitals, leads to the “generalized SIC-Slater” formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  13. Improvement of time-delayed feedback control by periodic modulation: analytical theory of Floquet mode control scheme.

    PubMed

    Just, Wolfram; Popovich, Svitlana; Amann, Andreas; Baba, Nilüfer; Schöll, Eckehard

    2003-02-01

    We investigate time-delayed feedback control schemes which are based on the unstable modes of the target state, to stabilize unstable periodic orbits. The periodic time dependence of these modes introduces an external time scale in the control process. Phase shifts that develop between these modes and the controlled periodic orbit may lead to a huge increase of the control performance. We illustrate such a feature on a nonlinear reaction diffusion system with global coupling and give a detailed investigation for the Rössler model. In addition we provide the analytical explanation for the observed control features.

  14. Explicit and implicit calculations of turbulent cavity flows with and without yaw angle

    NASA Astrophysics Data System (ADS)

    Yen, Guan-Wei

    1989-08-01

    Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged complete Navier-Stokes equations using two different schemes: (1) the explicit MacCormack finite-difference scheme, and (2) an implicit, upwind, finite-volume scheme. The second scheme, which is approximately 30 percent faster, is found to produce better time-accurate results. The Reynolds stresses were modeled using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time-averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero-yaw flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.

  15. Explicit and implicit calculations of turbulent cavity flows with and without yaw angle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Yen, Guan-Wei

    1989-01-01

    Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged complete Navier-Stokes equations using two different schemes: (1) the explicit MacCormack finite-difference scheme, and (2) an implicit, upwind, finite-volume scheme. The second scheme, which is approximately 30 percent faster, is found to produce better time-accurate results. The Reynolds stresses were modeled using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time-averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero-yaw flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.

  16. Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization

    NASA Astrophysics Data System (ADS)

    More, Sushant N.

    New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced from basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties. We use a class of unitary transformations called the similarity renormalization group (SRG) transformations to systematically study the scale dependence of factorization for the simplest knockout process of deuteron electrodisintegration. We find that the extent of scale dependence depends strongly on kinematics, but in a systematic way. We find a relatively weak scale dependence at the quasi-free kinematics that gets progressively stronger as one moves away from the quasi-free region. Based on examination of the relevant overlap matrix elements, we are able to qualitatively explain and even predict the nature of scale dependence based on the kinematics under consideration.

  17. Macroscopic dielectric function within time-dependent density functional theory—Real time evolution versus the Casida approach

    NASA Astrophysics Data System (ADS)

    Sander, Tobias; Kresse, Georg

    2017-02-01

    Linear optical properties can be calculated by solving the time-dependent density functional theory equations. Linearization of the equation of motion around the ground state orbitals results in the so-called Casida equation, which is formally very similar to the Bethe-Salpeter equation. Alternatively, one can determine the spectral functions by applying an infinitely short electric field in time and then following the evolution of the electron orbitals and the evolution of the dipole moments. The long-wavelength response function is then given by the Fourier transformation of the evolution of the dipole moments in time. In this work, we compare the results and performance of these two approaches for the projector augmented wave method. To allow for large time steps and still rely on a simple difference scheme to solve the differential equation, we correct for the errors in the frequency domain, using a simple analytic equation. In general, we find that both approaches yield virtually indistinguishable results. For standard density functionals, the time evolution approach is clearly superior to the solution of the Casida equation with respect to computational performance. However, for functionals including nonlocal exchange, the direct solution of the Casida equation is usually much more efficient, even though it scales less favorably with the system size. We relate this to the large computational prefactors in evaluating the nonlocal exchange, which render the time evolution algorithm fairly inefficient.
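
    The post-processing step of the real-time approach can be sketched as follows: the dipole signal after a delta-kick is damped and Fourier transformed to give the frequency-dependent response. The array names, kick strength, and damping constant below are placeholders, not values from the paper.

      # Sketch of real-time spectrum extraction: Fourier transform of the dipole response
      # to a delta-kick of strength `kick`, with exponential damping for smoothness.
      import numpy as np

      def absorption_spectrum(mu_t, dt, kick, damping=0.1):
          """Return angular frequencies and the imaginary part of the response built from a
          dipole time series mu_t sampled with step dt (schematic, atomic-unit style)."""
          mu = np.asarray(mu_t, dtype=float)
          t = np.arange(len(mu)) * dt
          signal = (mu - mu[0]) * np.exp(-damping * t)   # subtract static dipole, damp the tail
          omega = np.fft.rfftfreq(len(mu), d=dt) * 2.0 * np.pi
          alpha = np.fft.rfft(signal) * dt / kick        # response ~ FT[delta mu](omega) / kick
          return omega, alpha.imag

      # Usage: omega, spectrum = absorption_spectrum(mu_t, dt=0.02, kick=0.01)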

  18. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
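
    For a single 1D patch, the locally constrained step being discussed is just the usual CFL bound, as in this small Python sketch; the point of the paper is that this bound must additionally be coordinated between neighbouring patches so that waves cannot outrun the local time stepping.

      # Locally constrained explicit time step for one 1D patch (CFL bound).
      import numpy as np

      def cfl_time_step(u, c_sound, dx, cfl=0.5):
          """Largest stable explicit step for a patch with flow velocities u and sound speeds c_sound."""
          max_speed = np.max(np.abs(u) + c_sound)   # fastest signal speed in the patch
          return cfl * dx / max_speed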

  19. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification. PMID:25152923

  20. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
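
    A schematic of such a coupled iteration, with a placeholder growth law for the number of histories and a placeholder relaxation factor rather than the optimised schedule derived in the paper, might look like the following Python sketch; the two solver callables stand in for the Monte Carlo and thermal-hydraulics codes.

      # Schematic coupled Monte Carlo / thermal-hydraulics iteration with relaxation
      # (placeholder schedules; not the paper's optimised choices).

      def coupled_iteration(run_monte_carlo, run_thermal_hydraulics, p0, n_iter=10, n0=10_000):
          p_relaxed = list(p0)
          th_state = None
          for i in range(1, n_iter + 1):
              n_hist = n0 * i                 # histories grow with the iteration index (placeholder law)
              alpha = 1.0 / i                 # relaxation factor shrinks correspondingly (placeholder)
              p_mc = run_monte_carlo(th_state, n_hist)
              p_relaxed = [(1.0 - alpha) * pr + alpha * pm
                           for pr, pm in zip(p_relaxed, p_mc)]
              th_state = run_thermal_hydraulics(p_relaxed)
          return p_relaxed, th_state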

  1. A parallelization method for time periodic steady state in simulation of radio frequency sheath dynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun

    2017-10-01

    In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size for fluid transport simulations of high-density plasma discharges. Also, the semi-implicit method is a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low-temperature plasma discharges has remained a considerable challenge. In particular, parallelization in time has been difficult for time-periodic steady-state problems, such as capacitively coupled plasma discharges and rf sheath dynamics, because values of the plasma parameters from the previous time step are used to calculate new values at each time step. Therefore, we present a parallelization method for time-periodic steady-state problems using period-slices. In order to evaluate the efficiency of the developed method, one-dimensional fluid simulations are conducted to describe rf sheath dynamics. The result shows that speedup can be achieved by using a multithreading method.

  2. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.

  3. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large as or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as for the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass-dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.

  4. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  5. Rigorous-two-Steps scheme of TRIPOLI-4® Monte Carlo code validation for shutdown dose rate calculation

    NASA Astrophysics Data System (ADS)

    Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime

    2017-09-01

    After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely developed for fusion applications and more precisely for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; the results are in good agreement with those of the other participants.

  6. Full dimensional (15-dimensional) quantum-dynamical simulation of the protonated water-dimer III: Mixed Jacobi-valence parametrization and benchmark results for the zero point energy, vibrationally excited states, and infrared spectrum.

    PubMed

    Vendrell, Oriol; Brill, Michael; Gatti, Fabien; Lauvergnat, David; Meyer, Hans-Dieter

    2009-06-21

    Quantum dynamical calculations are reported for the zero point energy, several low-lying vibrational states, and the infrared spectrum of the H(5)O(2)(+) cation. The calculations are performed by the multiconfiguration time-dependent Hartree (MCTDH) method. A new vector parametrization based on a mixed Jacobi-valence description of the system is presented. With this parametrization the potential energy surface coupling is reduced with respect to a full Jacobi description, providing a better convergence of the n-mode representation of the potential. However, new coupling terms appear in the kinetic energy operator. These terms are derived and discussed. A mode-combination scheme based on six combined coordinates is used, and the representation of the 15-dimensional potential in terms of a six-combined mode cluster expansion including up to some 7-dimensional grids is discussed. A statistical analysis of the accuracy of the n-mode representation of the potential at all orders is performed. Benchmark, fully converged results are reported for the zero point energy, which lie within the statistical uncertainty of the reference diffusion Monte Carlo result for this system. Some low-lying vibrationally excited eigenstates are computed by block improved relaxation, illustrating the applicability of the approach to large systems. Benchmark calculations of the linear infrared spectrum are provided, and convergence with increasing size of the time-dependent basis and as a function of the order of the n-mode representation is studied. The calculations presented here make use of recent developments in the parallel version of the MCTDH code, which are briefly discussed. We also show that the infrared spectrum can be computed, to a very good approximation, within D(2d) symmetry, instead of the G(16) symmetry used before, in which the complete rotation of one water molecule with respect to the other is allowed, thus simplifying the dynamical problem.

  7. Towards information-optimal simulation of partial differential equations.

    PubMed

    Leike, Reimar H; Enßlin, Torsten A

    2018-03-01

    Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.

  8. J dependence in the LSDA+U treatment of noncollinear magnets

    NASA Astrophysics Data System (ADS)

    Bousquet, Eric; Spaldin, Nicola

    2010-12-01

    We re-examine the commonly used density-functional theory plus Hubbard U (DFT+U) method for the case of noncollinear magnets. While many studies neglect to explicitly include the exchange parameter J, or consider its exact value to be unimportant, here we show that in the case of noncollinear magnetism calculations the J parameter can strongly affect the magnetic ground state. We illustrate the strong J dependence of magnetic canting and magnetocrystalline anisotropy by calculating trends in the magnetic lithium orthophosphate family LiMPO4 (M = Fe and Ni) and the difluoride family MF2 (M = Mn, Fe, Co, and Ni). Our results can be readily understood by expanding the usual DFT+U equations within the spinor scheme, in which the J parameter acts directly on the off-diagonal components which determine the spin canting.

  9. An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]

    NASA Technical Reports Server (NTRS)

    Buratynski, E. K.; Caughey, D. A.

    1984-01-01

    An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.

  10. Remodeling Pearson's Correlation for Functional Brain Network Estimation and Autism Spectrum Disorder Identification.

    PubMed

    Li, Weikai; Wang, Zhengxia; Zhang, Limei; Qiao, Lishan; Shen, Dinggang

    2017-01-01

    Functional brain networks (FBNs) have become an increasingly important way to model the statistical dependence among neural time courses of the brain, and they provide effective imaging biomarkers for the diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Different from this traditional strategy, in this paper we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model for obtaining a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods.
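
    The simplest instance of this reformulation is the unweighted case: fitting the Pearson correlation matrix under an L1 penalty has a closed-form solution given by element-wise soft-thresholding, as in the Python sketch below. This shows only the basic idea, not the authors' full weighted or scale-free models, and the regularisation value is arbitrary.

      # Minimal instance of "PC as an optimisation problem": the L1-regularised fit
      #   argmin_W ||W - C||_F^2 + lam * ||W||_1   (C = Pearson correlation matrix)
      # has the closed-form solution soft-threshold(C, lam/2), so weak correlations shrink
      # smoothly to zero instead of being removed by a hard cut-off.
      import numpy as np

      def sparse_fbn(time_series, lam=0.4):
          """time_series: (n_timepoints, n_regions) array. Returns a sparsified correlation network."""
          c = np.corrcoef(time_series, rowvar=False)                # Pearson correlation matrix
          w = np.sign(c) * np.maximum(np.abs(c) - lam / 2.0, 0.0)   # element-wise soft-thresholding
          np.fill_diagonal(w, 0.0)                                  # drop self-connections
          return w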

  11. Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information.

    PubMed

    Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde

    2017-01-01

    In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to the controller precisely. The controller then selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of the received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using the linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies are asymptotically synchronized. Compared with existing results on Markov systems which are obtained by separately using mode-dependent and mode-independent methods, our result has great flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results.

  12. Sensitivity of the s-process nucleosynthesis in AGB stars to the overshoot model

    NASA Astrophysics Data System (ADS)

    Goriely, S.; Siess, L.

    2018-01-01

    Context. S-process elements are observed at the surface of low- and intermediate-mass stars. These observations can be explained empirically by the so-called partial mixing of protons scenario, leading to the incomplete operation of the CN cycle and a significant primary production of the neutron source. This scenario has been successful in qualitatively explaining the s-process enrichment in AGB stars. Even so, it remains difficult to describe both physically and numerically the mixing mechanisms taking place at the time of the third dredge-up between the convective envelope and the underlying C-rich radiative layer. Aims: We aim to present new calculations of the s-process nucleosynthesis in AGB stars testing two different numerical implementations of chemical transport. These are based on a diffusion equation which depends on the second derivative of the composition and on a numerical algorithm where the transport of species depends linearly on the chemical gradient. Methods: The s-process nucleosynthesis resulting from these different mixing schemes is calculated with our stellar evolution code STAREVOL, which has been upgraded to include an extended s-process network of 411 nuclei. Our investigation focuses on a fiducial 2 M⊙, [Fe/H] = -0.5 model star, but also includes four additional stars of different masses and metallicities. Results: We show that for the same set of parameters, the linear mixing approach produces a much larger 13C-pocket and consequently a substantially higher surface s-process enrichment compared to the diffusive prescription. Within the diffusive model, a quite extreme choice of parameters is required to account for surface s-process enrichments of 1-2 dex. These extreme conditions cannot, however, be excluded at this stage. Conclusions: Both the diffusive and linear prescriptions of the overshoot mixing are suited to describe the s-process nucleosynthesis in AGB stars provided the profile of the diffusion coefficient below the convective envelope is carefully chosen. Both schemes give rise to relatively similar distributions of s-process elements, but depending on the parameters adopted, some differences may be obtained. These differences are in the element distribution, and most of all in the level of surface enrichment.

  13. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
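
    The construction described above can be sketched in Python for the simplest case, first-order upwind advection with periodic boundaries and positive velocity: the integer part of the Courant number is handled by an exact index shift and the base scheme is applied with the fractional part only.

      # Large-Courant-number extension of first-order upwind advection (periodic, u > 0):
      # for c = N + dc with N integer and 0 <= dc < 1, shift by N whole cells exactly,
      # then apply the base scheme with the fractional Courant number dc.
      import numpy as np

      def upwind_large_courant(phi, c):
          n_whole = int(np.floor(c))
          dc = c - n_whole
          phi = np.roll(phi, n_whole)                  # exact advection by N cells
          return phi - dc * (phi - np.roll(phi, 1))    # first-order upwind with dc < 1

      # Usage: phi_new = upwind_large_courant(phi, c=3.4)   # stable even though c > 1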

  14. Automatic spin-chain learning to explore the quantum speed limit

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Ming; Cui, Zi-Wei; Wang, Xin; Yung, Man-Hong

    2018-05-01

    One of the ambitious goals of artificial intelligence is to build a machine that outperforms human intelligence, even if limited knowledge and data are provided. Reinforcement learning (RL) provides one such possibility to reach this goal. In this work, we consider a specific task from quantum physics, i.e., quantum state transfer in a one-dimensional spin chain. The mission for the machine is to find transfer schemes with the fastest speeds while maintaining high transfer fidelities. The first scenario we consider is when the Hamiltonian is time independent. We update the coupling strength by minimizing a loss function dependent on both the fidelity and the speed. Compared with a scheme proven to be at the quantum speed limit for the perfect state transfer, the scheme provided by RL is faster while maintaining the infidelity below 5 × 10^-4. In the second scenario where a time-dependent external field is introduced, we convert the state transfer process into a Markov decision process that can be understood by the machine. We solve it with the deep Q-learning algorithm. After training, the machine successfully finds transfer schemes with high fidelities and speeds, which are faster than previously known ones. These results show that reinforcement learning can be a powerful tool for quantum control problems.

  15. QM/MM hybrid calculation of biological macromolecules using a new interface program connecting QM and MM engines

    NASA Astrophysics Data System (ADS)

    Hagiwara, Yohsuke; Ohta, Takehiro; Tateno, Masaru

    2009-02-01

    An interface program connecting a quantum mechanics (QM) calculation engine, GAMESS, and a molecular mechanics (MM) calculation engine, AMBER, has been developed for QM/MM hybrid calculations. A protein-DNA complex is used as a test system to investigate the following two types of QM/MM schemes. In a 'subtractive' scheme, electrostatic interactions between QM/MM regions are truncated in QM calculations; in an 'additive' scheme, long-range electrostatic interactions within a cut-off distance from QM regions are introduced into one-electron integration terms of a QM Hamiltonian. In these calculations, 338 atoms are assigned as QM atoms using Hartree-Fock (HF)/density functional theory (DFT) hybrid all-electron calculations. By comparing the results of the additive and subtractive schemes, it is found that electronic structures are perturbed significantly by the introduction of MM partial charges surrounding QM regions, suggesting that biological processes occurring in functional sites are modulated by the surrounding structures. This also indicates that the effects of long-range electrostatic interactions involved in the QM Hamiltonian are crucial for accurate descriptions of electronic structures of biological macromolecules.

  16. FDTD simulation of EM wave propagation in 3-D media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T.; Tripp, A.C.

    1996-01-01

    A finite-difference, time-domain solution to Maxwell's equations has been developed for simulating electromagnetic wave propagation in 3-D media. The algorithm allows arbitrary electrical conductivity and permittivity variations within a model. The staggered grid technique of Yee is used to sample the fields. A new optimized second-order difference scheme is designed to approximate the spatial derivatives. Like the conventional fourth-order difference scheme, the optimized second-order scheme needs four discrete values to calculate a single derivative. However, the optimized scheme is accurate over a wider wavenumber range. Compared to the fourth-order scheme, the optimized scheme imposes stricter limitations on the time step sizes but allows coarser grids. The net effect is that the optimized scheme is more efficient in terms of computation time and memory requirement than the fourth-order scheme. The temporal derivatives are approximated by second-order central differences throughout. The Liao transmitting boundary conditions are used to truncate an open problem. A reflection coefficient analysis shows that this transmitting boundary condition works very well. However, it is subject to instability. A method that can be easily implemented is proposed to stabilize the boundary condition. The finite-difference solution is compared to closed-form solutions for conducting and nonconducting whole spaces and to an integral-equation solution for a 3-D body in a homogeneous half-space. In all cases, the finite-difference solutions are in good agreement with the other solutions. Finally, the use of the algorithm is demonstrated with a 3-D model. Numerical results show that both the magnetic field response and electric field response can be useful for shallow-depth and small-scale investigations.
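    The structure of such a staggered-grid update with a four-point spatial stencil is sketched below for one dimension (our illustration: the coefficients shown are the standard fourth-order values 9/8 and -1/24 used as placeholders, not the paper's optimized coefficients, and periodic boundaries stand in for the Liao transmitting boundaries).

```python
import numpy as np

c0 = 299792458.0
mu0 = 4e-7 * np.pi
eps0 = 8.8541878128e-12
a1, a2 = 9.0 / 8.0, -1.0 / 24.0     # placeholder four-point stencil coefficients
nx, dx = 400, 1.0
dt = 0.5 * dx / c0                  # conservative time step for this stencil
Ez = np.zeros(nx)
Hy = np.zeros(nx)

def curl_e(Ez):
    # four-point staggered difference of Ez, evaluated at the H half-points
    return (a1 * (np.roll(Ez, -1) - Ez) + a2 * (np.roll(Ez, -2) - np.roll(Ez, 1))) / dx

def curl_h(Hy):
    # four-point staggered difference of Hy, evaluated at the E integer points
    return (a1 * (Hy - np.roll(Hy, 1)) + a2 * (np.roll(Hy, -1) - np.roll(Hy, 2))) / dx

for n in range(600):
    Hy += dt / mu0 * curl_e(Ez)                      # update H from curl E
    Ez += dt / eps0 * curl_h(Hy)                     # update E from curl H
    Ez[nx // 2] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source
```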

  17. Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin

    1996-01-01

    An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.

  18. Towards next-to-next-to-leading-log accuracy for the width difference in the B_s-B̄_s system: fermionic contributions to order (m_c/m_b)^0 and (m_c/m_b)^1

    NASA Astrophysics Data System (ADS)

    Asatrian, H. M.; Hovhannisyan, A.; Nierste, U.; Yeghiazaryan, A.

    2017-10-01

    We calculate a class of three-loop Feynman diagrams which contribute to the next-to-next-to-leading logarithmic approximation for the width difference ΔΓ_s in the B_s-B̄_s system. The considered diagrams contain a closed fermion loop in a gluon propagator and constitute the order α_s^2 N_f, where N_f is the number of light quarks. Our results entail a considerable correction in that order, if ΔΓ_s is expressed in terms of the pole mass of the bottom quark. If the MS-bar scheme is used instead, the correction is much smaller. As a result, we find a decrease of the scheme dependence. Our result also indicates that the usually quoted value of the NLO renormalization scale dependence underestimates the perturbative error.

  19. Flexible I_Q-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook

    This paper proposes a flexible reactive current-to-voltage (I_Q-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible I_Q-V schemes implemented in the rotor-side and grid-side converters. The I_Q-V characteristic, which consists of the gain and width of a linear band and I_Q capability, varies with time depending on the I_Q capability of the converters and a voltage dip at the point of interconnection (POI). To increase the I_Q capability during a fault, the active current is reduced in proportion to a voltage dip. If the I_Q capability and/or the POI voltage dip are large, the I_Q-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid I_Q reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.
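    A minimal sketch of the idea, assuming a simple per-unit model and arbitrary illustrative gains (the function below and its parameters are our own, not the paper's control law): the active current is scaled down with the voltage dip, the freed converter headroom defines the I_Q capability, and the reactive-current gain grows with both that capability and the dip.

```python
import numpy as np

def iq_command(v_poi, v_ref, i_rated, i_p_prefault, k_base=2.0):
    """Illustrative flexible I_Q-V sketch (per-unit quantities, made-up gains):
    returns the reactive current command and the reduced active current."""
    dip = max(0.0, v_ref - v_poi)                       # voltage dip at the POI
    i_p = i_p_prefault * (1.0 - dip)                    # active current reduced with the dip
    iq_cap = np.sqrt(max(i_rated**2 - i_p**2, 0.0))     # converter current headroom (I_Q capability)
    gain = k_base * (1.0 + dip) * iq_cap                # deeper dip / larger headroom -> higher gain
    iq = float(np.clip(gain * (v_ref - v_poi), -iq_cap, iq_cap))
    return iq, i_p

# usage: a 25% voltage dip with 1.1 p.u. converter rating
iq, ip = iq_command(v_poi=0.75, v_ref=1.0, i_rated=1.1, i_p_prefault=1.0)
```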

  20. Flexible I_Q-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE PAGES

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook; ...

    2017-04-28

    This paper proposes a flexible reactive current-to-voltage (I_Q-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible I_Q-V schemes implemented in the rotor-side and grid-side converters. The I_Q-V characteristic, which consists of the gain and width of a linear band and I_Q capability, varies with time depending on the I_Q capability of the converters and a voltage dip at the point of interconnection (POI). To increase the I_Q capability during a fault, the active current is reduced in proportion to a voltage dip. If the I_Q capability and/or the POI voltage dip are large, the I_Q-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid I_Q reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.

  1. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimme, Stefan, E-mail: grimme@thch.uni-bonn.de; Bannwarth, Christoph

    2016-08-07

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well-established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first benchmarked for vertical excitation energies of open- and closed-shell systems in comparison to other semi-empirical methods and applied to exemplary problems in electronic spectroscopy. As side products of the development, a robust and efficient valence electron TB method for the accurate determination of atomic charges as well as a more accurate calculation scheme of dipole rotatory strengths within the Tamm-Dancoff approximation is proposed.

  2. Three Dimensional Grid Generation for Complex Configurations - Recent Progress

    DTIC Science & Technology

    1988-03-01

    Navier/Stokes finite difference calculations currently of interest. It has been amply demonstrated that the viability of a numerical solution depends...such as advanced fighters or logistic transports, where a multiblock mesh, for example, is necessary. There exist numerous reports and books on the...

  3. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower Upper-Symmetric Gauss Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier Stokes (TURNS) code. The new hybrid LU-SGS scheme couples a point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It experiences a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy which allows the use of larger timesteps and results in CPU savings of 20-35%.

  4. MATRIX-VBS (v1.0): Implementing an Evolving Organic Aerosol Volatility in an Aerosol Microphysics Model

    NASA Technical Reports Server (NTRS)

    Gao, Chloe Y.; Tsigaridis, Kostas; Bauer, Susanne E.

    2017-01-01

    The gas-particle partitioning and chemical aging of semi-volatile organic aerosol are presented in a newly developed box model scheme, where their effect on the growth, composition, and mixing state of particles is examined. The volatility-basis set (VBS) framework is implemented into the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations in multiple mixing-state classes. The new scheme, MATRIX-VBS, has the potential to significantly advance the representation of organic aerosols in Earth system models by improving upon the conventional representation as non-volatile particulate organic matter, often also with an assumed fixed size distribution. We present results from idealized cases representing Beijing, Mexico City, a Finnish forest, and a southeastern US forest, and investigate the evolution of mass concentrations and volatility distributions for organic species across the gas and particle phases, as well as assessing their mixing state among aerosol populations. Emitted semi-volatile primary organic aerosols evaporate almost completely in the intermediate-volatility range, while they remain in the particle phase in the low-volatility range. Their volatility distribution at any point in time depends on the applied emission factors, oxidation by OH radicals, and temperature. We also compare against parallel simulations with the original scheme, which represented only the particulate and non-volatile component of the organic aerosol, examining how differently the condensed-phase organic matter is distributed across the mixing states in the model. The results demonstrate the importance of representing organic aerosol as a semi-volatile aerosol, and explicitly calculating the partitioning of organic species between the gas and particulate phases.
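    The gas-particle split that the VBS framework resolves can be illustrated with the standard equilibrium-partitioning fixed-point iteration below (a generic textbook form, not the MATRIX-VBS implementation, which couples this to microphysics, aging, and mixing state).

```python
import numpy as np

def vbs_partition(c_total, c_star, n_iter=50):
    """Textbook volatility-basis-set equilibrium partitioning.
    c_total : total (gas + particle) concentration per volatility bin [ug m-3]
    c_star  : effective saturation concentration of each bin [ug m-3]
    Returns the particle-phase fraction per bin and the organic aerosol mass."""
    c_oa = 0.5 * np.sum(c_total)              # initial guess for condensed organic mass
    for _ in range(n_iter):                   # fixed-point iteration
        xi = 1.0 / (1.0 + c_star / c_oa)      # particle-phase fraction per bin
        c_oa = np.sum(c_total * xi)           # update condensed organic mass
    return xi, c_oa

# usage with four illustrative volatility bins
xi, c_oa = vbs_partition(np.array([1.0, 2.0, 4.0, 8.0]),
                         np.array([0.1, 1.0, 10.0, 100.0]))
```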

  5. A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals

    NASA Technical Reports Server (NTRS)

    VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.

    2014-01-01

    A parameterization is presented that provides extinction cross section σe, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion levels. In the standard geometric optics implementation used here, σe is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated and factors are determined to scale the parameterized g to provide values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 micron, revealing absolute differences with reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences with reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
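    The overall structure of such a parameterization can be sketched as follows; the functional forms and every coefficient below are placeholders for illustration only (the actual scheme uses 88 fitted coefficients and its own definition of the absorption size parameter, neither of which is reproduced here).

```python
import numpy as np

def ice_optics_sketch(volume, proj_area, mim, wavelength, distortion=0.5):
    """Toy structure of a shortwave ice-optics parameterization.
    volume, proj_area : crystal volume and projected area
    mim               : imaginary part of the refractive index
    wavelength        : wavelength in the same length units
    All fitted constants below are invented placeholders."""
    sigma_e = 2.0 * proj_area                                      # geometric-optics extinction
    x_abs = 4.0 * np.pi * mim * volume / (wavelength * proj_area)  # assumed absorption size parameter
    omega = 1.0 - 0.47 * (1.0 - np.exp(-2.0 * x_abs))              # illustrative exponential fit for albedo
    g = 0.75 + 0.20 * np.exp(-3.0 * distortion)                    # illustrative distortion dependence
    return sigma_e, omega, g
```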

  6. Four-electron model for singlet and triplet excitation energy transfers with inclusion of coherence memory, inelastic tunneling and nuclear quantum effects

    NASA Astrophysics Data System (ADS)

    Suzuki, Yosuke; Ebina, Kuniyoshi; Tanaka, Shigenori

    2016-08-01

    A computational scheme to describe the coherent dynamics of excitation energy transfer (EET) in molecular systems is proposed on the basis of generalized master equations with memory kernels. This formalism takes into account those physical effects in electron-bath coupling system such as the spin symmetry of excitons, the inelastic electron tunneling and the quantum features of nuclear motions, thus providing a theoretical framework to perform an ab initio description of EET through molecular simulations for evaluating the spectral density and the temporal correlation function of electronic coupling. Some test calculations have then been carried out to investigate the dependence of exciton population dynamics on coherence memory, inelastic tunneling correlation time, magnitude of electronic coupling, quantum correction to temporal correlation function, reorganization energy and energy gap.

  7. Determining a Method of Enabling and Disabling the Integral Torque in the SDO Science and Inertial Mode Controllers

    NASA Technical Reports Server (NTRS)

    Vess, Melissa F.; Starin, Scott R.

    2007-01-01

    During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared complexity of the control logic, risk of not reenabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculated a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and looked to see if that torque would cause actuator saturation. If so, only the PD torque is used. If not, the integral torque is added. Finally, the third scheme compares the attitude and rate errors to limits and disables the integral torque if either of the errors is greater than the limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable the integral torque and whether or not to reset the integrator once the integral torque was reenabled. Three ways to disable the integral torque were investigated: zero the input into the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis looked at complexity of the control logic, slew time plus settling time between each calibration maneuver step, and ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input into the integrator without resetting it. Throughout the analysis, a high fidelity simulation was used to test the various implementation methods.
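    A minimal sketch of the selected logic, with arbitrary illustrative gains and limits rather than SDO flight parameters: the integral torque is frozen by zeroing the integrator input whenever the attitude or rate error exceeds its limit, and the accumulated state is kept rather than reset.

```python
def pid_step(att_err, rate_err, int_state, dt,
             kp=2.0, kd=5.0, ki=0.1, att_lim=0.05, rate_lim=0.01):
    """One PID update with conditional integration (illustrative values only):
    scheme 3 (error limits) combined with 'zero the integrator input'."""
    if abs(att_err) > att_lim or abs(rate_err) > rate_lim:
        int_input = 0.0          # freeze: integral torque contribution held constant
    else:
        int_input = att_err      # normal integration resumes, no reset of the state
    int_state += ki * int_input * dt
    torque = kp * att_err + kd * rate_err + int_state
    return torque, int_state

# usage: large attitude error freezes the integrator, small error resumes it
torque, i_state = pid_step(att_err=0.2, rate_err=0.0, int_state=0.0, dt=0.1)
torque, i_state = pid_step(att_err=0.01, rate_err=0.0, int_state=i_state, dt=0.1)
```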

  8. Analytical scheme calculations of angular momentum coupling and recoupling coefficients

    NASA Astrophysics Data System (ADS)

    Deveikis, A.; Kuznecovas, A.

    2007-03-01

    We investigate the capabilities of the Scheme programming language for analytically calculating the Clebsch-Gordan coefficients, Wigner 6j and 9j symbols, and general recoupling coefficients that are used in the quantum theory of angular momentum. The considered coefficients are calculated by a direct evaluation of the sum formulas. The calculation results for large values of the quantum angular momenta were compared with analogous calculations in the FORTRAN and Java programming languages.
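    The same coefficients can be evaluated exactly in other languages with symbolic support; the snippet below uses SymPy in Python purely as an illustration of the kind of exact evaluation the paper performs in Scheme.

```python
from sympy import Rational
from sympy.physics.wigner import clebsch_gordan, wigner_6j, wigner_9j

# Exact symbolic evaluation of angular momentum coupling coefficients.
# Clebsch-Gordan coefficient <j1 m1 j2 m2 | j3 m3> with j1=j2=1/2, m1=m2=1/2, j3=1 -> 1
print(clebsch_gordan(Rational(1, 2), Rational(1, 2), 1,
                     Rational(1, 2), Rational(1, 2), 1))
# Wigner 6j and 9j symbols, returned as exact rationals/surds
print(wigner_6j(1, 1, 1, 1, 1, 1))
print(wigner_9j(1, 1, 0, 1, 1, 0, 0, 0, 0))
```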

  9. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    NASA Astrophysics Data System (ADS)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an attractive property: they are fast and stable. Tests show that the method works for parameter values up to the CFL number, so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and the CPU time of these schemes and the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
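    The flux construction can be illustrated for a scalar conservation law as below (our scalar analogue with Burgers' flux, not the two-phase-flow model itself); the blending weight beta plays the role of the paper's free parameter, with beta = 0 recovering the Lax-Friedrichs-type flux.

```python
import numpy as np

def blended_flux(u_left, u_right, f, dx, dt, beta):
    """Convex blend of a Lax-Friedrichs-type flux and a Richtmyer
    (two-step Lax-Wendroff) flux for u_t + f(u)_x = 0 (illustrative sketch)."""
    f_lf = 0.5 * (f(u_left) + f(u_right)) - 0.5 * (dx / dt) * (u_right - u_left)
    u_star = 0.5 * (u_left + u_right) - 0.5 * (dt / dx) * (f(u_right) - f(u_left))
    return (1.0 - beta) * f_lf + beta * f(u_star)

def step(u, dx, dt, beta=0.5):
    """One conservative update for Burgers' equation on a periodic grid."""
    f = lambda v: 0.5 * v * v
    fl = blended_flux(np.roll(u, 1), u, f, dx, dt, beta)    # flux at i-1/2
    fr = blended_flux(u, np.roll(u, -1), f, dx, dt, beta)   # flux at i+1/2
    return u - dt / dx * (fr - fl)
```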

  10. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions, otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolutions of the contact discontinuities at very little additional costs. (3) He introduces a space-time mesh refinement method for time dependent problems.

  11. Ab initio molecular dynamics in a finite homogeneous electric field.

    PubMed

    Umari, P; Pasquarello, Alfredo

    2002-10-07

    We treat homogeneous electric fields within density functional calculations with periodic boundary conditions. A nonlocal energy functional depending on the applied field is used within an ab initio molecular dynamics scheme. The reliability of the method is demonstrated in the case of bulk MgO for the Born effective charges, and the high- and low-frequency dielectric constants. We evaluate the static dielectric constant by performing a damped molecular dynamics in an electric field and avoiding the calculation of the dynamical matrix. Application of this method to vitreous silica shows good agreement with experiment and illustrates its potential for systems of large size.

  12. Method of sections in analytical calculations of pneumatic tires

    NASA Astrophysics Data System (ADS)

    Tarasov, V. N.; Boyarkina, I. V.

    2018-01-01

    Analytical calculations in pneumatic tire theory are preferable to purely experimental methods. The method of sections applied to a pneumatic tire shell allows one to obtain equations for the intensities of internal forces in carcass elements and bead rings. Analytical dependencies of the intensity of distributed forces have been obtained at tire equator points, on side walls (poles), and at pneumatic tire bead rings. Along with planes, cylindrical surfaces are used here for the first time as secant surfaces together with secant planes. The tire capacity equation has been obtained using the method of sections, by means of which a contact body is cut off from the tire carcass along the contact perimeter by the surface which is normal to the bearing surface. It has been established that the Laplace equation for the solution of tasks of this class of pneumatic tires contains two unknown values, which requires the generation of additional equations. The developed computational schemes of pneumatic tire sections and new equations allow the pneumatic tire structure improvement process to be accelerated during engineering.

  13. Simplified Two-Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (> 1×10⁻²⁰ moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R² are obtained.
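    The two-step logic can be sketched as follows; the correlation coefficients and the instantaneous form below are arbitrary placeholders (the report's regressed equations are not reproduced), and only the switch between the time-averaged and instantaneous chemical kinetic times is meant to be illustrative.

```python
import numpy as np

def combustion_progress(phi, T3, P, t_res, n_sub=200, a=1.0e-6, b=5.0e3, c=-0.5):
    """Toy two-time-step conversion calculation: a correlated chemical kinetic
    time (placeholder Arrhenius-like fit) drives a first-order fuel conversion;
    the averaged tau is used while water is scarce, then an instantaneous tau
    (recomputed each sub-step) takes over."""
    tau_avg = a * np.exp(b / T3) * P**c / max(phi, 1e-3)     # placeholder correlation
    fuel, water, dt = 1.0, 0.0, t_res / n_sub
    for _ in range(n_sub):
        if water < 1e-20:
            tau = tau_avg                                     # time-averaged step
        else:
            tau = tau_avg * (1.0 + water) / max(fuel, 1e-12)  # placeholder instantaneous form
        burned = fuel * (1.0 - np.exp(-dt / tau))             # first-order conversion
        fuel -= burned
        water += burned
    return 1.0 - fuel      # overall conversion over the residence time

print(combustion_progress(phi=0.5, T3=900.0, P=10.0, t_res=2e-3))
```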

  14. Summary of Simplified Two Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Marek, C. John; Molnar, Melissa

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two time step kinetic scheme. The first time averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (greater than 1×10⁻²⁰ moles per cc) in the mixture which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/Air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R² are obtained.

  15. An open-source framework for analyzing N-electron dynamics. II. Hybrid density functional theory/configuration interaction methodology.

    PubMed

    Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe

    2017-10-30

    In this contribution, we extend our framework for analyzing and visualizing correlated many-electron dynamics to non-variational, highly scalable electronic structure methods. Specifically, an explicitly time-dependent electronic wave packet is written as a linear combination of N-electron wave functions at the configuration interaction singles (CIS) level, which are obtained from a reference time-dependent density functional theory (TDDFT) calculation. The procedure is implemented in the open-source Python program detCI@ORBKIT, which extends the capabilities of our recently published post-processing toolbox (Hermann et al., J. Comput. Chem. 2016, 37, 1511). From the output of standard quantum chemistry packages using atom-centered Gaussian-type basis functions, the framework exploits the multideterminental structure of the hybrid TDDFT/CIS wave packet to compute fundamental one-electron quantities such as difference electronic densities, transient electronic flux densities, and transition dipole moments. The hybrid scheme is benchmarked against wave function data for the laser-driven state selective excitation in LiH. It is shown that all features of the electron dynamics are in good quantitative agreement with the higher-level method provided a judicious choice of functional is made. Broadband excitation of a medium-sized organic chromophore further demonstrates the scalability of the method. In addition, the time-dependent flux densities unravel the mechanistic details of the simulated charge migration process at a glance. © 2017 Wiley Periodicals, Inc.

  16. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
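    A minimal sketch of the pre-coder and its inverse, assuming two correlated integer arrays (our illustration of the double-difference idea, not the patented implementation): the cross-delta between the data sets is followed by an adjacent-delta, and the decoder inverts the two steps.

```python
import numpy as np

def double_difference(band_a, band_b):
    """Pre-coder sketch: cross-delta between the two correlated data sets,
    then adjacent-delta along that result; the output would feed an entropy coder."""
    cross = band_b - band_a                  # cross-delta between the two sources
    return np.diff(cross, prepend=0)         # adjacent-delta of the cross-delta

def invert_double_difference(dd, band_a):
    """Post-decoder sketch: undo the adjacent-delta (cumulative sum), then the
    cross-delta, recovering the second data set from the first plus the stream."""
    return band_a + np.cumsum(dd)

# usage: two correlated bands; the round trip is lossless
a = np.array([10, 12, 15, 15, 14], dtype=np.int64)
b = np.array([11, 14, 18, 17, 15], dtype=np.int64)
dd = double_difference(a, b)
assert np.array_equal(invert_double_difference(dd, a), b)
```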

  17. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  18. Acceleration of the chemistry solver for modeling DI engine combustion using dynamic adaptive chemistry (DAC) schemes

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.

    2010-03-01

    Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) previously using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and that the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species with practical computer time.

  19. Theoretical characterization of photoinduced electron transfer in rigidly linked donor-acceptor molecules: the fragment charge difference and the generalized Mulliken-Hush schemes

    NASA Astrophysics Data System (ADS)

    Lee, Sheng-Jui; Chen, Hung-Cheng; You, Zhi-Qiang; Liu, Kuan-Lin; Chow, Tahsin J.; Chen, I.-Chia; Hsu, Chao-Ping

    2010-10-01

    We calculate the electron transfer (ET) rates for a series of heptacyclo[6.6.0.02,6.03,13.014,11.05,9.010,14]-tetradecane (HCTD) linked donor-acceptor molecules. The electronic coupling factor was calculated by the fragment charge difference (FCD) [19] and the generalized Mulliken-Hush (GMH) schemes [20]. We found that the FCD is less prone to problems commonly seen in the GMH scheme, especially when the coupling values are small. For a 3-state case where the charge transfer (CT) state is coupled with two different locally excited (LE) states, we tested with the 3-state approach for the GMH scheme [30], and found that it works well with the FCD scheme. A simplified direct diagonalization based on Rust's 3-state scheme was also proposed and tested. This simplified scheme does not require a manual assignment of the states, and it yields coupling values that are largely similar to those from the full Rust's approach. The overall electron transfer (ET) coupling rates were also calculated.
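    For reference, the standard two-state working equations of the two schemes are shown below in common textbook notation (adiabatic energy gap ΔE12, dipole matrix elements μij for GMH, fragment charge differences Δqij for FCD); this is the generic form, not a reproduction of the paper's multi-state treatment.

```latex
% Two-state Generalized Mulliken-Hush and Fragment Charge Difference couplings:
H_{DA}^{\mathrm{GMH}} = \frac{\lvert \mu_{12}\rvert \, \Delta E_{12}}
                             {\sqrt{(\mu_{11}-\mu_{22})^{2} + 4\,\mu_{12}^{2}}},
\qquad
H_{DA}^{\mathrm{FCD}} = \frac{\lvert \Delta q_{12}\rvert \, \Delta E_{12}}
                             {\sqrt{(\Delta q_{11}-\Delta q_{22})^{2} + 4\,\Delta q_{12}^{2}}}
```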

  20. Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals

    NASA Astrophysics Data System (ADS)

    Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei

    2018-01-01

    Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to the computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using the designed recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and the Fast Fourier Transform, this scheme, which combines the DFT with the Rife algorithm and Fourier coefficient interpolation, requires no multiplications and only half the addition operations. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements with intermediate frequency and narrowband signals have a measurement mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system only requires approximately 0.3 s, achieving fast iteration, high precision, and low calculation time.
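    The coarse-plus-fine estimation idea can be sketched as below, using a plain DFT peak search followed by Rife's two-point interpolation (a generic textbook form with a rectangular window; the authors' recursive twiddle-factor generation and sigma-delta front end are not reproduced).

```python
import numpy as np

def rife_frequency(x, fs):
    """Coarse DFT peak search plus Rife two-point interpolation between the
    peak bin and its larger neighbour (generic form, rectangular window)."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spec[1:-1])) + 1                 # coarse peak bin (skip DC, Nyquist)
    side = k + 1 if spec[k + 1] > spec[k - 1] else k - 1
    delta = spec[side] / (spec[k] + spec[side])        # fractional bin offset magnitude
    delta = delta if side > k else -delta
    return (k + delta) * fs / n

# usage: a 1.2345 MHz tone sampled at 10 MHz
fs, f0 = 10e6, 1.2345e6
t = np.arange(4096) / fs
print(rife_frequency(np.sin(2 * np.pi * f0 * t), fs))  # close to 1.2345 MHz
```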

  1. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  2. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With a little extra computation cost, the Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing some quantities such as reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.

  3. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    NASA Astrophysics Data System (ADS)

    Song, Jong-Won; Hirao, Kimihiko

    2015-07-01

    We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the usage of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost of periodic systems while improving the reproducibility of the bandgaps of semiconductors. We present a distance-based screening scheme here that is tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found a new multipole screening scheme helps to save the time cost for the HF exchange integration by efficiently decreasing the number of integrals of, specifically, the near field region without incurring substantial changes in total energy. In our assessment on the periodic systems of seven semiconductors, the Gau-PBE hybrid functional with a new screening scheme has 1.56 times the time cost of a pure functional while the previous Gau-PBE was 1.84 times and HSE06 was 3.34 times.

  4. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp

    2015-07-14

    We previously developed an efficient screened hybrid functional called Gaussian-Perdew–Burke–Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the usage of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost of periodic systems while improving the reproducibility of the bandgaps of semiconductors. We present a distance-based screening scheme here that is tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found a new multipole screening scheme helps to save the time cost for the HF exchange integration by efficiently decreasing the number of integrals of, specifically, the near field region without incurring substantial changes in total energy. In our assessment on the periodic systems of seven semiconductors, the Gau-PBE hybrid functional with a new screening scheme has 1.56 times the time cost of a pure functional while the previous Gau-PBE was 1.84 times and HSE06 was 3.34 times.

  5. A convergent 2D finite-difference scheme for the Dirac–Poisson system and the simulation of graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brinkman, D., E-mail: Daniel.Brinkman@asu.edu; School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287; Heitzinger, C., E-mail: Clemens.Heitzinger@asu.edu

    2014-01-15

    We present a convergent finite-difference scheme of second order in both space and time for the 2D electromagnetic Dirac equation. We apply this method in the self-consistent Dirac–Poisson system to the simulation of graphene. The model is justified for low energies, where the particles have wave vectors sufficiently close to the Dirac points. In particular, we demonstrate that our method can be used to calculate solutions of the Dirac–Poisson system where potentials act as beam splitters or Veselago lenses.

  6. A numerical method for simulating the dynamics of 3D axisymmetric vesicles suspended in viscous flows

    NASA Astrophysics Data System (ADS)

    Veerapaneni, Shravan K.; Gueyffier, Denis; Biros, George; Zorin, Denis

    2009-10-01

    We extend [Shravan K. Veerapaneni, Denis Gueyffier, Denis Zorin, George Biros, A boundary integral method for simulating the dynamics of inextensible vesicles suspended in a viscous fluid in 2D, Journal of Computational Physics 228(7) (2009) 2334-2353] to the case of three-dimensional axisymmetric vesicles of spherical or toroidal topology immersed in viscous flows. Although the main components of the algorithm are similar in spirit to the 2D case—spectral approximation in space, semi-implicit time-stepping scheme—the main differences are that the bending and viscous force require new analysis, the linearization for the semi-implicit schemes must be rederived, a fully implicit scheme must be used for the toroidal topology to eliminate a CFL-type restriction and a novel numerical scheme for the evaluation of the 3D Stokes single layer potential on an axisymmetric surface is necessary to speed up the calculations. By introducing these novel components, we obtain a time-scheme that experimentally is unconditionally stable, has low cost per time step, and is third-order accurate in time. We present numerical results to analyze the cost and convergence rates of the scheme. To verify the solver, we compare it to a constrained variational approach to compute equilibrium shapes that does not involve interactions with a viscous fluid. To illustrate the applicability of method, we consider a few vesicle-flow interaction problems: the sedimentation of a vesicle, interactions of one and three vesicles with a background Poiseuille flow.

  7. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    NASA Astrophysics Data System (ADS)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strong environmental dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Moses; Kim, Keonhui; Muljadi, Eduard

    This paper proposes a torque limit-based inertial control scheme of a doubly-fed induction generator (DFIG) that supports the frequency control of a power system. If a frequency deviation occurs, the proposed scheme aims to release a large amount of kinetic energy (KE) stored in the rotating masses of a DFIG to raise the frequency nadir (FN). Upon detecting the event, the scheme instantly increases its output to the torque limit and then reduces the output with the rotor speed so that it converges to the stable operating range. To restore the rotor speed while causing a small second frequency dip (SFD), after the rotor speed converges the power reference is reduced by a small amount and maintained until it meets the reference for maximum power point tracking control. The test results demonstrate that the scheme can improve the FN and maximum rate of change of frequency while causing a small SFD in any wind conditions and in a power system that has a high penetration of wind power, and thus the scheme helps maintain the required level of system reliability. The scheme releases the KE from 2.9 times to 3.7 times the Hydro-Quebec requirement depending on the power reference.

  9. Development of a nonlocal convective mixing scheme with varying upward mixing rates for use in air quality and chemical transport models.

    PubMed

    Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana

    2008-06-01

    An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for simulation of vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate from the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in each layer, while the downward mixing rates are derived from mass conservation. This scheme provides a less rapid mass transport out of the surface layer into other layers than other asymmetrical convective mixing schemes. In this paper, we studied the performance of a nonlocal convective mixing scheme with varying upward mixing in the atmospheric boundary layer and its impact on the concentration of pollutants calculated with chemical and air-quality models. This scheme was additionally compared against a local eddy-diffusivity scheme (KSC). Simulated concentrations of NO2 and nitrate wet deposition from the CON scheme are closer to the observations than those obtained using the KSC scheme: the CON concentrations are in general higher and closer to the observations (of the order of 15-20%), and the same holds for the calculated nitrate wet deposition. To examine the performance of the scheme, simulated and measured concentrations of a pollutant (NO2) and nitrate wet deposition were compared for the year 2002. The comparison was made for the whole domain used in simulations performed with the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), in which both schemes were incorporated.

  10. Evolution of Snow-Size Spectra in Cyclonic Storms. Part I: Snow Growth by Vapor Deposition and Aggregation.

    NASA Astrophysics Data System (ADS)

    Mitchell, David L.

    1988-11-01

    Based on the stochastic collection equation, height- and time-dependent snow growth models were developed for unrimed stratiform snowfall. Moment conservation equations were parameterized and solved by constraining the size distribution to be of the form N(D)dD = N0 exp(-λD)dD, yielding expressions for the slope parameter, λ, and the y-intercept parameter, N0, as functions of height or time. The processes of vapor deposition and aggregation were treated analytically without neglecting changes in ice crystal habits, while the ice particle breakup process was dealt with empirically. The models were compared against vertical profiles of snow-size spectra, obtained from aircraft measurements, for three case studies. The predicted spectra are in good agreement with the observed evolution of snow-size spectra in all three cases, indicating the proposed scheme for ice particle aggregation was successful. The temperature dependence of aggregation was assumed to result from differences in ice crystal habit. Using data from an earlier study, the aggregation efficiency between two levels in a cloud was calculated. Finally, other height-dependent, steady-state snowfall models in the literature were compared against spectra from one of the above case studies. The agreement between the predicted and observed spectra regarding these models was less favorable than was obtained from the models presented here.
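    For reference, the assumed exponential form and its moments, which are what close the parameterized moment-conservation equations (a standard result, quoted here in our notation):

```latex
N(D)\,dD = N_0\, e^{-\lambda D}\,dD, \qquad
M_n \equiv \int_0^{\infty} D^{\,n}\, N(D)\,dD = \frac{N_0\, n!}{\lambda^{\,n+1}}
```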

  11. Conformal Electromagnetic Particle in Cell: A Review

    DOE PAGES

    Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...

    2015-10-26

    We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.

  12. Potential energy and dipole moment surfaces of the triplet states of the O2(X3Σg-) - O2(X3Σg-,a1Δg,b1Σg+) complex

    NASA Astrophysics Data System (ADS)

    Karman, Tijs; van der Avoird, Ad; Groenenboom, Gerrit C.

    2017-08-01

    We compute four-dimensional diabatic potential energy surfaces and transition dipole moment surfaces of O2-O2, relevant for the theoretical description of collision-induced absorption in the forbidden X3Σg- → a1Δg and X3Σg- → b1Σg+ bands at 7883 cm-1 and 13 122 cm-1, respectively. We compute potentials at the multi-reference configuration interaction (MRCI) level and dipole surfaces at the MRCI and complete active space self-consistent field (CASSCF) levels of theory. Potentials and dipole surfaces are transformed to a diabatic basis using a recent multiple-property-based diabatization algorithm. We discuss the angular expansion of these surfaces, derive the symmetry constraints on the expansion coefficients, and present working equations for determining the expansion coefficients by numerical integration over the angles. We also present an interpolation scheme with exponential extrapolation to both short and large separations, which is used for representing the O2-O2 distance dependence of the angular expansion coefficients. For the triplet ground state of the complex, the potential energy surface is in reasonable agreement with previous calculations, whereas global excited state potentials are reported here for the first time. The transition dipole moment surfaces are strongly dependent on the level of theory at which they are calculated, as is also shown here by benchmark calculations at high symmetry geometries. Therefore, ab initio calculations of the collision-induced absorption spectra cannot become quantitatively predictive unless more accurate transition dipole surfaces can be computed. This is left as an open question for method development in electronic structure theory. The calculated potential energy and transition dipole moment surfaces are employed in quantum dynamical calculations of collision-induced absorption spectra reported in Paper II [T. Karman et al., J. Chem. Phys. 147, 084307 (2017)].

  13. Potential energy and dipole moment surfaces of the triplet states of the O2(X3Σg-) - O2(X3Σg-,a1Δg,b1Σg+) complex.

    PubMed

    Karman, Tijs; van der Avoird, Ad; Groenenboom, Gerrit C

    2017-08-28

    We compute four-dimensional diabatic potential energy surfaces and transition dipole moment surfaces of O2-O2, relevant for the theoretical description of collision-induced absorption in the forbidden X3Σg- → a1Δg and X3Σg- → b1Σg+ bands at 7883 cm-1 and 13 122 cm-1, respectively. We compute potentials at the multi-reference configuration interaction (MRCI) level and dipole surfaces at the MRCI and complete active space self-consistent field (CASSCF) levels of theory. Potentials and dipole surfaces are transformed to a diabatic basis using a recent multiple-property-based diabatization algorithm. We discuss the angular expansion of these surfaces, derive the symmetry constraints on the expansion coefficients, and present working equations for determining the expansion coefficients by numerical integration over the angles. We also present an interpolation scheme with exponential extrapolation to both short and large separations, which is used for representing the O2-O2 distance dependence of the angular expansion coefficients. For the triplet ground state of the complex, the potential energy surface is in reasonable agreement with previous calculations, whereas global excited state potentials are reported here for the first time. The transition dipole moment surfaces are strongly dependent on the level of theory at which they are calculated, as is also shown here by benchmark calculations at high symmetry geometries. Therefore, ab initio calculations of the collision-induced absorption spectra cannot become quantitatively predictive unless more accurate transition dipole surfaces can be computed. This is left as an open question for method development in electronic structure theory. The calculated potential energy and transition dipole moment surfaces are employed in quantum dynamical calculations of collision-induced absorption spectra reported in Paper II [T. Karman et al., J. Chem. Phys. 147, 084307 (2017)].

  14. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, C.; Max-Planck-Institute for Plasma Physics, Wendelsteinstr. 1, D-17491 Greifswald; Thakur, S. C.

    2016-04-15

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.

  15. An examination of the impact of Olson’s extinction on tetrapods from Texas

    PubMed Central

    2018-01-01

    It has been suggested that a transition between a pelycosaurian-grade synapsid dominated fauna of the Cisuralian (early Permian) and the therapsid dominated fauna of the Guadalupian (middle Permian) was accompanied by, and possibly driven by, a mass extinction dubbed Olson’s Extinction. However, this interpretation of the record has recently been criticised as being a result of inappropriate time-binning strategies: calculating species richness within international stages or substages combines extinctions occurring throughout the late Kungurian stage into a single event. To address this criticism, I examine the best record available for the time of the extinction, the tetrapod-bearing formations of Texas, at a finer stratigraphic scale than those previously employed. Species richness is calculated using four different time-binning schemes: the traditional Land Vertebrate Faunachrons (LVFs); a re-definition of the LVFs using constrained cluster analysis; individual formations treated as time bins; and a stochastic approach assigning specimens to half-million-year bins. Diversity is calculated at the genus and species level, both with and without subsampling, and extinction rates are also inferred. Under all time-binning schemes, both at the genus and species level, a substantial drop in diversity occurs during the Redtankian LVF. Extinction rates are raised above background rates throughout this time, but the biggest peak occurs in the Choza Formation (uppermost Redtankian), coinciding with the disappearance from the fossil record of several amphibian clades. This study, carried out at a finer stratigraphic scale than previous examinations, indicates that Olson’s Extinction is not an artefact of the method used to bin data by time in previous analyses.

  16. MODIS Solar Calibration Simulation Assisted Refinement

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiaoxiong, Xiong; Guenther, Bruce; Barnes, William; Moyer, David; Salomonson, Vincent V.

    2004-01-01

    A detailed optical radiometric model has been created of the MODIS instrument's solar calibration process. This model takes into account the orientation and distance of the spacecraft with respect to the sun, the correlated motions of the scan mirror and the sun, all of the optical elements, the detector locations on the visible and near-IR focal planes, the solar diffuser, and the attenuation screen with all of its hundreds of pinholes. An efficient computational scheme takes into account all of these factors and has produced results which reproduce the observed time-dependent intensity variations on the two focal planes with considerable fidelity. This agreement between predictions and observations has given insight into the causes of some small time-dependent variations and how to incorporate them into the overall calibration scheme. The radiometric model is described, and modeled and actual measurements are presented and compared.

  17. Estimating the cost of compensating victims of medical negligence.

    PubMed Central

    Fenn, P.; Hermans, D.; Dingwall, R.

    1994-01-01

    The current system in Britain for compensating victims of medical injury depends on an assessment of negligence. Despite the sporadic pressure on the government to adopt a "no fault" approach, such as exists in Sweden, the negligence system will probably remain for the immediate future. The cost of this system was estimated to be £52.3m for England in 1990-1. The problem for the future, however, is one of forecasting accuracy at provider level: too high a guess and current patient care will suffer; too low a guess and future patient care will suffer. The introduction of a mutual insurance scheme may not resolve these difficulties, as someone will have to set the rates. Moreover, the figures indicate that if a no fault scheme was introduced the cost might be four times that of the current system, depending on the type of scheme adopted. PMID:8081145

  18. First-principles X-ray absorption dose calculation for time-dependent mass and optical density.

    PubMed

    Berejnov, Viatcheslav; Rubinstein, Boris; Melo, Lis G A; Hitchcock, Adam P

    2018-05-01

    A dose integral of time-dependent X-ray absorption under conditions of variable photon energy and changing sample mass is derived from first principles starting with the Beer-Lambert (BL) absorption model. For a given photon energy the BL dose integral D(e, t) reduces to the product of an effective time integral T(t) and a dose rate R(e). Two approximations of the time-dependent optical density, i.e. exponential A(t) = c + a exp(-bt) for first-order kinetics and hyperbolic A(t) = c + a/(b + t) for second-order kinetics, were considered for BL dose evaluation. For both models three methods of evaluating the effective time integral are considered: analytical integration, approximation by a function, and calculation of the asymptotic behaviour at large times. Data for poly(methyl methacrylate) and perfluorosulfonic acid polymers measured by scanning transmission soft X-ray microscopy were used to test the BL dose calculation. It was found that a previous method to calculate time-dependent dose underestimates the dose in mass loss situations, depending on the applied exposure time. All of the methods considered here show that the BL dose is proportional to the exposure time, D(e, t) ≃ K(e)t.
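    To make the factorization D(e, t) ≈ R(e)·T(t) concrete, here is a minimal sketch that evaluates an effective time integral numerically for the two optical-density models quoted above. The weighting 1 − 10^(−A(t)), the coefficients a, b, c, and the dose rate R are all invented for illustration and are not the quantities derived in the paper.

    ```python
    import numpy as np
    from scipy.integrate import quad

    # Optical-density models from the abstract, with invented coefficients.
    def A_exp(t, a=0.4, b=0.05, c=0.2):
        """First-order kinetics: A(t) = c + a*exp(-b*t)."""
        return c + a * np.exp(-b * t)

    def A_hyp(t, a=10.0, b=20.0, c=0.2):
        """Second-order kinetics: A(t) = c + a/(b + t)."""
        return c + a / (b + t)

    def effective_time_integral(A, t):
        """Assumed weighting (illustrative): T(t) = int_0^t [1 - 10**(-A(t'))] dt'."""
        val, _ = quad(lambda tp: 1.0 - 10.0 ** (-A(tp)), 0.0, t)
        return val

    R = 5.0e3  # hypothetical dose rate R(e) [Gy/s] at a fixed photon energy e
    for t in (10.0, 100.0, 1000.0):
        for name, A in (("exponential", A_exp), ("hyperbolic", A_hyp)):
            T = effective_time_integral(A, t)
            print(f"t = {t:7.1f} s  {name:11s}  T(t) = {T:8.2f} s  dose ~ {R * T:.3e} Gy")
    ```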

  19. Numerical Solution of Time-Dependent Problems with a Fractional-Power Elliptic Operator

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2018-03-01

    A time-dependent problem in a bounded domain for a fractional diffusion equation is considered. The first-order evolution equation involves a fractional-power second-order elliptic operator with Robin boundary conditions. A finite-element spatial approximation with an additive approximation of the operator of the problem is used. The time approximation is based on a vector scheme. The transition to a new time level is ensured by solving a sequence of standard elliptic boundary value problems. Numerical results obtained for a two-dimensional model problem are presented.

  20. Symmetric quantum fully homomorphic encryption with perfect security

    NASA Astrophysics Data System (ADS)

    Liang, Min

    2013-12-01

    Suppose some data have been encrypted: can you compute with the data without decrypting them? This problem has been studied as homomorphic encryption and blind computing. We consider this problem in the context of quantum information processing, and present the definitions of quantum homomorphic encryption (QHE) and quantum fully homomorphic encryption (QFHE). Then, based on the quantum one-time pad (QOTP), we construct a symmetric QFHE scheme, where the evaluate algorithm depends on the secret key. This scheme permits any unitary transformation on any n-qubit state that has been encrypted. Compared with classical homomorphic encryption, the QFHE scheme has perfect security. Finally, we also construct a QOTP-based symmetric QHE scheme, where the evaluate algorithm is independent of the secret key.
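    For background on the QOTP primitive used above, here is a minimal single-qubit sketch (a textbook quantum one-time pad, not the paper's QFHE construction): the state is encrypted with secret Pauli key bits (a, b) and recovered exactly when the same keys are used for decryption.

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def qotp_encrypt(psi, a, b):
        """Quantum one-time pad on one qubit: apply X^a Z^b with secret bits (a, b)."""
        return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi

    def qotp_decrypt(psi_enc, a, b):
        """Decrypt by applying the inverse operator (X^a Z^b)^-1 = Z^b X^a."""
        return np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ psi_enc

    rng = np.random.default_rng(0)
    a, b = (int(k) for k in rng.integers(0, 2, size=2))  # secret one-time-pad key bits
    psi = np.array([0.6, 0.8j])                           # arbitrary normalized qubit state
    enc = qotp_encrypt(psi, a, b)
    dec = qotp_decrypt(enc, a, b)
    print("key (a, b):", (a, b))
    print("recovered state matches:", np.allclose(dec, psi))
    ```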

  1. First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems

    DTIC Science & Technology

    2014-03-01

    …accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations. … However, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method for the discretization arising from the hyperbolic advection-diffusion system … advection-diffusion scheme. The linear dependency of the iterations on … [Table 1: boundary layer problem; convergence criterion: residuals < 10^-8; results reported against log10 Re.]

  2. Bathymetric Changes Shaped by Longshore Currents on a Natural Beach

    NASA Astrophysics Data System (ADS)

    Reilly, W. L.; Slinn, D.; Plant, N.

    2004-12-01

    The goal of the project is to simulate beach morphology on time scales of hours to days. Our approach is to develop finite difference solutions from a coupled modeling system consisting of existing nearshore circulation, wave, and sediment flux models. We initialize the model with bathymetry from a dense data set north of the pier at the Field Research Facility (FRF) in Duck, NC. We integrate the model system forward in time and compare the results of the hind-cast of the beach evolution with the field observations. The model domain extends 1000 meters in the alongshore direction and 500 meters in the cross-shore direction with 5 meter grid spacing. The bathymetry is interpolated and filtered from CRAB transects. A second-degree exponential smoothing method is used to return the cross-shore beach profile near the edges of the modeled domain back to the mean alongshore profile, because the circulation model implements periodic boundary conditions in the alongshore direction. The offshore wave height and direction are taken from the 8-meter bipod at the FRF and input to the wave model SWAN (Simulating WAves Nearshore), with a Gaussian-shaped frequency spectrum and a directional spreading of 5 degrees. A constant depth-induced wave-breaking parameter of 0.73 is used. The resulting calculated wave-induced force per unit surface area (gradient of the radiation stress) output from SWAN is used to drive the currents in the circulation model. The circulation model is based on the free-surface non-linear shallow water equations and uses a fourth-order compact scheme to calculate spatial derivatives and a third-order Adams-Bashforth time discretization scheme. Free-slip, symmetry boundary conditions are applied at both the shoreline and offshore boundaries. The time-averaged sediment flux is calculated at each location after one hour of circulation. The sediment flux model is based on the approach of Bagnold and includes approximations for both bed load and suspended load. The bathymetry is then updated by computing the divergence of the time-averaged sediment fluxes. The process is then repeated using the updated bathymetry in both SWAN and the circulation model. The cycle continues for a simulation of 10 hours. The results of bathymetric change vary for different time-dependent wave conditions and initial bathymetric profiles. Typical results indicate that for wave heights on the order of one meter, shoreline advancement and sandbar evolution is observed on the order of tens of centimeters.
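    The morphology update described above (divergence of the time-averaged sediment flux) can be written as an Exner-type step; the sketch below is a hedged illustration in which the grid, porosity, update interval, and synthetic flux field are all invented placeholders rather than values from the study.

    ```python
    import numpy as np

    dx = dy = 5.0          # grid spacing [m]
    porosity = 0.4         # bed porosity (illustrative)
    dt_morph = 3600.0      # morphological update interval [s] (one hour of circulation)

    ny, nx = 100, 200
    x = np.arange(nx) * dx
    y = np.arange(ny) * dy
    X, Y = np.meshgrid(x, y)                        # (ny, nx) coordinate arrays
    h = 0.02 * X - 2.0                              # initial bed elevation: plane sloping offshore [m]

    # Synthetic time-averaged sediment fluxes [m^2/s] (placeholders for the model output).
    qx = 1.0e-5 * np.sin(2.0 * np.pi * Y / 500.0)
    qy = 5.0e-6 * np.cos(2.0 * np.pi * X / 200.0)

    # Exner-type update: dz/dt = -div(q) / (1 - porosity), using centred differences.
    dqx_dx = np.gradient(qx, dx, axis=1)
    dqy_dy = np.gradient(qy, dy, axis=0)
    h_new = h - dt_morph * (dqx_dx + dqy_dy) / (1.0 - porosity)

    print("max bed change over one hour [m]:", np.abs(h_new - h).max())
    ```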

  3. Importance biasing scheme implemented in the PRIZMA code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-12-31

    PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. There is a capability to calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles) taking into account possible transmutations. An importance biasing scheme was implemented to solve problems which require calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.

  4. Free energy computations by minimization of Kullback-Leibler divergence: An efficient adaptive biasing potential method for sparse representations

    NASA Astrophysics Data System (ADS)

    Bilionis, I.; Koutsourelakis, P. S.

    2012-05-01

    The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.

  5. Parameterization of bulk condensation in numerical cloud models

    NASA Technical Reports Server (NTRS)

    Kogan, Yefim L.; Martin, William J.

    1994-01-01

    The accuracy of the moist saturation adjustment scheme has been evaluated using a three-dimensional explicit microphysical cloud model. It was found that the error in saturation adjustment depends strongly on the Cloud Condensation Nuclei (CCN) concentration in the ambient atmosphere. The scheme provides rather accurate results in the case where a sufficiently large number of CCN (on the order of several hundred per cubic centimeter) is available. However, under conditions typical of marine stratocumulus cloud layers with low CCN concentration, the error in the amounts of condensed water vapor and released latent heat may be as large as 40%-50%. A revision of the saturation adjustment scheme is devised that employs the CCN concentration, dynamical supersaturation, and cloud water content as additional variables in the calculation of the condensation rate. The revised condensation model reduced the error in maximum updraft and cloud water content in the climatically significant case of marine stratocumulus cloud layers by an order of magnitude.
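    For orientation, the sketch below implements a bare-bones isobaric saturation adjustment (condense any vapor above saturation, release latent heat, and iterate with a linearized correction). It is a generic textbook step, not the revised CCN-dependent scheme proposed in the study, and the thermodynamic constants and parcel state are approximate placeholders.

    ```python
    import numpy as np

    Lv = 2.5e6    # latent heat of vaporization [J/kg]
    cp = 1004.0   # specific heat of dry air at constant pressure [J/(kg K)]
    Rv = 461.5    # gas constant for water vapor [J/(kg K)]

    def qsat(T, p):
        """Saturation mixing ratio from a Tetens-type formula (approximate)."""
        es = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))  # saturation vapor pressure [Pa]
        return 0.622 * es / (p - es)

    def saturation_adjust(T, qv, qc, p, n_iter=5):
        """Isobaric saturation adjustment with a linearized (Newton-like) correction."""
        for _ in range(n_iter):
            qs = qsat(T, p)
            if qv <= qs:
                break
            gamma = 1.0 + Lv**2 * qs / (cp * Rv * T**2)  # latent-heat feedback on qsat
            dq = (qv - qs) / gamma                       # vapor condensed this iteration
            qv -= dq
            qc += dq
            T += Lv * dq / cp                            # latent heating of the parcel
        return T, qv, qc

    T, qv, qc = saturation_adjust(T=285.0, qv=0.012, qc=0.0, p=90000.0)
    print(f"adjusted T = {T:.2f} K, qv = {qv*1e3:.2f} g/kg, qc = {qc*1e3:.2f} g/kg")
    ```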

  6. Effect of wetting-layer density of states on the gain and phase recovery dynamics of quantum-dot semiconductor optical amplifiers

    NASA Astrophysics Data System (ADS)

    Kim, Jungho; Yu, Bong-Ahn

    2015-03-01

    We numerically investigate the effect of the wetting-layer (WL) density of states on the gain and phase recovery dynamics of quantum-dot semiconductor optical amplifiers in both electrical and optical pumping schemes by solving 1088 coupled rate equations. The temporal variations of the ultrafast gain and phase recovery responses at the ground state (GS) are calculated as a function of the WL density of states. The ultrafast gain recovery responses do not significantly depend on the WL density of states in the electrical pumping scheme and the three optical pumping schemes such as the optical pumping to the WL, the optical pumping to the excited state ensemble, and the optical pumping to the GS ensemble. The ultrafast phase recovery responses are also not significantly affected by the WL density of states except the optical pumping to the WL, where the phase recovery component caused by the WL becomes slowed down as the WL density of states increases.

  7. Developing Empirical Lightning Cessation Forecast Guidance for the Cape Canaveral Air Force Station and Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Stano, Geoffrey T.; Fuelberg, Henry E.; Roeder, William P.

    2010-01-01

    This research addresses the 45th Weather Squadron's (45WS) need for improved guidance regarding lightning cessation at Cape Canaveral Air Force Station and Kennedy Space Center (KSC). KSC's Lightning Detection and Ranging (LDAR) network was the primary observational tool to investigate both cloud-to-ground and intracloud lightning. Five statistical and empirical schemes were created from LDAR, sounding, and radar parameters derived from 116 storms. Four of the five schemes were unsuitable for operational use since lightning advisories would be canceled prematurely, leading to safety risks to personnel. These include a correlation and regression tree analysis, three variants of multiple linear regression, event time trending, and the time delay from the greatest height of the maximum dBZ value to the last flash. These schemes failed to adequately forecast the maximum interval, the greatest time between any two flashes in the storm. The majority of storms had a maximum interval less than 10 min, which biased the schemes toward small values. Success was achieved with the percentile method (PM) by separating the maximum interval into percentiles for the 100 dependent storms.

  8. Economic evaluation of progeny-testing and genomic selection schemes for small-sized nucleus dairy cattle breeding programs in developing countries.

    PubMed

    Kariuki, C M; Brascamp, E W; Komen, H; Kahi, A K; van Arendonk, J A M

    2017-03-01

    In developing countries minimal and erratic performance and pedigree recording impede implementation of large-sized breeding programs. Small-sized nucleus programs offer an alternative but rely on their economic performance for their viability. We investigated the economic performance of 2 alternative small-sized dairy nucleus programs [i.e., progeny testing (PT) and genomic selection (GS)] over a 20-yr investment period. The nucleus was made up of 453 male and 360 female animals distributed in 8 non-overlapping age classes. Each year 10 active sires and 100 elite dams were selected. Populations of commercial recorded cows (CRC) of sizes 12,592 and 25,184 were used to produce test daughters in PT or to create a reference population in GS, respectively. Economic performance was defined as gross margins, calculated as discounted revenues minus discounted costs following a single generation of selection. Revenues were calculated as cumulative discounted expressions (CDE, kg) × 0.32 (€/kg of milk) × 100,000 (size commercial population). Genetic superiorities, deterministically simulated using pseudo-BLUP index and CDE, were determined using gene flow. Costs were for one generation of selection. Results show that GS schemes had higher cumulated genetic gain in the commercial cow population and higher gross margins compared with PT schemes. Gross margins were between 3.2- and 5.2-fold higher for GS, depending on size of the CRC population. The increase in gross margin was mostly due to a decreased generation interval and lower running costs in GS schemes. In PT schemes many bulls are culled before selection. We therefore also compared 2 schemes in which semen was stored instead of keeping live bulls. As expected, semen storage resulted in an increase in gross margins in PT schemes, but gross margins remained lower than those of GS schemes. We conclude that implementation of small-sized GS breeding schemes can be economically viable for developing countries. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

  9. Multigrid calculation of three-dimensional turbomachinery flows

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1989-01-01

    Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.

  10. Efficient calculation of nuclear spin-rotation constants from auxiliary density functional theory.

    PubMed

    Zuniga-Gutierrez, Bernardo; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso; Simon-Bastida, Patricia; Calaminici, Patrizia; Köster, Andreas M

    2015-09-14

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to the ones obtained by the coupled-cluster single-doubles-perturbative triples-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H(12)C-(12)CH-DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  11. A New Class of Highly Accurate Differentiation Schemes Based on the Prolate Spheroidal Wave Functions

    DTIC Science & Technology

    2011-04-07

    …predictor-corrector scheme. Such an approach for the solution of time-dependent PDEs, which is sometimes referred to as the "method of lines," is studied. … In particular, λj = i^j |λj|. We define the self-adjoint operator Qc : L^2([−1, 1]) → L^2([−1, 1]) by the formula Qc(φ)(x) = (1/π) ∫_{−1}^{1} [sin(c(x − t))/(x − t)] φ(t) dt. … Gaussian quadratures for bandlimited functions is to use the Newton-type nonlinear optimization algorithm of [14]. Specifically, for bandlimit c and …

  12. Compact exponential product formulas and operator functional derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, M.

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin–Specht–Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.

  13. Quantum Monte Carlo with very large multideterminant wavefunctions.

    PubMed

    Scemama, Anthony; Applencourt, Thomas; Giner, Emmanuel; Caffarel, Michel

    2016-07-01

    An algorithm to compute efficiently the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on the reduction of the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants Ndetσ (σ = ↑,↓) with a non-negligible weight in the expansion is of order O(√Ndet). We show that a careful implementation of the calculation of the Ndet-dependent contributions can make this step negligible enough so that in practice the algorithm scales as the total number of unique spin-specific determinants, Ndet↑ + Ndet↓, over a wide range of total numbers of determinants (here, Ndet up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron fixed-node diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants with a computational increase of ∼400 compared to a single-determinant calculation are shown to be feasible. © 2016 Wiley Periodicals, Inc.
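    The core linear-algebra step referred to above, updating the inverse Slater matrix when a single column changes, follows from the Sherman-Morrison formula. The sketch below is a generic illustration with a random matrix (not the authors' optimized implementation) and also returns the determinant ratio, which is the quantity typically needed in QMC acceptance steps.

    ```python
    import numpy as np

    def replace_column_inverse(Ainv, old_col, new_col, j):
        """Sherman-Morrison update of A^{-1} when column j of A is replaced.

        With the rank-1 change A' = A + u e_j^T, u = new_col - old_col, the update is
            A'^{-1} = A^{-1} - (A^{-1} u)(e_j^T A^{-1}) / (1 + e_j^T A^{-1} u).
        Returns the new inverse and the determinant ratio det(A')/det(A).
        """
        u = new_col - old_col
        Ainv_u = Ainv @ u
        ratio = 1.0 + Ainv_u[j]
        Ainv_new = Ainv - np.outer(Ainv_u, Ainv[j, :]) / ratio
        return Ainv_new, ratio

    rng = np.random.default_rng(1)
    n, j = 6, 2
    A = rng.standard_normal((n, n))
    Ainv = np.linalg.inv(A)
    c = rng.standard_normal(n)                     # new column (e.g., updated orbital values)

    Ainv_new, ratio = replace_column_inverse(Ainv, A[:, j].copy(), c, j)
    A_new = A.copy()
    A_new[:, j] = c
    print("inverse correct:", np.allclose(Ainv_new, np.linalg.inv(A_new)))
    print("det ratio correct:", np.isclose(ratio, np.linalg.det(A_new) / np.linalg.det(A)))
    ```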

  14. Adjoint sensitivity analysis of a tumor growth model and its application to spatiotemporal radiotherapy optimization.

    PubMed

    Fujarewicz, Krzysztof; Lakomiec, Krzysztof

    2016-12-01

    We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.

  15. Renormalization scheme dependence of the two-loop QCD corrections to the neutral Higgs-boson masses in the MSSM.

    PubMed

    Borowka, S; Hahn, T; Heinemeyer, S; Heinrich, G; Hollik, W

    Reaching a theoretical accuracy in the prediction of the lightest MSSM Higgs-boson mass, [Formula: see text], at the level of the current experimental precision requires the inclusion of momentum-dependent contributions at the two-loop level. Recently two groups presented the two-loop QCD momentum-dependent corrections to [Formula: see text] (Borowka et al., Eur Phys J C 74(8):2994, 2014; Degrassi et al., Eur Phys J C 75(2):61, 2015), using a hybrid on-shell-[Formula: see text] scheme, with apparently different results. We show that the differences can be traced back to a different renormalization of the top-quark mass, and that the claim in Ref. Degrassi et al. (Eur Phys J C 75(2):61, 2015) of an inconsistency in Ref. Borowka et al. (Eur Phys J C 74(8):2994, 2014) is incorrect. We furthermore compare consistently the results for [Formula: see text] obtained with the top-quark mass renormalized on-shell and [Formula: see text]. The latter calculation has been added to the FeynHiggs package and can be used to estimate missing higher-order corrections beyond the two-loop level.

  16. Inferring electric fields and currents from ground magnetometer data - A test with theoretically derived inputs

    NASA Technical Reports Server (NTRS)

    Wolf, R. A.; Kamide, Y.

    1983-01-01

    Advanced techniques considered by Kamide et al. (1981) seem to have the potential for providing observation-based, high-time-resolution pictures of the global ionospheric current and electric field patterns for interesting events. However, a reliance on the proposed magnetogram-inversion schemes for the deduction of global ionospheric current and electric field patterns requires proof that reliable results are obtained. 'Theoretical' tests of the accuracy of the magnetogram inversion schemes have, therefore, been considered. The present investigation is concerned with a test involving the developed KRM algorithm and the Rice Convection Model (RCM). The test was successful in the sense that there was overall agreement between electric fields and currents calculated by the RCM and KRM schemes.

  17. Toward computational models of magma genesis and geochemical transport in subduction zones

    NASA Astrophysics Data System (ADS)

    Katz, R.; Spiegelman, M.

    2003-04-01

    The chemistry of material erupted from subduction-related volcanoes records important information about the processes that lead to its formation at depth in the Earth. Self-consistent numerical simulations provide a useful tool for interpreting this data as they can explore the non-linear feedbacks between processes that control the generation and transport of magma. A model capable of addressing such issues should include three critical components: (1) a variable viscosity solid flow solver with smooth and accurate pressure and velocity fields, (2) a parameterization of mass transfer reactions between the solid and fluid phases and (3) a consistent fluid flow and reactive transport code. We report on progress on each of these parts. To handle variable-viscosity solid-flow in the mantle wedge, we are adapting a Patankar-based FAS multigrid scheme developed by Albers (2000, J. Comp. Phys.). The pressure field in this scheme is the solution to an elliptic equation on a staggered grid. Thus we expect computed pressure fields to have smooth gradient fields suitable for porous flow calculations, unlike those of commonly used penalty-method schemes. Use of a temperature and strain-rate dependent mantle rheology has been shown to have important consequences for the pattern of flow and the temperature structure in the wedge. For computing thermal structure we present a novel scheme that is a hybrid of Crank-Nicolson (CN) and Semi-Lagrangian (SL) methods. We have tested the SLCN scheme on advection across a broad range of Peclet numbers and show the results. This scheme is also useful for low-diffusivity chemical transport. We also describe our parameterization of hydrous mantle melting [Katz et al., G3, 2002, in review]. This parameterization is designed to capture the melting behavior of peridotite-water systems over parameter ranges relevant to subduction. The parameterization incorporates data and intuition gained from laboratory experiments and thermodynamic calculations yet it remains flexible and computationally efficient. Given accurate solid-flow fields, a parameterization of hydrous melting and a method for calculating thermal structure (enforcing energy conservation), the final step is to integrate these components into a consistent framework for reactive-flow and chemical transport in deformable porous media. We present preliminary results for reactive flow in 2-D static and upwelling columns and discuss possible mechanical and chemical consequences of open system reactive melting with application to arcs.
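    As a minimal illustration of the semi-Lagrangian ingredient of the hybrid SLCN scheme mentioned above (the Crank-Nicolson diffusion half and all melting physics are omitted), the sketch below advects a 1D periodic tracer by sampling it at the departure points; the velocity, grid, and step size are invented.

    ```python
    import numpy as np

    def semi_lagrangian_step(c, u, x, dt, L):
        """Advect tracer c(x) one step by interpolating c at the departure points x - u*dt."""
        return np.interp(x - u * dt, x, c, period=L)

    nx, L = 200, 1.0
    x = np.linspace(0.0, L, nx, endpoint=False)
    c = np.exp(-((x - 0.3) / 0.05) ** 2)           # initial Gaussian tracer blob
    u, dt, nsteps = 1.0, 0.004, 250                # one full revolution of the periodic domain

    for _ in range(nsteps):
        c = semi_lagrangian_step(c, u, x, dt, L)
    print("peak after one revolution:", c.max())   # slightly < 1 due to interpolation damping
    ```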

  18. Development of highly accurate approximate scheme for computing the charge transfer integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pershin, Anton; Szalay, Péter G.

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
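    For reference, the simplest of the approximate approaches named above, the energy-split-in-dimer estimate, takes the transfer integral of a symmetric dimer as half the splitting between the two relevant dimer levels; the energies below are invented and serve only to show the arithmetic.

    ```python
    # Energy-split-in-dimer (ESD) estimate for a symmetric dimer: the transfer
    # integral is approximated as half the splitting of the two dimer levels
    # formed from the interacting fragment orbitals (hypothetical values in eV).
    E_plus = -7.95
    E_minus = -8.15

    t_esd = abs(E_plus - E_minus) / 2.0
    print(f"ESD transfer integral: {t_esd * 1e3:.0f} meV")
    ```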

  19. MRI-based treatment planning with pseudo CT generated through atlas registration.

    PubMed

    Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-05-01

    To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  20. MRI-based treatment planning with pseudo CT generated through atlas registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uh, Jinsoo, E-mail: jinsoo.uh@stjude.org; Merchant, Thomas E.; Hua, Chiaho

    2014-05-15

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.

  1. MRI-based treatment planning with pseudo CT generated through atlas registration

    PubMed Central

    Uh, Jinsoo; Merchant, Thomas E.; Li, Yimei; Li, Xingyu; Hua, Chiaho

    2014-01-01

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. Conclusions: MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs. PMID:24784377

  2. Fourier Collocation Approach With Mesh Refinement Method for Simulating Transit-Time Ultrasonic Flowmeters Under Multiphase Flow Conditions.

    PubMed

    Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny

    2018-02-01

    A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
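    To illustrate the two numerical building blocks named above, FFT-based Fourier collocation for spatial derivatives and explicit third-order Runge-Kutta time stepping, here is a generic 1D linear-advection sketch rather than the flowmeter model itself; the domain, wave speed, and step sizes are arbitrary.

    ```python
    import numpy as np

    nx, L, c0 = 256, 2.0 * np.pi, 1.0
    x = np.linspace(0.0, L, nx, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)     # collocation wavenumbers

    def ddx(u):
        """Spectral derivative via Fourier collocation on a periodic domain."""
        return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

    def rhs(u):
        """Right-hand side of the linear advection equation u_t = -c0 * u_x."""
        return -c0 * ddx(u)

    def rk3_step(u, dt):
        """Explicit third-order (SSP) Runge-Kutta step."""
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

    u0 = np.sin(x) + 0.5 * np.sin(3.0 * x)
    nsteps = 4000
    dt = (L / c0) / nsteps                             # integrate for exactly one period
    u = u0.copy()
    for _ in range(nsteps):
        u = rk3_step(u, dt)
    print("max error after one period:", np.abs(u - u0).max())
    ```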

  3. Numerical Investigation of Two-Phase Flows With Charged Droplets in Electrostatic Field

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1996-01-01

    A numerical method to solve two-phase turbulent flows with charged droplets in an electrostatic field is presented. The ensemble-averaged Navier-Stokes equations and the electrostatic potential equation are solved using a finite volume method. The transitional turbulence field is described using multiple-time-scale turbulence equations. The equations of motion of droplets are solved using a Lagrangian particle tracking scheme, and the inter-phase momentum exchange is described by the Particle-In-Cell scheme. The electrostatic force caused by an applied electrical potential is calculated using the electrostatic field obtained by solving a Laplace equation, and the force exerted by charged droplets is calculated using the Coulombic force equation. The method is applied to solve electro-hydrodynamic sprays. The calculated droplet velocity distributions for droplet dispersions occurring in a stagnant surrounding are in good agreement with the measured data. For droplet dispersions occurring in a two-phase flow, the droplet trajectories are influenced by aerodynamic forces, the Coulombic force, and the applied electrostatic potential field.
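    The electrostatic force balance on each droplet described above combines the force from the applied field with droplet-droplet Coulomb interactions; a hedged point-charge sketch (uniform applied field, all values invented) is:

    ```python
    import numpy as np

    EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]

    def electrostatic_forces(positions, charges, E_applied):
        """Force on each droplet: q*E from the applied field plus pairwise Coulomb forces."""
        n = len(charges)
        F = charges[:, None] * E_applied[None, :]
        for i in range(n):
            for jj in range(n):
                if i == jj:
                    continue
                r = positions[i] - positions[jj]
                d = np.linalg.norm(r)
                F[i] += charges[i] * charges[jj] * r / (4.0 * np.pi * EPS0 * d**3)
        return F

    # Three hypothetical droplets with equal negative charge in a uniform applied field.
    pos = np.array([[0.0, 0.0, 0.0], [1.0e-3, 0.0, 0.0], [0.0, 1.0e-3, 0.0]])
    q = np.full(3, -1.0e-12)                  # droplet charges [C] (illustrative)
    E = np.array([0.0, 0.0, 1.0e5])           # applied field [V/m]
    print(electrostatic_forces(pos, q, E))
    ```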

  4. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
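    The CFL bookkeeping described above reduces to a few lines: a scheme's stability limit on the CFL number, together with the effective grid spacing and advection speed, fixes the largest stable time step and hence the number of steps (and right-hand-side evaluations) needed for a given integration time. The CFL limits below are illustrative placeholders, not values established in the paper.

    ```python
    # Illustrative CFL bookkeeping for a 2D linear advection run (placeholder numbers).
    a = 1.0            # advection speed
    T_final = 32.0     # long integration: 32 wavelengths
    dx_min = 0.01      # smallest effective grid spacing (shrinks with polynomial order)

    # Hypothetical CFL limits; actual limits depend on the spatial discretisation.
    cfl_limits = {"AB2": 0.35, "RK2": 0.70, "RK4": 1.40}
    stages = {"AB2": 1, "RK2": 2, "RK4": 4}    # right-hand-side evaluations per step

    for name, cfl in cfl_limits.items():
        dt_max = cfl * dx_min / a
        n_steps = int(T_final / dt_max) + 1
        work = n_steps * stages[name]          # crude proxy for CPU cost
        print(f"{name}: dt_max = {dt_max:.4f}, steps = {n_steps}, RHS evaluations = {work}")
    ```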

  5. Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.

    1980-01-01

    Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their applicability to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasi-time relaxation. This scheme is referred to as quasiparabolic, although it applies equally well to hyperbolic supersonic inviscid flows. Second-order windward differences are used in the marching coordinate, and either explicit or linear block implicit time relaxation can be incorporated.

  6. Numerical simulation of turbulence in the presence of shear

    NASA Technical Reports Server (NTRS)

    Shaanan, S.; Ferziger, J. H.; Reynolds, W. C.

    1975-01-01

    Numerical calculations of the large-eddy structure of turbulent flows are presented, by use of the averaged Navier-Stokes equations, where averages are taken over spatial regions small compared to the size of the computational grid. The subgrid components of motion are modeled by a local eddy-viscosity model. A new finite-difference scheme with fourth-order accuracy is proposed to represent the nonlinear averaged advective term. This scheme exhibits several advantages over existing schemes: (1) it is compact, as it extends only one point away in each direction from the point to which it is applied; (2) it gives better resolution for high-wave-number waves in the solution of the Poisson equation; and (3) it reduces programming complexity and computation time. Examples worked out in detail are the decay of isotropic turbulence, homogeneous turbulent shear flow, and homogeneous turbulent shear flow with system rotation.

  7. A novel equivalent definition of Caputo fractional derivative without singular kernel and superconvergent analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Li, Xiaoli

    2018-05-01

    In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
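    For context, the classical L1 approximation of the Caputo derivative of order 0 < α < 1, whose O(τ^(2-α)) accuracy is the baseline that the transformative formulation above improves on, can be written down in a few lines. This is the standard textbook discretization, not the paper's T-Caputo scheme, and the test function is chosen only because its Caputo derivative is known in closed form.

    ```python
    import numpy as np
    from math import gamma

    def caputo_l1(u, tau, alpha):
        """Classical L1 approximation of the Caputo derivative of order alpha (0 < alpha < 1)
        for samples u[0..n] on a uniform grid of step tau, evaluated at the final time."""
        n = len(u) - 1
        k = np.arange(n)
        b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # L1 weights
        du = u[1:][::-1] - u[:-1][::-1]                       # u_{n-k} - u_{n-k-1}, k = 0..n-1
        return tau ** (-alpha) / gamma(2.0 - alpha) * np.dot(b, du)

    # Check against a known result: the Caputo derivative of t**2 is 2*t**(2-alpha)/Gamma(3-alpha).
    alpha, T, nsteps = 0.5, 1.0, 200
    tau = T / nsteps
    t = np.linspace(0.0, T, nsteps + 1)
    approx = caputo_l1(t ** 2, tau, alpha)
    exact = 2.0 * T ** (2.0 - alpha) / gamma(3.0 - alpha)
    print(f"L1 approximation = {approx:.6f}, exact = {exact:.6f}")
    ```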

  8. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than those of their synchronous counterparts. The stability of the AT schemes, which depends on the history and the random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  9. An interlaboratory comparison programme on radio frequency electromagnetic field measurements: the second round of the scheme.

    PubMed

    Nicolopoulou, E P; Ztoupis, I N; Karabetsos, E; Gonos, I F; Stathopulos, I A

    2015-04-01

    The second round of an interlaboratory comparison scheme on radio frequency electromagnetic field measurements has been conducted in order to evaluate the overall performance of laboratories that perform measurements in the vicinity of mobile phone base stations and broadcast antenna facilities. The participants recorded the electric field strength produced by two high frequency signal generators inside an anechoic chamber in three measurement scenarios, with the antennas transmitting each time different signals at the FM, VHF, UHF and GSM frequency bands. In each measurement scenario, the participants also used their measurements to calculate the relative exposure ratios. The results were evaluated at each test level by calculating performance statistics (z-scores and En numbers). Subsequently, possible sources of error for each participating laboratory were discussed, and the overall evaluation of their performance was determined by using an aggregated performance statistic. A comparison between the two rounds proves the necessity of the scheme. © The Author 2014. Published by Oxford University Press. All rights reserved.
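    The two performance statistics named above are the standard proficiency-testing quantities z = (x_lab − x_ref)/σ_p and En = (x_lab − x_ref)/√(U_lab² + U_ref²), with |z| ≤ 2 and |En| ≤ 1 usually taken as satisfactory. The sketch below applies them to invented field-strength results, not to the data of the study.

    ```python
    import math

    def z_score(x_lab, x_ref, sigma_p):
        """z-score: deviation from the reference value in units of the standard
        deviation assigned for the proficiency assessment."""
        return (x_lab - x_ref) / sigma_p

    def en_number(x_lab, x_ref, U_lab, U_ref):
        """En number: deviation normalized by the combined expanded uncertainties."""
        return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

    # Hypothetical electric-field-strength results [V/m] for one test level.
    x_ref, U_ref, sigma_p = 10.0, 0.8, 1.0
    results = {"Lab A": (10.6, 1.0), "Lab B": (7.4, 0.9)}   # (value, expanded uncertainty)
    for lab, (x_lab, U_lab) in results.items():
        z = z_score(x_lab, x_ref, sigma_p)
        en = en_number(x_lab, x_ref, U_lab, U_ref)
        ok = abs(z) <= 2.0 and abs(en) <= 1.0
        print(f"{lab}: z = {z:+.2f}, En = {en:+.2f} -> {'satisfactory' if ok else 'action needed'}")
    ```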

  10. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed previously for grid refinement suffered from certain drawbacks, for example, deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor's expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor's series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flow in homogeneous and heterogeneous confined aquifers. It produces results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National Ground Water Association.

  11. The determinants of bond angle variability in protein/peptide backbones: A comprehensive statistical/quantum mechanics analysis.

    PubMed

    Improta, Roberto; Vitagliano, Luigi; Esposito, Luciana

    2015-11-01

    The elucidation of the mutual influence between peptide bond geometry and local conformation has important implications for protein structure refinement, validation, and prediction. To gain insights into the structural determinants and the energetic contributions associated with protein/peptide backbone plasticity, we here report an extensive analysis of the variability of the peptide bond angles by combining statistical analyses of protein structures with quantum mechanics calculations on small model peptide systems. Our analyses demonstrate that all the backbone bond angles strongly depend on the peptide conformation and unveil the existence of regular trends as functions of ψ and/or φ. The excellent agreement of the quantum mechanics calculations with the statistical surveys of protein structures validates the computational scheme employed here and demonstrates that the valence geometry of the protein/peptide backbone is primarily dictated by local interactions. Notably, for the first time we show that the position of the H(α) hydrogen atom, which is an important parameter in NMR structural studies, also depends on the local conformation. Most of the observed trends can be satisfactorily explained by invoking steric repulsive interactions; in some specific cases the valence bond variability is also influenced by hydrogen bond-like interactions. Moreover, we provide a reliable estimate of the energies involved in the interplay between geometry and conformation. © 2015 Wiley Periodicals, Inc.

  12. Simulation of the Summer Monsoon Rainfall over East Asia using the NCEP GFS Cumulus Parameterization at Different Horizontal Resolutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Kyo-Sun; Hong, Song You; Yoon, Jin-Ho

    2014-10-01

    The most recent version of the Simplified Arakawa-Schubert (SAS) cumulus scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) (GFS SAS) has been implemented into the Weather Research and Forecasting (WRF) model, with the triggering condition and convective mass flux modified to depend on the model's horizontal grid spacing. The East Asian summer monsoon of 2006, from June to August, is selected to evaluate the performance of the modified GFS SAS scheme. The simulated monsoon rainfall with the modified GFS SAS scheme shows better agreement with observations than the original GFS SAS scheme. The original GFS SAS scheme simulates a similar ratio of subgrid-scale precipitation, which is calculated from the cumulus scheme, to total precipitation regardless of the model's horizontal grid spacing. This is counter-intuitive because the portion of resolved clouds in a grid box should increase as the model grid spacing decreases. This counter-intuitive behavior of the original GFS SAS scheme is alleviated by the modified GFS SAS scheme. Further, three different cumulus schemes (Grell and Freitas, Kain and Fritsch, and Betts-Miller-Janjic) are chosen to investigate the role of horizontal resolution in the simulated monsoon rainfall. The performance of high-resolution modeling is not always enhanced as the spatial resolution becomes higher. Even though the improvement in the probability density function of rain rate and in the longwave fluxes at higher resolution is robust regardless of the choice of cumulus parameterization scheme, the overall skill score for surface rainfall does not increase monotonically with spatial resolution.

  13. Enforcing the Courant–Friedrichs–Lewy condition in explicitly conservative local time stepping schemes

    DOE PAGES

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-01-30

    In this study, an optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a condition on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
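
    A minimal sketch of the general idea of constraining each patch's time step by its neighbours' steps as well as by its own CFL limit; the CFL number and the neighbour-ratio bound (a factor of 2 here) are assumptions for illustration, not the specific condition derived in the paper.

        # dx[i], vmax[i]: cell size and fastest signal speed on patch i
        # neighbours[i]: indices of patches adjacent to patch i
        def local_time_steps(dx, vmax, neighbours, cfl=0.5, ratio=2.0):
            dt = [cfl * dx[i] / vmax[i] for i in range(len(dx))]   # local CFL limits
            changed = True
            while changed:                  # relax until the neighbour bound holds everywhere
                changed = False
                for i, nbrs in enumerate(neighbours):
                    bound = min(ratio * dt[j] for j in nbrs) if nbrs else dt[i]
                    if dt[i] > bound:
                        dt[i] = bound
                        changed = True
            return dt

        # Three patches in a row; a fast wave crosses the middle patch only.
        print(local_time_steps(dx=[0.1, 0.1, 0.1], vmax=[1.0, 10.0, 1.0],
                               neighbours=[[1], [0, 2], [1]]))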

  14. A Simple Algebraic Grid Adaptation Scheme with Applications to Two- and Three-dimensional Flow Problems

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.; Lytle, John K.

    1989-01-01

    An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
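
    A minimal sketch of algebraic adaptation by equidistributing a gradient-based weight along the coordinate; the particular weight function, smoothing, and interpolation used here are assumptions rather than the exact formulation of the cited scheme.

        import numpy as np

        def adapt_grid(x, f, beta=2.0, smooth_passes=2):
            w = 1.0 + beta * np.abs(np.gradient(f, x))     # weight is large where gradients are large
            for _ in range(smooth_passes):                 # simple smoothing of the weight
                w[1:-1] = 0.25 * w[:-2] + 0.5 * w[1:-1] + 0.25 * w[2:]
            s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
            s_new = np.linspace(0.0, s[-1], len(x))        # equal increments of the weighted arc
            return np.interp(s_new, s, x)                  # invert the cumulative weight

        x = np.linspace(0.0, 1.0, 41)
        f = np.tanh(40.0 * (x - 0.5))                      # sharp layer at x = 0.5
        x_adapted = adapt_grid(x, f)                       # points cluster near the layer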

  15. Implicit Total Variation Diminishing (TVD) schemes for steady-state calculations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Warming, R. F.; Harten, A.

    1983-01-01

    The application of a new implicit, unconditionally stable, high-resolution total variation diminishing (TVD) scheme to steady-state calculations is presented. It is a member of a one-parameter family of explicit and implicit second-order accurate schemes developed by Harten for the computation of weak solutions of hyperbolic conservation laws. This scheme is guaranteed not to generate spurious oscillations for a nonlinear scalar equation and a constant-coefficient system. Numerical experiments show that this scheme not only has a rapid convergence rate, but also generates a highly resolved approximation to the steady-state solution. A detailed implementation of the implicit scheme for the one- and two-dimensional compressible inviscid equations of gas dynamics is presented. Some numerical computations of one- and two-dimensional fluid flows containing shocks demonstrate the efficiency and accuracy of this new scheme.
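
    For flavour only, the sketch below applies an explicit minmod-limited (TVD) update to linear advection with periodic boundaries; the cited work concerns an implicit TVD scheme for hyperbolic conservation laws, so this illustrates the limiting idea, not that scheme.

        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def tvd_step(u, a, dx, dt):
            # u_t + a u_x = 0 with a > 0; limited slopes keep the update free of new extrema
            slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
            u_face = u + 0.5 * (1.0 - a * dt / dx) * slope     # value at the i+1/2 face (upwind cell)
            flux = a * u_face
            return u - dt / dx * (flux - np.roll(flux, 1))

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)          # square wave stays oscillation-free
        for _ in range(100):
            u = tvd_step(u, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]))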

  16. Glue Spin and Helicity in the Proton from Lattice QCD.

    PubMed

    Yang, Yi-Bo; Sufian, Raza Sabbir; Alexandru, Andrei; Draper, Terrence; Glatzmaier, Michael J; Liu, Keh-Fei; Zhao, Yong

    2017-03-10

    We report the first lattice QCD calculation of the glue spin in the nucleon. The lattice calculation is carried out with valence overlap fermions on 2+1 flavor domain-wall fermion gauge configurations on four lattice spacings and four volumes, including an ensemble with physical values for the quark masses. The glue spin S_G in the Coulomb gauge in the modified minimal subtraction (MS-bar) scheme is obtained with one-loop perturbative matching. We find the results fairly insensitive to lattice spacing and quark masses. We also find that the proton momentum dependence of S_G in the range 0 ≤ |p| < 1.5 GeV is very mild, and we determine it in the large-momentum limit to be S_G = 0.251(47)(16) at the physical pion mass in the MS-bar scheme at μ² = 10 GeV². If the matching procedure in large-momentum effective theory is neglected, S_G is equal to the glue helicity measured in high-energy scattering experiments.

  17. Full-scale computation for all the thermoelectric property parameters of half-Heusler compounds

    DOE PAGES

    Hong, A. J.; Li, L.; He, R.; ...

    2016-03-07

    The thermoelectric performance of materials relies substantially on the band structures that determine the electronic and phononic transport, while the transport behaviors compete and counteract in the power factor PF and figure of merit ZT. These issues make a full-scale computation of the whole set of thermoelectric parameters particularly attractive, while a calculation scheme for the electronic and phononic contributions to the thermal conductivity remains challenging. In this work, we present a full-scale computation scheme based on first-principles calculations, choosing a set of doped half-Heusler compounds as examples for illustration. The electronic structure is computed using the WIEN2k code, and the carrier relaxation times for electrons and holes are calculated using the Bardeen and Shockley deformation potential (DP) theory. The finite-temperature electronic transport is evaluated within the framework of Boltzmann transport theory. In sequence, density functional perturbation theory combined with the quasi-harmonic approximation and the Klemens equation is implemented to calculate the lattice thermal conductivity of carrier-doped thermoelectric materials such as Ti-doped NbFeSb compounds, without loss of generality. The calculated results show good agreement with experimental data. The present methodology represents an effective and powerful approach to calculating the whole set of thermoelectric properties for thermoelectric materials.
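
    Once the electronic and phononic transport coefficients are available, the power factor and figure of merit follow from their textbook definitions, PF = S²σ and ZT = S²σT/(κ_e + κ_L); the sketch below evaluates them for placeholder values, not for the compounds studied.

        def power_factor(S, sigma):
            return S ** 2 * sigma                              # PF = S^2 * sigma  [W m^-1 K^-2]

        def figure_of_merit(S, sigma, kappa_e, kappa_L, T):
            return power_factor(S, sigma) * T / (kappa_e + kappa_L)   # ZT = S^2 sigma T / kappa

        # Placeholder values: S = 200 uV/K, sigma = 1e5 S/m, kappa_e = 1.5 and kappa_L = 3.0 W/m/K, T = 700 K
        print(figure_of_merit(200e-6, 1.0e5, 1.5, 3.0, 700.0))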

  18. Full-scale computation for all the thermoelectric property parameters of half-Heusler compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, A. J.; Li, L.; He, R.

    The thermoelectric performance of materials relies substantially on the band structures that determine the electronic and phononic transport, while the transport behaviors compete and counteract in the power factor PF and figure of merit ZT. These issues make a full-scale computation of the whole set of thermoelectric parameters particularly attractive, while a calculation scheme for the electronic and phononic contributions to the thermal conductivity remains challenging. In this work, we present a full-scale computation scheme based on first-principles calculations, choosing a set of doped half-Heusler compounds as examples for illustration. The electronic structure is computed using the WIEN2k code, and the carrier relaxation times for electrons and holes are calculated using the Bardeen and Shockley deformation potential (DP) theory. The finite-temperature electronic transport is evaluated within the framework of Boltzmann transport theory. In sequence, density functional perturbation theory combined with the quasi-harmonic approximation and the Klemens equation is implemented to calculate the lattice thermal conductivity of carrier-doped thermoelectric materials such as Ti-doped NbFeSb compounds, without loss of generality. The calculated results show good agreement with experimental data. The present methodology represents an effective and powerful approach to calculating the whole set of thermoelectric properties for thermoelectric materials.

  19. MO-FG-202-08: Real-Time Monte Carlo-Based Treatment Dose Reconstruction and Monitoring for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Shi, F; Gu, X

    2016-06-15

    Purpose: This proof-of-concept study is to develop a real-time Monte Carlo (MC) based treatment-dose reconstruction and monitoring system for radiotherapy, especially for treatments with complicated delivery, to catch treatment delivery errors at the earliest possible opportunity and interrupt the treatment only when an unacceptable dosimetric deviation from expectation occurs. Methods: First, an offline scheme is launched to pre-calculate the expected dose from the treatment plan, used as ground truth for real-time monitoring later. Then an online scheme with three concurrent threads is launched during treatment delivery to reconstruct and monitor the patient dose in a temporally resolved fashion in real time. Thread T1 acquires the machine status every 20 ms to calculate and accumulate the fluence map (FM). Once the accumulation threshold is reached, T1 transfers the FM to T2 for dose reconstruction and starts to accumulate a new FM. A GPU-based MC dose calculation is performed on T2 when the MC dose engine is ready and a new FM is available. The reconstructed instantaneous dose is directed to T3 for dose accumulation and real-time visualization. Multiple dose metrics (e.g. maximum and mean dose for targets and organs) are calculated from the currently accumulated dose and compared with the pre-calculated expected values. Once the discrepancies go beyond the tolerance, an error message is sent to interrupt the treatment delivery. Results: A VMAT head-and-neck patient case was used to test the performance of our system. Real-time machine status acquisition was simulated. The differences between the actual dose metrics and the expected ones were 0.06%-0.36%, indicating an accurate delivery. A dose reconstruction and monitoring frequency of ~10 Hz was achieved, with 287.94 s of online computation time compared to 287.84 s of treatment delivery time. Conclusion: Our study has demonstrated the feasibility of computing a dose distribution in a temporally resolved fashion in real time and quantitatively and dosimetrically monitoring the treatment delivery.
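
    A schematic, toy-scale sketch of the three-thread pipeline described above (fluence accumulation, dose reconstruction, accumulation and monitoring); the "dose engine", thresholds, and timings are stand-ins, not the actual GPU Monte Carlo system.

        import queue
        import random
        import threading
        import time

        fm_q, dose_q = queue.Queue(), queue.Queue()
        expected_total, tolerance = 10.0, 0.05         # assumed pre-calculated reference and tolerance

        def t1_acquire(n_samples=200, chunk=5):
            fm, count = 0.0, 0
            for _ in range(n_samples):
                fm += random.uniform(0.03, 0.05)       # stand-in for one machine-status read
                count += 1
                if count == chunk:                     # accumulation threshold reached
                    fm_q.put(fm)
                    fm, count = 0.0, 0
                time.sleep(0.001)
            fm_q.put(None)                             # sentinel: delivery finished

        def t2_reconstruct():
            while (fm := fm_q.get()) is not None:
                dose_q.put(0.25 * fm)                  # toy stand-in for the MC dose of this chunk
            dose_q.put(None)

        def t3_monitor():
            total = 0.0
            while (d := dose_q.get()) is not None:
                total += d
                if total > expected_total * (1.0 + tolerance):
                    print("interrupt: accumulated dose exceeds expectation")
                    break
            print("accumulated dose:", round(total, 3))

        threads = [threading.Thread(target=f) for f in (t1_acquire, t2_reconstruct, t3_monitor)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()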

  20. Multi-step Monte Carlo calculations applied to nuclear reactor instrumentation - source definition and renormalization to physical values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radulovic, Vladimir; Barbot, Loic; Fourmentel, Damien

    Significant efforts have been made over the last few years in the French Alternative Energies and Atomic Energy Commission (CEA) to adopt multi-step Monte Carlo calculation schemes in the investigation and interpretation of the response of nuclear reactor instrumentation detectors (e.g. miniature ionization chambers - MICs and self-powered neutron or gamma detectors - SPNDs and SPGDs). The first step consists of the calculation of the primary data, i.e. evaluation of the neutron and gamma flux levels and spectra in the environment where the detector is located, using a computational model of the complete nuclear reactor core and its surroundings. These data are subsequently used to define sources for the following calculation steps, in which only a model of the detector under investigation is used. This approach enables calculations with satisfactory statistical uncertainties (of the order of a few %) within regions which are very small in size (the typical volume of which is of the order of 1 mm³). The main drawback of a calculation scheme as described above is that perturbation effects on the radiation conditions caused by the detectors themselves are not taken into account. Depending on the detector, the nuclear reactor and the irradiation position, the perturbation in the neutron flux as primary data may reach 10 to 20%. A further issue is whether the model used in the second step calculations yields physically representative results. This is generally not the case, as significant deviations may arise, depending on the source definition. In particular, as presented in the paper, the injudicious use of special options aimed at increasing the computation efficiency (e.g. reflective boundary conditions) may introduce unphysical bias in the calculated flux levels and distortions in the spectral shapes. This paper presents examples of the issues described above related to a case study on the interpretation of the signal from different types of SPNDs, which were recently irradiated in the Jozef Stefan Institute TRIGA Mark II reactor in Ljubljana, Slovenia, and provides recommendations on how they can be overcome. The paper concludes with a discussion on the renormalization of the results from the second step calculations, to obtain accurate physical values. (authors)

  1. A class of the van Leer-type transport schemes and its application to the moisture transport in a general circulation model

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Chao, Winston C.; Sud, Y. C.; Walker, G. K.

    1994-01-01

    A generalized form of the second-order van Leer transport scheme is derived. Several constraints to the implied subgrid linear distribution are discussed. A very simple positive-definite scheme can be derived directly from the generalized form. A monotonic version of the scheme is applied to the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) for the moisture transport calculations, replacing the original fourth-order center-differencing scheme. Comparisons with the original scheme are made in idealized tests as well as in a summer climate simulation using the full GLA GCM. A distinct advantage of the monotonic transport scheme is its ability to transport sharp gradients without producing spurious oscillations and unphysical negative mixing ratio. Within the context of low-resolution climate simulations, the aforementioned characteristics are demonstrated to be very beneficial in regions where cumulus convection is active. The model-produced precipitation pattern using the new transport scheme is more coherently organized both in time and in space, and correlates better with observations. The side effect of the filling algorithm used in conjunction with the original scheme is also discussed, in the context of idealized tests. The major weakness of the proposed transport scheme with a local monotonic constraint is its substantial implicit diffusion at low resolution. Alternative constraints are discussed to counter this problem.
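
    A minimal sketch of one kind of monotonic constraint on the subgrid linear distribution: the cell slope is limited so that reconstructed edge values stay within the range of the neighbouring cell means, which suppresses spurious oscillations and negative mixing ratios; this illustrates the general idea only, not the specific constraints derived in the paper.

        import numpy as np

        def monotonic_slopes(q):
            dq_avg = 0.5 * (np.roll(q, -1) - np.roll(q, 1))             # centred slope estimate
            q_max = np.maximum.reduce([np.roll(q, 1), q, np.roll(q, -1)])
            q_min = np.minimum.reduce([np.roll(q, 1), q, np.roll(q, -1)])
            bound = 2.0 * np.minimum(q_max - q, q - q_min)              # keep edge values in range
            return np.sign(dq_avg) * np.minimum(np.abs(dq_avg), bound)

        q = np.array([0.0, 0.0, 1.0, 5.0, 5.0, 5.0, 0.2, 0.0])          # sharp moisture-like front
        print(monotonic_slopes(q))                                      # zero slope at local extrema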

  2. Coriolis-coupled wave packet dynamics of H + HLi reaction.

    PubMed

    Padmanaban, R; Mahapatra, S

    2006-05-11

    We investigated the effect of Coriolis coupling (CC) on the initial state-selected dynamics of the H + HLi reaction by a time-dependent wave packet (WP) approach. Exact quantum scattering calculations were obtained by a WP propagation method based on the Chebyshev polynomial scheme and an ab initio potential energy surface of the reacting system. Partial wave contributions up to the total angular momentum J = 30 were found to be necessary for the scattering of HLi in its vibrational and rotational ground state up to a collision energy of approximately 0.75 eV. For each J value, the projection quantum number K was varied from 0 to min(J, K_max), with K_max = 8 up to J = 20 and K_max = 4 for higher J values, because higher values of K have little effect on the dynamics and because the computational overhead of each calculation must be kept within affordable limits. The initial state-selected integral reaction cross sections and thermal rate constants were calculated by summing up the contributions from all partial waves. These were compared with our previous results on the title system, obtained within the centrifugal sudden and J-shifting approximations, to demonstrate the impact of CC on the dynamics of this system.
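
    The final summation steps use standard formulas: for a reactant in its rotational ground state, σ(E) = (π/k²) Σ_J (2J+1) P_J(E), and the thermal rate constant is the Boltzmann average of σ(E). The sketch below encodes these sums; the reduced mass and the reaction probabilities P_J(E) are placeholders, not results of the wave packet calculation.

        import numpy as np

        hbar, kB = 1.054571817e-34, 1.380649e-23      # SI units
        mu = 1.2e-27                                  # assumed reduced mass in kg

        def cross_section(E, P_J):
            # E in joules; P_J: reaction probabilities for J = 0 .. Jmax at this energy
            k2 = 2.0 * mu * E / hbar ** 2
            J = np.arange(len(P_J))
            return np.pi / k2 * np.sum((2 * J + 1) * P_J)

        def rate_constant(T, energies, sigmas):
            # k(T) = sqrt(8 / (pi mu (kB T)^3)) * integral of sigma(E) E exp(-E / kB T) dE
            w = energies * np.exp(-energies / (kB * T))
            return np.sqrt(8.0 / (np.pi * mu * (kB * T) ** 3)) * np.trapz(sigmas * w, energies)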

  3. The finite element method scheme for a solution of an evolution variational inequality with a nonlocal space operator

    NASA Astrophysics Data System (ADS)

    Glazyrina, O. V.; Pavlova, M. F.

    2016-11-01

    We consider a parabolic variational inequality whose space operator is monotone with respect to the gradient and depends on an integral (over the space variables) characteristic of the solution. We construct a two-layer difference scheme for this problem using the penalty method, semidiscretization in time, and the finite element method (FEM) in space. We prove the convergence of the constructed method.

  4. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for the problem of gas flow under gravity are presented. The algorithm combines the ability to capture shock waves with high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are solved numerically on structured or unstructured grids. To improve the accuracy of the calculations, it is necessary to choose a sufficiently fine grid (with a small cell size), which has the drawback of substantially increasing the computation time. Therefore, for the calculation of complex problems it is reasonable to use Adaptive Mesh Refinement (AMR): the grid is refined only in the areas of interest, where, for example, shock waves are generated or complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, the execution of the application on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. The AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are implemented for calculation on graphics processors using CUDA technology [1].

  5. Proxy-SU(3) symmetry in heavy deformed nuclei

    NASA Astrophysics Data System (ADS)

    Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.

    2017-06-01

    Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.

  6. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth-order WENO scheme or a second-order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth-order WENO scheme. This selective usage of the fifth-order WENO and second-order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which has a form similar to the conventional reinitialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  7. Alternative process schemes for coal conversion. Progress report No. 1, October 1, 1978--January 31, 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sansone, M.J.

    1979-02-01

    On the basis of simple, first-approximation calculations, it has been shown that catalytic gasification and hydrogasification are inherently superior to conventional gasification with respect to carbon utilization and thermal efficiency. However, most processes directed toward the production of substitute natural gas (SNG) by direct combination of coal with steam at low temperatures (catalytic processes) or with hydrogen (hydrogasification) will require a step for the separation of product SNG from a recycle stream. The success or failure of the process could well depend upon the economics of this separation scheme. The energetics of the separation of mixtures of ideal gases has been considered in some detail. Minimum energies for the complete separation of representative effluent mixtures have been calculated, as well as energies for separation into product and recycle streams. The gas mixtures include binary systems of H2 and CH4 and ternary mixtures of H2, CH4, and CO. A brief summary of a number of different real separation schemes has also been included. We have arbitrarily divided these into five categories: liquefaction, absorption, adsorption, chemical, and diffusional methods. These separation methods will be screened and the more promising methods examined in more detail in later reports. Finally, a brief mention of alternative coal conversion processes concludes this report.
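
    For ideal gases at constant temperature and pressure, the minimum (reversible, isothermal) work of separation follows from the Gibbs free energy of mixing; the sketch below computes it for complete separation and for a split into product and recycle streams (stream compositions and the split fraction must satisfy the species mole balance). Values are illustrative.

        from math import log

        R = 8.314  # J / (mol K)

        def mixing_gibbs(x, T):
            # Gibbs free energy of mixing per mole of ideal-gas mixture (J/mol), negative for any mixture
            return R * T * sum(xi * log(xi) for xi in x if xi > 0.0)

        def w_min_complete(x, T):
            # minimum isothermal work to separate 1 mol of mixture into pure components
            return -mixing_gibbs(x, T)

        def w_min_split(x_feed, x_out1, x_out2, frac1, T):
            # minimum work to split 1 mol of feed into frac1 mol of stream 1 and (1 - frac1) mol of stream 2
            return (frac1 * mixing_gibbs(x_out1, T)
                    + (1.0 - frac1) * mixing_gibbs(x_out2, T)
                    - mixing_gibbs(x_feed, T))

        # Example: equimolar H2/CH4 mixture at 300 K, roughly R*T*ln(2) = 1.73 kJ/mol for full separation.
        print(w_min_complete([0.5, 0.5], 300.0))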

  8. Systematic comparison of jet energy-loss schemes in a realistic hydrodynamic medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, Steffen A.; Majumder, Abhijit; Gale, Charles

    2009-02-15

    We perform a systematic comparison of three different jet energy-loss approaches. These include the Armesto-Salgado-Wiedemann scheme based on the approach of Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z/ASW), the higher twist (HT) approach, and a scheme based on the Arnold-Moore-Yaffe (AMY) approach. In this comparison, an identical medium evolution is utilized for all three approaches: this entails not only the use of the same realistic three-dimensional relativistic fluid dynamics (RFD) simulation, but also the use of identical initial parton-distribution functions and final fragmentation functions. We are, thus, in a unique position to not only isolate fundamental differences between the various approaches but also make rigorous calculations for different experimental measurements using state-of-the-art components. All three approaches are reduced to versions containing only one free tunable parameter, which is then related to the well-known transport parameter q. We find that the parameters of all three calculations can be adjusted to provide a good description of inclusive data on R_AA versus transverse momentum. However, we do observe slight differences in their predictions for the centrality and azimuthal angular dependence of R_AA versus p_T. We also note that the values of the transport coefficient q needed in the three approaches to describe the data differ significantly.

  9. Accelerating NBODY6 with graphics processing units

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Aarseth, Sverre J.

    2012-07-01

    We describe the use of graphics processing units (GPUs) for speeding up the code NBODY6 which is widely used for direct N-body simulations. Over the years, the N² nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 per cent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10⁴-2 × 10⁵ for a dual-GPU system attached to a standard PC.
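
    A minimal sketch of the regular/irregular force split behind a neighbour scheme: particles inside a neighbour sphere contribute an "irregular" force that is recomputed frequently (on the host in NBODY6-GPU), while the many distant particles contribute a "regular" force recomputed less often (on the GPU). The softening, neighbour radius, and masses below are illustrative.

        import numpy as np

        def split_forces(pos, mass, i, r_nb, eps=1e-3):
            d = pos - pos[i]
            r2 = np.einsum('ij,ij->i', d, d) + eps ** 2        # softened squared distances
            r2[i] = np.inf                                     # exclude self-interaction
            acc = (mass / r2 ** 1.5)[:, None] * d              # pairwise accelerations on particle i
            near = r2 < r_nb ** 2                              # neighbour sphere membership
            return acc[near].sum(axis=0), acc[~near].sum(axis=0)   # (irregular, regular)

        rng = np.random.default_rng(1)
        pos = rng.normal(size=(1000, 3))
        mass = np.full(1000, 1.0 / 1000)
        a_irr, a_reg = split_forces(pos, mass, i=0, r_nb=0.5)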

  10. Computational flow field in energy efficient engine (EEE)

    NASA Astrophysics Data System (ADS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2016-11-01

    In this paper, preliminary results for the recently-updated Open National Combustion Code (Open NCC) as applied to the EEE are presented. The comparison between two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the advection upstream splitting method (AUSM), is performed for the cold flow and the reacting flow calculations using the RANS. In the cold flow calculation, the AUSM scheme predicts a much stronger reverse flow in the central recirculation zone. In the reacting flow calculation, we test two cases: gaseous fuel injection and liquid spray injection. In the gaseous fuel injection case, the overall flame structures of the two schemes are similar to one another, in the sense that the flame is attached to the main nozzle, but is detached from the pilot nozzle. However, in the exit temperature profile, the AUSM scheme shows a more uniform profile than that of the JST scheme, which is close to the experimental data. In the liquid spray injection case, we expect different flame structures in this scenario. We will give a brief discussion on how the two numerical schemes predict the flame structures inside the EEE using different ways to introduce the fuel injection. Supported by NASA's Transformational Tools and Technologies project.

  11. Computational Flow Field in Energy Efficient Engine (EEE)

    NASA Technical Reports Server (NTRS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2016-01-01

    In this paper, preliminary results for the recently-updated Open National Combustion Code (Open NCC) as applied to the EEE are presented. The comparison between two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the advection upstream splitting method (AUSM), is performed for the cold flow and the reacting flow calculations using the RANS. In the cold flow calculation, the AUSM scheme predicts a much stronger reverse flow in the central recirculation zone. In the reacting flow calculation, we test two cases: gaseous fuel injection and liquid spray injection. In the gaseous fuel injection case, the overall flame structures of the two schemes are similar to one another, in the sense that the flame is attached to the main nozzle, but is detached from the pilot nozzle. However, in the exit temperature profile, the AUSM scheme shows a more uniform profile than that of the JST scheme, which is close to the experimental data. In the liquid spray injection case, we expect different flame structures in this scenario. We will give a brief discussion on how two numerical schemes predict the flame structures inside the EEE using different ways to introduce the fuel injection.

  12. A fast numerical scheme for causal relativistic hydrodynamics with dissipation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro

    2011-08-01

    Highlights: We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. Since we use a Riemann solver for the advection steps, our method can capture shocks very accurately. - Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process with the timescale for collisions of the constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and use piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculation into an inviscid step and a dissipative step, a Riemann solver can be used to obtain the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high-energy phenomena of astrophysics and nuclear physics.
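
    A minimal sketch of the splitting idea only: a stiff relaxation q_t = -(q - q_eq)/τ_R is advanced with its exact exponential solution and Strang-split around a simple upwind advection step that stands in for the Riemann-solver step of the actual scheme; all fields and parameters are illustrative.

        import numpy as np

        def relax_exact(q, q_eq, tau_R, dt):
            return q_eq + (q - q_eq) * np.exp(-dt / tau_R)     # exact, stable for any dt / tau_R

        def advect_upwind(q, a, dx, dt):
            return q - a * dt / dx * (q - np.roll(q, 1))       # first-order upwind, a > 0, periodic

        def strang_step(q, q_eq, a, dx, dt, tau_R):
            q = relax_exact(q, q_eq, tau_R, 0.5 * dt)          # half relaxation
            q = advect_upwind(q, a, dx, dt)                    # full advection step
            return relax_exact(q, q_eq, tau_R, 0.5 * dt)       # half relaxation

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        q, q_eq = np.exp(-200.0 * (x - 0.3) ** 2), np.zeros_like(x)
        for _ in range(100):
            q = strang_step(q, q_eq, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), tau_R=1e-4)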

  13. Current Collection in a Magnetic Field

    NASA Technical Reports Server (NTRS)

    Krivorutsky, E. N.

    1997-01-01

    It is found that the upper-bound limit for current collection in the case of a strong magnetic field from the current is close to that given by the Parker-Murphy formula. This conclusion is consistent with the results obtained in laboratory experiments. This limit weakly depends on the shape of the wire. The adiabatic limit in this case will be easily surpassed due to strong magnetic field gradients near the separatrix. The calculations can be done using the kinetic equation in the drift approximation. Analytical results are obtained for the region where the Earth's magnetic field is dominant. The current collection can be calculated (neglecting scattering) using a particle simulation code. Dr. Singh has agreed to collaborate, allowing the use of his particle code. The code can be adapted for the case when the current magnetic field is strong. The time needed for these modifications is 3-4 months. The analytical description and the essential part of the program are prepared for the calculation of the current in the region where the adiabatic description can be used. This was completed in collaboration with Drs. Khazanov and Liemohn. A scheme for measuring the end body position is also proposed. The scheme was discussed in the laboratory (with Dr. Stone) and it was concluded that it can be proposed for engineering analysis.

  14. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
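
    A generic sketch of the residual-threshold part of such a scheme: measured sensor outputs are compared with model-predicted outputs and a channel is flagged when its residual persistently exceeds a threshold. The thresholds, channels, and persistence count are assumptions; the cited work additionally estimates fault parameters on-line for isolation and diagnosis.

        import numpy as np

        def detect_sensor_faults(measured, expected, thresholds, persist=3):
            # measured, expected: arrays of shape (n_samples, n_sensors)
            residual = np.abs(measured - expected)
            exceed = residual > thresholds                     # per-sample, per-sensor flags
            run = np.zeros(measured.shape[1], dtype=int)
            faulty = np.zeros(measured.shape[1], dtype=bool)
            for row in exceed:                                 # declare a fault after `persist` hits in a row
                run = np.where(row, run + 1, 0)
                faulty |= run >= persist
            return faulty

        rng = np.random.default_rng(0)
        expected = np.tile([100.0, 800.0, 95.0], (50, 1))      # hypothetical torque / temperature / speed channels
        measured = expected + rng.normal(0.0, [0.5, 2.0, 0.3], size=expected.shape)
        measured[30:, 1] += 25.0                               # inject a bias fault on the second sensor
        print(detect_sensor_faults(measured, expected, thresholds=np.array([2.0, 8.0, 1.5])))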

  15. Calculation of atmospheric neutrino flux using the interaction model calibrated with atmospheric muon data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honda, M.; Kajita, T.; Kasahara, K.

    2007-02-15

    Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007)], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004)], but the usage of the 'virtual detector' is improved to reduce the error due to it. We then study the uncertainty of the calculated atmospheric neutrino flux by summarizing the uncertainties of the individual components of the simulation. The uncertainty of K-production in the interaction model is estimated using other interaction models, FLUKA'97 and FRITIOF 7.02, modified so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and the zenith angle dependence of the atmospheric neutrino flux are also studied.

  16. An Extended Eddy-Diffusivity Mass-Flux Scheme for Unified Representation of Subgrid-Scale Turbulence and Convection

    NASA Astrophysics Data System (ADS)

    Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.

    2018-03-01

    Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterizations schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent up and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of up and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.

  17. Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.

    PubMed

    Suk, Heejun

    2016-07-01

    MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner, from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at an old time level. However, the above calculation is an approximate method because it does not involve backward tracking in MMOC and HMOC or allow performing forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of the MT3DMS to deal with the mass transport problems of all flow regimes. © 2016, National Ground Water Association.

  18. Determination of efficiencies, loss mechanisms, and performance degradation factors in chopper controlled dc vehicle motors. Section 2: The time dependent finite element modeling of the electromagnetic field in electrical machines: Methods and applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hamilton, H. B.; Strangas, E.

    1980-01-01

    The time-dependent solution of the magnetic field is introduced as a method for accounting for the variation in time of the machine parameters in predicting and analyzing the performance of electrical machines. The time-dependent finite element method was used in combination with a likewise time-dependent construction of the grid for the air gap region. The Maxwell stress tensor was used to calculate the air gap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper-controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.

  19. Dual-stage periodic event-triggered output-feedback control for linear systems.

    PubMed

    Ruan, Zhen; Chen, Wu-Hua; Lu, Xiaomei

    2018-05-01

    This paper proposes an event-triggered control framework, called dual-stage periodic event-triggered control (DSPETC), which unifies periodic event-triggered control (PETC) and switching event-triggered control (SETC). Specifically, two period parameters h1 and h2 are introduced to characterize the new event-triggering rule, where h1 denotes the sampling period and h2 denotes the monitoring period. By choosing specific values of h2, the proposed control scheme reduces to the PETC or SETC scheme. In the DSPETC framework, the controlled system is represented as a switched system model and its stability is analyzed via a switching-time-dependent Lyapunov functional. Both the cases with and without network-induced delays are investigated. Simulation and experimental results show that the DSPETC scheme is superior to the PETC scheme and the SETC scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
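
    A generic sketch of the two-period idea: the state is sampled every h1 seconds, the event condition is checked only at monitoring instants spaced h2 apart, and the control input is refreshed only when the check fires. The quadratic trigger used below is a common placeholder, not the paper's exact rule, and the plant, gains, and parameters are assumptions.

        import numpy as np

        def simulate(A, B, K, x0, h1=0.01, h2=0.05, sigma=0.3, T=2.0):
            n_steps = int(T / h1)
            check_every = max(1, int(round(h2 / h1)))          # monitoring instants are multiples of h2
            x, x_hold, events = x0.copy(), x0.copy(), 0
            for k in range(n_steps):
                if k % check_every == 0 and np.linalg.norm(x - x_hold) > sigma * np.linalg.norm(x):
                    x_hold, events = x.copy(), events + 1      # event: transmit state, refresh control
                u = K @ x_hold                                 # control held between events
                x = x + h1 * (A @ x + B @ u)                   # forward-Euler plant update
            return x, events

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        K = np.array([[-1.0, -1.0]])
        print(simulate(A, B, K, x0=np.array([1.0, 0.0])))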

  20. Finite Difference Time Marching in the Frequency Domain: A Parabolic Formulation for the Convective Wave Equation

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Kreider, K. L.

    1996-01-01

    An explicit finite difference iteration scheme is developed to study harmonic sound propagation in ducts. To reduce storage requirements for large 3D problems, the time-dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier-transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time-dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to other frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
