NASA Astrophysics Data System (ADS)
Hirthe, Eugenia M.; Graf, Thomas
2012-12-01
The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented in the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.] and to the solutal problem of free convection in fractured-porous media [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91]. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.
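The control idea in this abstract, limiting the local temporal truncation error to a user-defined tolerance by adapting the step without iteration, can be sketched on a scalar ODE. The explicit Euler/Heun pair, the safety factor, and all constants below are illustrative assumptions, not the HydroGeoSphere implementation:

```python
import math

def adaptive_step_ode(f, y0, t_end, dt0=0.1, tol=1e-4, safety=0.9):
    """Non-iterative truncation-error-controlled time stepping (sketch).

    Per step, a first-order (explicit Euler) and a second-order (Heun)
    estimate are compared; their difference approximates the local temporal
    truncation error, and the next step size is chosen so that the error
    tracks the tolerance `tol` (order-2 control: dt ~ sqrt(tol/err)).
    """
    t, y, dt = 0.0, y0, dt0
    history = []
    while t < t_end:
        dt = min(dt, t_end - t)               # do not overshoot the horizon
        k1 = f(t, y)
        y1 = y + dt * k1                      # first-order predictor
        k2 = f(t + dt, y1)
        y2 = y + 0.5 * dt * (k1 + k2)         # second-order (Heun) corrector
        err = abs(y2 - y1)                    # truncation-error estimate
        if err <= tol or dt < 1e-12:
            t, y = t + dt, y2                 # accept the step
            history.append((t, y, dt))
        # non-iterative control: same formula grows and shrinks the step
        dt *= safety * math.sqrt(tol / max(err, 1e-16))
    return y, history
```

For y' = -y the scheme settles on a nearly uniform step of roughly sqrt(2*tol) and tracks the exact decay closely.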
Adaptive time stepping in biomolecular dynamics.
Franklin, J; Doniach, S
2005-09-22
We present an adaptive time stepping scheme based on the extrapolative method of Barth and Schlick [LN, J. Chem. Phys. 109, 1633 (1998)] to numerically integrate the Langevin equation with a molecular-dynamics potential. This approach allows us to use (on average) a time step for the strong nonbonded force integration corresponding to half the period of the fastest bond oscillation, without compromising the slow degrees of freedom in the problem. We show with simple examples how the dynamic step size stabilizes integration operators, and discuss some of the limitations of such stability. The method introduced uses a slightly more accurate inner integrator than LN to accommodate the larger steps. The adaptive time step approach reproduces temporal features of the bovine pancreatic trypsin inhibitor (BPTI) test system (similar to the one used in the original introduction of LN) compared to short-time integrators, but with energies that are shifted with respect to both LN, and traditional stochastic versions of Verlet. Although the introduction of longer steps has the effect of systematically heating the bonded components of the potential, the temporal fluctuations of the slow degrees of freedom are reproduced accurately. The purpose of this paper is to display a mechanism by which the resonance traditionally associated with using time steps corresponding to half the period of oscillations in molecular dynamics can be avoided. This has theoretical utility in terms of designing numerical integration schemes--the key point is that by factoring a propagator so that time steps are not constant one can recover stability with an overall (average) time step at a resonance frequency. There are, of course, limitations to this approach associated with the complicated, nonlinear nature of the molecular-dynamics (MD) potential (i.e., it is not as straightforward as the linear test problem we use to motivate the method). While the basic notion remains in the full Newtonian problem
Accurate Monotonicity-Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving and uniformly high-order accurate. Numerical experiments for the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
Adaptive time steps in trajectory surface hopping simulations
NASA Astrophysics Data System (ADS)
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
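The protocol described above, keeping the normal step outside critical regions and subdividing it inside them, can be sketched generically. The `diagnostic` predicate below stands in for the paper's orbital-mixing criterion and is a placeholder assumption:

```python
def propagate(diagnostic, t_end, dt_full=0.5, n_sub=4):
    """Sketch of an adaptive-step protocol for trajectory propagation.

    The full step `dt_full` is used whenever `diagnostic(t)` is False;
    when it flags a critical region (e.g. strong active/inactive orbital
    mixing, modeled here as a user-supplied predicate), the step is
    subdivided into `n_sub` smaller steps.  Returns the time points visited.
    """
    t, times = 0.0, [0.0]
    while t < t_end - 1e-12:
        dt = dt_full / n_sub if diagnostic(t) else dt_full
        t = min(t + dt, t_end)
        times.append(t)
    return times
```

With a critical window on [1, 2), the grid is refined only there, so the extra cost is confined to the problematic region, which is the point of the protocol.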
An adaptive time-stepping strategy for solving the phase field crystal model
Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires a long time to reach steady state, so a large-time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model, with the time steps adaptively determined from the time derivative of the corresponding energy. The proposed time step adaptivity resolves not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that CPU time is significantly reduced for long-time simulations.
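A minimal sketch of an energy-based step selector in the spirit described above, tying the step size to the time derivative of the energy; the functional form and the constants `alpha`, `dt_min`, `dt_max` are illustrative assumptions rather than the paper's exact settings:

```python
def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e2):
    """Energy-derivative-based time step selection (sketch).

    When the energy changes rapidly (large |dE/dt|) the step shrinks toward
    dt_min; as the solution approaches steady state (dE/dt -> 0) the step
    grows toward dt_max.  `alpha` tunes the sensitivity.
    """
    return max(dt_min, dt_max / (1.0 + alpha * dE_dt ** 2) ** 0.5)
```

Paired with an unconditionally energy stable scheme, this lets the solver coast with large steps through the slow coarsening stages while refining automatically during fast transients.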
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C.-C.; Vatsa, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100.0 x 10(exp 6). Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
Block Time Step Storage Scheme for Astrophysical N-body Simulations
NASA Astrophysics Data System (ADS)
Cai, Maxwell Xu; Meiron, Yohai; Kouwenhoven, M. B. N.; Assmann, Paulina; Spurzem, Rainer
2015-08-01
Astrophysical research in recent decades has made significant progress thanks to the availability of various N-body simulation techniques. With the rapid development of high-performance computing technologies, modern simulations have been able to use the computing power of massively parallel clusters with more than 10^5 GPU cores. While unprecedented accuracy and dynamical scales have been achieved, the enormous amount of data being generated continuously poses great challenges for the subsequent procedures of data analysis and archiving. In this paper, we propose an adaptive storage scheme for simulation data, inspired by the block time step (BTS) integration scheme found in a number of direct N-body integrators available nowadays. The proposed BTS storage scheme minimizes data redundancy by assigning individual output frequencies to the data as required by the researcher. As demonstrated by benchmarks, the proposed scheme is applicable to a wide variety of simulations. Although developed primarily for direct N-body simulation data, the methodology is transferable to grid-based or tree-based simulations where hierarchical time stepping is used.
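The redundancy-minimizing idea, assigning each particle its own output interval, can be sketched as a schedule generator. The power-of-two intervals and the dictionary layout are assumptions for illustration, not the paper's file format:

```python
def bts_output_schedule(intervals, n_steps):
    """Block-time-step-style storage schedule (illustrative sketch).

    `intervals[i]` is the output interval (in base steps, ideally a power
    of two) assigned to particle i; particle i is written at step n iff
    n % intervals[i] == 0.  Fast-evolving particles get small intervals,
    slow ones large, which removes the redundancy of dumping every
    particle at every step.
    """
    schedule = {}
    for n in range(1, n_steps + 1):
        schedule[n] = [i for i, k in enumerate(intervals) if n % k == 0]
    return schedule
```

For three particles with intervals 1, 2, and 4 over four base steps, only 7 records are written instead of the 12 a uniform dump would produce.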
NASA Astrophysics Data System (ADS)
Commerçon, B.; Debout, V.; Teyssier, R.
2014-03-01
Context. Implicit solvers present strong limitations when used on supercomputing facilities and in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application for protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former unique time step method ranges from factors of 5 to 50, depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is shown to be capable of taking larger time steps than the original one. Following the modified strategy, LTS TVD versions of Yee's upwind TVD scheme and the Yee-Roe-Davis symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multidimensional problems by a time-splitting procedure, and a boundary-condition treatment suitable for the LTS schemes is imposed. Numerical experiments on Sod's shock tube problem and on inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies of the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
Explicit large time-step schemes for the shallow water equations
NASA Technical Reports Server (NTRS)
Turkel, E.; Zwas, G.
1979-01-01
Modifications to explicit finite difference schemes for solving the shallow water equations for meteorological applications, in which the time step is increased for the fast gravity waves, are analyzed. Terms associated with the gravity waves in the shallow water equations are treated on a coarser grid than those associated with the slow Rossby waves, which contain much more of the available energy and must be treated with higher accuracy. This enables a several-fold increase in time step without degrading the accuracy of the solution. The method is presented in Cartesian and spherical coordinates for a rotating earth, using generalized leapfrog, frozen coefficient, and Fourier filtering finite difference schemes. Computational results verify the numerical stability of the approach.
NASA Technical Reports Server (NTRS)
Mohan, Ram V.; Tamma, Kumar K.
1993-01-01
An adaptive time stepping strategy for transient thermal analysis of engineering systems is described which computes the time step based on the local truncation error, provides good global error control, and obtains optimal time steps to be used during the analysis. Combined mesh partitionings involving FEM/FVM meshes, based on the physical situation, are also proposed to obtain numerically improved physical representations. Numerical test cases are described and comparative pros and cons are identified for practical situations.
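A common way to realize such local-truncation-error control is step doubling: compare one full step with two half steps and size the next step from the discrepancy. The sketch below applies it to a simple cooling ODE with explicit Euler, an illustrative stand-in for the paper's transient thermal FEM/FVM setting:

```python
def step_doubling(f, y, t, dt):
    """One step-doubling error estimate for explicit Euler (order p = 1)."""
    full = y + dt * f(t, y)                        # one full step
    half = y + 0.5 * dt * f(t, y)                  # two half steps
    two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)
    err = abs(two_half - full)                     # ~ local truncation error
    return two_half, err

def integrate(f, y0, t_end, dt0=0.05, tol=1e-4):
    """Advance y' = f(t, y) with the near-optimal rule dt ~ (tol/err)^(1/2)."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_new, err = step_doubling(f, y, t, dt)
        if err <= tol:
            t, y = t + dt, y_new                   # accept
        dt *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
    return y
```

For Newton cooling T' = -2(T - 20) from T(0) = 100, the steps are small during the initial fast transient and grow as the solution relaxes.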
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of large-Δt total-variation-diminishing (TVD) constraints.
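The extension described above is easy to verify numerically for first-order upwind advection, whose standard von Neumann amplification factor is G(c, θ) = 1 - c + c·exp(-iθ):

```python
import cmath
import math

def upwind_G(c, theta):
    """Amplification factor of first-order upwind advection, 0 <= c <= 1."""
    return 1.0 - c + c * cmath.exp(-1j * theta)

def extended_G(c, theta):
    """Large-Delta-t extension: G_ext = exp(-i N theta) * G(c - N),
    where N is the integer part of the Courant number c."""
    N = int(c)
    dc = c - N                # fractional part, 0 <= dc < 1
    return cmath.exp(-1j * N * theta) * upwind_G(dc, theta)
```

Since the prefactor exp(-iNθ) has unit modulus, |G_ext| = |G(Δc)| ≤ 1 for every wavenumber, so the extended scheme is von Neumann stable at any Courant number, exactly as the abstract claims.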
Borg-Graham, L J
2000-01-01
Various improvements are described for the simulation of biophysically and anatomically detailed compartmental models of single neurons and networks of neurons. These include adaptive time-step integration and a reordering of the circuit matrix to allow ideal voltage clamp of arbitrary nodes. We demonstrate how the adaptive time-step method can give accuracy equivalent to a fixed time-step method for typical current clamp simulation protocols, with about a 2.5-fold reduction in runtime. The ideal voltage clamp method is shown to be more stable than the nonideal case, in particular when used with the adaptive time-step method. Simulation results are presented using the Surf-Hippo Neuron Simulation System, a public domain object-oriented simulator written in Lisp. PMID:10809013
An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers
Gelb, Anne; Archibald, Richard K
2015-01-01
Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high order filtering still requires cost prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, but also take advantage of the accuracy of a high order filter.
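A standard exponential spectral filter illustrates the kind of mode damping being adapted here; the order `p` and strength `alpha` below are conventional illustrative values, and an adaptive variant in the spirit of the abstract would vary them with a local smoothness sensor rather than fixing them globally:

```python
import math

def exponential_filter(coeffs, p=8, alpha=36.0):
    """Exponential spectral filter sigma(k) = exp(-alpha * (k/N)^p) (sketch).

    Low modes pass essentially unchanged; the highest (potentially spurious)
    modes are damped to machine-epsilon level.  A high order p keeps the
    filter from being unnecessarily diffusive on well-resolved modes.
    """
    N = len(coeffs) - 1
    return [c * math.exp(-alpha * (k / N) ** p) for k, c in enumerate(coeffs)]
```

Applied to a flat spectrum, the mean mode is barely touched while the top mode is effectively zeroed, which is the selectivity that lets larger stable time steps coexist with accuracy away from discontinuities.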
NASA Astrophysics Data System (ADS)
Gupta, Shubhangi; Wohlmuth, Barbara; Helmig, Rainer
2016-05-01
We present an extrapolation-based semi-implicit multi-rate time stepping (MRT) scheme and a compound-fast MRT scheme for a naturally partitioned, multi-time-scale hydro-geomechanical hydrate reservoir model. The performance of the two MRT methods is evaluated in terms of speed-up and accuracy by comparison with an iteratively coupled solution scheme, and their advantages and disadvantages are discussed. We observe that the extrapolation-based semi-implicit method gives a higher speed-up but is strongly dependent on the relative time scales of the latent (slow) and active (fast) components. The compound-fast method, on the other hand, is more robust and less sensitive to the relative time scales, but gives a lower speed-up than the semi-implicit method, especially when the relative time scales of the active and latent components are comparable.
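The compound-fast idea, one macro step for the latent component plus sub-cycling for the active one, can be sketched on a hypothetical linear fast/slow pair; the model equations and coefficients below are assumptions for illustration, not the hydrate reservoir equations:

```python
def compound_fast_step(x, y, dt, m=10):
    """One compound-fast macro step (sketch, hypothetical 2-variable model).

    The slow (latent) variable x takes a single explicit Euler step of size
    dt; the fast (active) variable y is sub-cycled with m Euler substeps of
    size dt/m, with x linearly interpolated across the macro step.
    Model: x' = -x + y (slow), y' = -50 (y - x) (fast).
    """
    x_new = x + dt * (-x + y)                 # slow component, one macro step
    h = dt / m
    yk = y
    for k in range(m):
        xk = x + (k / m) * (x_new - x)        # interpolated slow value
        yk = yk + h * (-50.0 * (yk - xk))     # fast substep
    return x_new, yk
```

The chosen model conserves x + y/50, so starting near consensus at (1.0, 0.9) the pair should settle close to the weighted equilibrium x = y ≈ 0.998; the macro step dt = 0.1 would be unstable for the fast equation without the m = 10 sub-cycling.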
Multi time-step wavefront reconstruction for tomographic adaptive-optics systems.
Ono, Yoshito H; Akiyama, Masayuki; Oya, Shin; Lardière, Olivier; Andersen, David R; Correia, Carlos; Jackson, Kate; Bradley, Colin
2016-04-01
In tomographic adaptive-optics (AO) systems, errors due to tomographic wavefront reconstruction limit the performance and the angular size of the scientific field of view (FoV) over which AO correction is effective. We propose a multi time-step tomographic wavefront reconstruction method that reduces the tomographic error by using measurements from both the current and previous time steps simultaneously. We further outline a method to feed the reconstructor with the wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arcmin in diameter by a factor of 1.5-1.8 compared to the classical tomographic reconstructor, depending on the guide star asterism and assuming perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator under laboratory conditions. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method, with errors below 2 m s^{-1}. With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2-1.5, consistent with the prediction from the end-to-end numerical simulation. PMID:27140785
NASA Astrophysics Data System (ADS)
Shi, Fengyan; Kirby, James T.; Harris, Jeffrey C.; Geiman, Joseph D.; Grilli, Stephan T.
We present a high-order adaptive time-stepping TVD solver for the fully nonlinear Boussinesq model of Chen (2006), extended to include moving reference level as in Kennedy et al. (2001). The equations are reorganized in order to facilitate high-order Runge-Kutta time-stepping and a TVD type scheme with a Riemann solver. Wave breaking is modeled by locally switching to the nonlinear shallow water equations when the Froude number exceeds a certain threshold. The moving shoreline boundary condition is implemented using the wetting-drying algorithm with the adjusted wave speed of the Riemann solver. The code is parallelized using the Message Passing Interface (MPI) with non-blocking communication. Model validations show good performance in modeling wave shoaling, breaking, wave runup and wave-averaged nearshore circulation.
NASA Astrophysics Data System (ADS)
Shi, F.; Kirby, J. T.; Tehranirad, B.
2010-12-01
Recent progress in the development of Boussinesq-type wave models using TVD-MUSCL schemes has shown robust performance of the shock-capturing method in simulating breaking waves and coastal inundation (Tonelli and Petti, 2009; Roeber et al., 2010; Shiach and Mingham, 2009; Erduran et al., 2005; and others). Shock-capturing schemes make the treatment of wave breaking straightforward, without the artificial viscosity adopted in some breaking wave models such as Kennedy et al. (2000). The schemes are also able to capture the sharp wave front occurring in the swash zone. A high-order temporal scheme usually requires uniform time-stepping, decreasing model efficiency in applications to breaking waves and inundation where super-critical flow conditions limit the time step associated with the CFL criterion. In this presentation, we describe the use of a higher-order, adaptive time-stepping algorithm using the Runge-Kutta method in a fully nonlinear Boussinesq wave model. Higher-order numerical schemes in both space and time are applied in order to avoid contamination of the physical dispersive terms in the Boussinesq equations by truncation errors of a lower-order (second-order) approximation. The spatial derivatives are discretized using a combination of finite-volume and finite-difference methods. A fourth-order MUSCL reconstruction technique is used in the Riemann solver. The model code is parallelized for the MPI computational environment. We illustrate the model's application to problems of wave runup and coastal inundation in the context of a standard suite of benchmark tests.
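The CFL-limited step mentioned above is computed from the fastest characteristic signal speed, |u| + sqrt(g h) for shallow-water-type systems. A minimal sketch for a 1-D grid (the array layout and CFL value are illustrative):

```python
import math

def cfl_time_step(u, h, dx, cfl=0.5, g=9.81):
    """Adaptive time step from the CFL criterion (sketch).

    `u` and `h` are per-cell depth-averaged velocity and water depth; the
    limiting signal speed |u| + sqrt(g h) is evaluated cell by cell, and the
    step is the CFL fraction of the grid-crossing time of the fastest wave.
    """
    smax = max(abs(ui) + math.sqrt(g * hi) for ui, hi in zip(u, h))
    return cfl * dx / smax
```

Re-evaluating this each Runge-Kutta step is what lets the model keep large steps offshore while automatically refining where super-critical swash-zone flow appears.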
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable
An implicit time-stepping scheme for rigid body dynamics with Coulomb friction
STEWART,DAVID; TRINKLE,JEFFREY C.
2000-02-15
In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, the method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, the method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three-dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.
Simulating diffusion processes in discontinuous media: A numerical scheme with constant time steps
Lejay, Antoine; Pichot, Geraldine
2012-08-30
In this article, we propose new Monte Carlo techniques for moving a diffusive particle in discontinuous media. In this framework, we characterize the stochastic process that governs the positions of the particle. The key tool is the reduction of the process to a skew Brownian motion (SBM). In a zone where the coefficients are locally constant on each side of the discontinuity, the new position of the particle after a constant time step is sampled from the exact distribution of the SBM process at the considered time. To do so, we propose two different but equivalent algorithms: a two-step simulation with a stop at the discontinuity, and a one-step direct simulation of the SBM dynamics. Some benchmark tests illustrate their effectiveness.
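For a particle sitting exactly at the discontinuity, the position of a skew Brownian motion after one time step has a simple exact form: the magnitude of a centered Gaussian with a sign biased by the skewness parameter. The sketch below samples from that distribution (the mapping from the diffusivity contrast to `beta` is left out here):

```python
import random

def sbm_step_at_interface(beta, sigma, rng=random):
    """One exact draw of skew Brownian motion started at the interface (sketch).

    Started exactly at the discontinuity, the SBM at time t is distributed as
    S * |N(0, sigma^2)|, where S = +1 with probability (1 + beta)/2 and -1
    otherwise; beta in (-1, 1) encodes the jump in the diffusion coefficient,
    and sigma^2 plays the role of the variance accumulated over the step.
    """
    mag = abs(rng.gauss(0.0, sigma))                   # |Gaussian| magnitude
    sign = 1.0 if rng.random() < 0.5 * (1.0 + beta) else -1.0
    return sign * mag
```

Away from the interface the particle moves as ordinary Brownian motion, so this draw is only needed for steps that start at (or are stopped at) the discontinuity, which is precisely the structure of the two-step algorithm in the abstract.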
An Efficient Time-Stepping Scheme for Ab Initio Molecular Dynamics Simulations
NASA Astrophysics Data System (ADS)
Tsuchida, Eiji
2016-08-01
In ab initio molecular dynamics simulations of real-world problems, the simple Verlet method is still widely used for integrating the equations of motion, while more efficient algorithms are routinely used in classical molecular dynamics. We show that if the Verlet method is used in conjunction with pre- and postprocessing, the accuracy of the time integration is significantly improved with only a small computational overhead. We also propose several extensions of the algorithm required for use in ab initio molecular dynamics. The validity of the processed Verlet method is demonstrated in several examples including ab initio molecular dynamics simulations of liquid water. The structural properties obtained from the processed Verlet method are found to be sufficiently accurate even for large time steps close to the stability limit. This approach results in a 2× performance gain over the standard Verlet method for a given accuracy. We also show how to generate a canonical ensemble within this approach.
Runge-Kutta time-stepping schemes with TVD central differencing for the water hammer equations
NASA Astrophysics Data System (ADS)
Wahba, E. M.
2006-10-01
In the present study, Runge-Kutta schemes are used to simulate unsteady flow in elastic pipes due to sudden valve closure. The spatial derivatives are discretized using a central difference scheme. Second-order dissipative terms are added in regions of high gradients while they are switched off in smooth flow regions using a total variation diminishing (TVD) switch. The method is applied to both one- and two-dimensional water hammer formulations. Both laminar and turbulent flow cases are simulated. Different turbulence models are tested including the Baldwin-Lomax and Cebeci-Smith models. The results of the present method are in good agreement with analytical results and with experimental data available in the literature. The two-dimensional model is shown to predict more accurately the frictional damping of the pressure transient. Moreover, through order of magnitude and dimensional analysis, a non-dimensional parameter is identified that controls the damping of pressure transients in elastic pipes.
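A gradient-based switch of the kind described, with dissipation active only in high-gradient regions, can be sketched with a normalized second-difference sensor; the exact TVD switch used in the paper may differ:

```python
def dissipation_switch(p, kappa=1.0):
    """Normalized second-difference sensor for switching dissipation (sketch).

    Returns a coefficient per cell: exactly zero in smooth (locally linear)
    regions, O(1) near steep gradients, so second-order dissipative terms
    are added only where they are needed.  `kappa` is a tuning constant.
    """
    nu = [0.0] * len(p)
    for i in range(1, len(p) - 1):
        num = abs(p[i + 1] - 2.0 * p[i] + p[i - 1])
        den = abs(p[i + 1]) + 2.0 * abs(p[i]) + abs(p[i - 1])
        nu[i] = kappa * num / max(den, 1e-30)
    return nu
```

On a linear pressure profile the sensor vanishes identically, leaving the central scheme undamped; across a pressure jump it rises toward O(1) and switches the dissipation on.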
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. The execution of the coupled codes therefore usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time, with the extent of the decoupling usually determined by a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only reduce the run time but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced while the thermal-hydraulic code is marching with a relatively large time step. (authors)
Gavrea, B. I.; Anitescu, M.; Potra, F. A.; Mathematics and Computer Science; Univ. of Pennsylvania; Univ. of Maryland
2008-01-01
In this work we present a framework for the convergence analysis in a measure differential inclusion sense of a class of time-stepping schemes for multibody dynamics with contacts, joints, and friction. This class of methods solves one linear complementarity problem per step and contains the semi-implicit Euler method, as well as trapezoidal-like methods for which second-order convergence was recently proved under certain conditions. By using the concept of a reduced friction cone, the analysis includes, for the first time, a convergence result for the case that includes joints. An unexpected intermediary result is that we are able to define a discrete velocity function of bounded variation, although the natural discrete velocity function produced by our algorithm may have unbounded variation.
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid size and variable time step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
NASA Astrophysics Data System (ADS)
Tavakoli, Rouhollah
2016-01-01
An unconditionally energy stable time stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed by combining David Eyre's time stepping scheme with a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
Toggweiler, Matthias; Adelmann, Andreas; Arbenz, Peter; Yang, Jianjun
2014-09-15
We show that adaptive time stepping in particle accelerator simulation is an enhancement for certain problems. The new algorithm has been implemented in the OPAL (Object Oriented Parallel Accelerator Library) framework. The idea is to adjust the frequency of costly self-field calculations, which are needed to model Coulomb interaction (space charge) effects. In analogy to a Kepler orbit simulation that requires a higher time step resolution at the close encounter, we propose to choose the time step based on the magnitude of the space charge forces. Inspired by geometric integration techniques, our algorithm chooses the time step proportional to a function of the current phase space state, instead of calculating a local error estimate as a conventional adaptive procedure does. Building on recent work, a more rigorous argument is given for how exactly the time step should be chosen. An intermediate algorithm, initially built to allow a clearer analysis by introducing separate time steps for external-field and self-field integration, turned out to be useful in its own right for a large class of problems.
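The state-dependent step selection described above can be illustrated on the Kepler analogy the authors invoke. The sketch below is not the OPAL implementation; the force law, the constant `c`, and the dt ~ r^(3/2) scaling are illustrative assumptions chosen so that the step shrinks at close encounters:

```python
import math

def kepler_adaptive(x, y, vx, vy, t_end, c=0.05):
    """Integrate a unit-mass Kepler orbit with a state-dependent time
    step dt ~ c * r**1.5: smaller steps near the attracting center,
    analogous to refining near strong space-charge forces."""
    t = 0.0
    while t < t_end - 1e-12:
        r = math.hypot(x, y)
        dt = min(c * r ** 1.5, t_end - t)   # step chosen from the state
        # one velocity-Verlet step with the chosen dt
        ax, ay = -x / r ** 3, -y / r ** 3
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        r = math.hypot(x, y)
        vx += 0.5 * dt * (-x / r ** 3)
        vy += 0.5 * dt * (-y / r ** 3)
        t += dt
    return x, y, vx, vy

# Usage: a circular orbit over one period; the energy -0.5 is preserved well
x, y, vx, vy = kepler_adaptive(1.0, 0.0, 0.0, 1.0, 2 * math.pi)
```

Note that, as the abstract's geometric-integration framing suggests, the step is a function of the state alone; no local error estimate is ever formed.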
Empirical versus time stepping with embedded error control for density-driven flow in porous media
NASA Astrophysics Data System (ADS)
Younes, Anis; Ackerer, Philippe
2010-08-01
Modeling density-driven flow in porous media may require very long computational times due to the nonlinear coupling between the flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme, which adapts the time step size according to the performance of the iterative nonlinear solver, is compared to an adaptive time stepping scheme in which the time step length is controlled by the temporal truncation error. Results of simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on truncation error control, (3) a non-iterative scheme with proper time step management can be faster and lead to a more accurate solution than the standard iterative procedure with empirical time stepping, and (4) the temporal truncation error can have a significant effect on the results and can be considered one of the reasons for the differences observed in the Elder numerical results.
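As a minimal illustration of truncation-error-based step control of the kind compared above (not the authors' code; the model problem, tolerance, and controller constants are assumptions), a step can be accepted or rejected by comparing a first-order predictor with a second-order corrector:

```python
def adaptive_integrate(f, y0, t0, t_end, dt0, tol):
    """Integrate dy/dt = f(t, y) with Heun's method; the gap between the
    Euler predictor and the Heun corrector estimates the local temporal
    truncation error and drives the step-size controller."""
    t, y, dt = t0, y0, dt0
    history = [(t, y)]
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_euler = y + dt * k1                  # first-order predictor
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)      # second-order corrector
        err = abs(y_heun - y_euler)            # ~ local truncation error
        if err <= tol or dt < 1e-12:           # accept the step
            t, y = t + dt, y_heun
            history.append((t, y))
        # grow/shrink dt toward the error target (whether accepted or not)
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return history

# Usage: dy/dt = -y, exact solution exp(-t)
path = adaptive_integrate(lambda t, y: -y, 1.0, 0.0, 5.0, 0.5, 1e-4)
```

The empirical alternative discussed in the abstract would instead grow or shrink dt from the nonlinear solver's iteration count, with no direct link to the temporal error.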
An adaptive Cartesian control scheme for manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of an auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
Crowder, D W; Onstad, D W
2005-04-01
We expanded a simulation model of the population dynamics and genetics of the western corn rootworm for a landscape of corn, soybean, and other crops to study the simultaneous development of resistance to both crop rotation and transgenic corn. Transgenic corn effective against corn rootworm was approved in 2003 and may be a very effective new technology for control of western corn rootworm in areas with or without the rotation-resistant variant. In simulations of areas with rotation-resistant populations, planting transgenic corn in only rotated cornfields was a robust strategy to prevent resistance to both traits. In these areas, planting transgenic corn in only continuous cornfields was not an effective strategy for preventing adaptation to crop rotation or transgenic corn. In areas without rotation-resistant phenotypes, gene expression of the allele for resistance to transgenic corn was the most important factor affecting the development of resistance to transgenic corn. If the allele for resistance to transgenic corn is recessive, resistance can be delayed longer than 15 yr, but if the resistance allele is dominant, then resistance usually developed within 15 yr. In a sensitivity analysis, among the parameters investigated, initial allele frequency and density dependence were the two most important factors affecting the evolution of resistance. We compared the results of this simulation model with a more complicated model, and results between the two were similar. This indicates that results from a simpler model with a generational time-step can compare favorably with a more complex model with a daily time-step.
NASA Astrophysics Data System (ADS)
Herrendoerfer, R.; van Dinther, Y.; Gerya, T.
2015-12-01
To explore the relationships between subduction dynamics and the megathrust earthquake potential, we have recently developed a numerical model that bridges the gap between processes on geodynamic and earthquake cycle time scales. In a self-consistent, continuum-based framework including a visco-elasto-plastic constitutive relationship, cycles of megathrust earthquake-like ruptures were simulated through a purely slip-rate-dependent friction, albeit with very low slip rates (van Dinther et al., JGR, 2013). In addition to much faster earthquakes, a range of aseismic slip processes operate at different time scales in nature. These aseismic processes likely accommodate a considerable amount of the plate convergence and are thus relevant for estimating the long-term seismic coupling and related hazard in subduction zones. To simulate and resolve this wide spectrum of slip processes, we implemented rate- and state-dependent friction (RSF) and adaptive time-stepping in our continuum framework. The RSF formulation, in contrast to our previous friction formulation, takes the dependency of frictional strength on a state variable into account. It thereby allows for continuous plastic yielding inside rate-weakening regions, which leads to aseismic slip. In contrast to the conventional RSF formulation, we relate slip velocities to strain rates and use an invariant formulation. Thus we do not require the a priori definition of infinitely thin, planar faults in a homogeneous elastic medium. With this new implementation of RSF, we succeed in producing consistent cycles of frictional instabilities. By changing the frictional parameters a and b, and the characteristic slip distance, we observe a transition from stable sliding to stick-slip behaviour. This transition is in general agreement with predictions from theoretical estimates of the nucleation size, thereby to first order validating our implementation. By incorporating adaptive time-stepping based on a
New communication schemes based on adaptive synchronization
NASA Astrophysics Data System (ADS)
Yu, Wenwu; Cao, Jinde; Wong, Kwok-Wo; Lü, Jinhu
2007-09-01
In this paper, adaptive synchronization with unknown parameters is discussed for a unified chaotic system by using the Lyapunov method and the adaptive control approach. Some communication schemes, including chaotic masking, chaotic modulation, and chaotic shift-key strategies, are then proposed based on the modified adaptive method. The transmitted signal is masked by the chaotic signal or modulated into the system, which effectively blurs the constructed return map and can resist the return-map attack. The driving system, with unknown parameters and functions, is almost completely unknown to attackers, so it is more secure to apply this method to communication. Finally, some simulation examples based on the proposed communication schemes and some cryptanalysis are also given to verify the theoretical analysis in this paper.
Automatic Time Stepping with Global Error Control for Groundwater Flow Models
Tang, Guoping
2008-09-01
An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution, and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvements in accuracy and efficiency for groundwater flow models.
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed, leaving the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multi-resolution wavelets (WAV) (for the above flow features). These filter approaches also provide a natural and efficient way to minimize the Div(B) numerical error. The filter scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux
Crowder, D W; Onstad, D W; Cray, M E; Pierce, C M F; Hager, A G; Ratcliffe, S T; Steffey, K L
2005-04-01
Western corn rootworm, Diabrotica virgifera virgifera LeConte, has overcome crop rotation in several areas of the north central United States. The effectiveness of crop rotation for management of corn rootworm has begun to fail in many areas of the midwestern United States, thus new management strategies need to be developed to control rotation-resistant populations. Transgenic corn, Zea mays L., effective against western corn rootworm, may be the most effective new technology for control of this pest in areas with or without populations adapted to crop rotation. We expanded a simulation model of the population dynamics and genetics of the western corn rootworm for a landscape of corn; soybean, Glycine max (L.); and other crops to study the simultaneous development of resistance to both crop rotation and transgenic corn. Results indicate that planting transgenic corn to first-year cornfields is a robust strategy to prevent resistance to both crop rotation and transgenic corn in areas where rotation-resistant populations are currently a problem or may be a problem in the future. In these areas, planting transgenic corn only in continuous cornfields is not an effective strategy to prevent resistance to either trait. In areas without rotation-resistant populations, gene expression of the allele for resistance to transgenic corn, R, is the most important factor affecting the evolution of resistance. If R is recessive, resistance can be delayed longer than 15 yr. If R is dominant, resistance may be difficult to prevent. In a sensitivity analysis, results indicate that density dependence, rotational level in the landscape, and initial allele frequency are the three most important factors affecting the results.
An adaptive control scheme for coordinated multimanipulator systems
Jonghann Jean; Lichen Fu (Dept. of Electrical Engineering)
1993-04-01
The problem of adaptive coordinated control of multiple robot arms transporting an object is addressed. A stable adaptive control scheme for both trajectory tracking and internal force control is presented. Detailed analyses of the tracking properties of the object position, velocity, and the internal forces exerted on the object are given. It is shown that this control scheme can achieve satisfactory tracking performance without using measurements of the contact forces and their derivatives. The scheme can be realized in a decentralized implementation to reduce the computational burden, and efficient adaptive control strategies can be incorporated to further reduce the computational complexity.
Adaptable Iterative and Recursive Kalman Filter Schemes
NASA Technical Reports Server (NTRS)
Zanetti, Renato
2014-01-01
Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where the number of recursions N is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
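The iterated measurement update underlying the IKF can be sketched for a scalar state as follows. This is a hedged illustration, not the paper's algorithm: the quadratic measurement model, noise values, and fixed iteration count are made-up choices, whereas the paper's contribution is precisely to adapt the number of recursions.

```python
def iterated_kalman_update(x0, P0, z, h, h_jac, R, n_iter=8):
    """Iterated Kalman (IEKF / Gauss-Newton) measurement update for a
    scalar state: relinearize the measurement model h about the current
    iterate on each of the n_iter passes (n_iter >= 1)."""
    x = x0
    for _ in range(n_iter):
        H = h_jac(x)                      # relinearize about the iterate
        S = H * P0 * H + R                # innovation variance
        K = P0 * H / S                    # Kalman gain
        x = x0 + K * (z - h(x) - H * (x0 - x))
    P = (1.0 - K * H) * P0                # covariance from the final gain
    return x, P

# Usage: quadratic measurement z = x**2 + noise; iterating refines the
# single-pass EKF estimate toward the MAP solution near x = 1.
x_hat, P = iterated_kalman_update(1.2, 0.25, 1.0,
                                  lambda x: x * x, lambda x: 2.0 * x, 0.01)
```

With `n_iter=1` this reduces to the standard EKF update; an adaptive variant would stop iterating once successive iterates stop changing.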
A discrete-time adaptive control scheme for robot manipulators
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
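A single equidistribution pass of the kind studied above can be sketched as follows. This is an illustrative de Boor-style redistribution under an arc-length monitor function; the monitor, the parameter `alpha`, and the tanh test profile are assumptions, not taken from the paper:

```python
import bisect
import math

def equidistribute(x, u, alpha=1.0):
    """One de Boor-style equidistribution pass: move the interior mesh
    points so that every cell carries an equal share of the arc-length
    monitor M(u) = sqrt(1 + alpha * u_x**2)."""
    n = len(x)
    c = [0.0]                       # cumulative integral of the monitor
    for i in range(n - 1):
        ux = (u[i + 1] - u[i]) / (x[i + 1] - x[i])
        c.append(c[-1] + (x[i + 1] - x[i]) * (1.0 + alpha * ux * ux) ** 0.5)
    new_x = [x[0]]
    for k in range(1, n - 1):
        target = c[-1] * k / (n - 1)            # equal share per cell
        j = min(max(bisect.bisect_left(c, target) - 1, 0), n - 2)
        frac = (target - c[j]) / (c[j + 1] - c[j])
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    new_x.append(x[-1])
    return new_x

# Usage: a steep tanh front; points migrate toward the layer at x = 0.5
xs = [i / 20.0 for i in range(21)]
us = [math.tanh(20.0 * (xi - 0.5)) for xi in xs]
new_xs = equidistribute(xs, us)
```

Coupling such a pass to a discretized PDE at every time step is exactly the situation in which the spurious dynamics discussed in the abstract can arise.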
Extrapolated implicit-explicit time stepping.
Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.
2010-01-01
This paper constructs extrapolated implicit-explicit time stepping methods that allow one to efficiently solve problems with both stiff and nonstiff components. The proposed methods are based on Euler steps and can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. Implicit-explicit schemes based on extrapolation are simple to construct, easy to implement, and straightforward to parallelize. This work establishes the existence of perturbed asymptotic expansions of global errors, explains the convergence orders of these methods, and studies their linear stability properties. Numerical results with stiff ODE, DAE, and PDE test problems confirm the theoretical findings and illustrate the potential of these methods to solve multiphysics multiscale problems.
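The construction can be illustrated with the simplest member of the family: an IMEX Euler base method, implicit in a linear stiff term and explicit in the nonstiff forcing, improved by one level of Richardson extrapolation. This is a sketch under assumed test-problem choices, not the paper's full high-order method:

```python
import math

def imex_euler(y0, t0, t_end, n, lam, g):
    """n IMEX Euler steps for y' = lam*y + g(t): implicit (backward
    Euler) in the linear stiff term lam*y, explicit in g."""
    dt = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = (y + dt * g(t)) / (1.0 - dt * lam)
        t += dt
    return y

def imex_extrapolated(y0, t0, t_end, n, lam, g):
    """One level of Richardson extrapolation of the IMEX Euler base
    method: combining the n-step and 2n-step sweeps cancels the O(dt)
    term of the global error."""
    coarse = imex_euler(y0, t0, t_end, n, lam, g)
    fine = imex_euler(y0, t0, t_end, 2 * n, lam, g)
    return 2.0 * fine - coarse

# Stiff test problem y' = -1000*(y - cos t) - sin t, exact solution cos t
g = lambda t: 1000.0 * math.cos(t) - math.sin(t)
y_plain = imex_euler(1.0, 0.0, 1.0, 100, -1000.0, g)
y_extrap = imex_extrapolated(1.0, 0.0, 1.0, 100, -1000.0, g)
```

The two sweeps are independent, which is the source of the easy parallelism mentioned in the abstract; higher orders come from deeper extrapolation tableaus.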
Adaptive IMEX schemes for high-order unstructured methods
NASA Astrophysics Data System (ADS)
Vermeire, Brian C.; Nadarajah, Siva
2015-01-01
We present an adaptive implicit-explicit (IMEX) method for use with high-order unstructured schemes. The proposed method makes use of the Gerschgorin theorem to conservatively estimate the influence of each individual degree of freedom on the spectral radius of the discretization. This information is used to split the system into implicit and explicit regions, adapting to unsteady features in the flow. We dynamically repartition the domain to balance the number of implicit and explicit elements per core. As a consequence, we are able to achieve an even load balance for each implicit/explicit stage of the IMEX scheme. We investigate linear advection-diffusion, isentropic vortex advection, unsteady laminar flow over an SD7003 airfoil, and turbulent flow over a circular cylinder. Results show that the proposed method consistently yields a stable discretization, and maintains the theoretical order of accuracy of the high-order spatial schemes.
An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery.
Leng, Xiangguang; Ji, Kefeng; Zhou, Shilin; Xing, Xiangwei; Zou, Huanxin
2016-01-01
With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way. PMID:27563902
Adaptive PCA based fault diagnosis scheme in imperial smelting process.
Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin
2014-09-01
In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in a real process. A fault isolation approach is further developed based on the generalized likelihood ratio (GLR) test and singular value decomposition (SVD), with which offset and scaling faults can be easily isolated via the explicit offset fault direction and scaling fault classification. The identification of offset and scaling faults is also addressed. The complete scheme of the PCA-based fault diagnosis procedure is proposed. The scheme is applied to the imperial smelting process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma, and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on a transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
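The implicit midpoint (IM) integrator discussed above can be sketched with a fixed-point solve of the implicit equation. This is an illustrative minimal version under assumptions: a planar rotation field stands in for a magnetic field, and the iteration counts and tolerances are made up.

```python
def implicit_midpoint(field, x, dt, iters=50, tol=1e-14):
    """One implicit-midpoint (IM) step x' = x + dt * B((x + x') / 2) for
    a field line dx/ds = B(x); the implicit equation is solved by
    fixed-point iteration (contractive for small enough dt)."""
    xn = [xi + dt * fi for xi, fi in zip(x, field(x))]   # Euler predictor
    for _ in range(iters):
        mid = [(a + b) / 2.0 for a, b in zip(x, xn)]
        new = [xi + dt * fi for xi, fi in zip(x, field(mid))]
        done = max(abs(a - b) for a, b in zip(new, xn)) < tol
        xn = new
        if done:
            break
    return xn

# Usage: the solenoidal rotation field B = (-y, x); IM preserves the
# circular invariant |x| essentially to round-off over many steps.
B = lambda p: (-p[1], p[0])
p = [1.0, 0.0]
for _ in range(1000):
    p = implicit_midpoint(B, p, 0.1)
```

For this linear skew-symmetric field the IM map is a Cayley transform, so the radius is an exact invariant of the scheme; the abstract's point is that such good behavior can be lost when reversibility of the field is broken or the step size is varied naively.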
Margul, Daniel T; Tuckerman, Mark E
2016-05-10
Molecular dynamics remains one of the most widely used computational tools in the theoretical molecular sciences to sample an equilibrium ensemble distribution and/or to study the dynamical properties of a system. The efficiency of a molecular dynamics calculation is limited by the size of the time step that can be employed, which is dictated by the highest frequencies in the system. However, many properties of interest are connected to low-frequency, long time-scale phenomena, requiring many small time steps to capture. This ubiquitous problem can be ameliorated by employing multiple time-step algorithms, which assign different time steps to forces acting on different time scales. In such a scheme, fast forces are evaluated more frequently than slow forces, and as the former are often computationally much cheaper to evaluate, the savings can be significant. Standard multiple time-step approaches are limited, however, by resonance phenomena, wherein motion on the fastest time scales limits the step sizes that can be chosen for the slower time scales. In atomistic models of biomolecular systems, for example, the largest time step is typically limited to around 5 fs. Previously, we introduced an isokinetic extended phase-space algorithm (Minary et al. Phys. Rev. Lett. 2004, 93, 150201) and its stochastic analog (Leimkuhler et al. Mol. Phys. 2013, 111, 3579) that eliminate resonance phenomena through a set of kinetic energy constraints. In simulations of a fixed-charge flexible model of liquid water, for example, the time step that could be assigned to the slow forces approached 100 fs. In this paper, we develop a stochastic isokinetic algorithm for multiple time-step molecular dynamics calculations using a polarizable model based on fluctuating dipoles. The scheme developed here employs two sets of induced dipole moments, specifically, those associated with short-range interactions and those associated with a full set of interactions. The scheme is demonstrated on
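The multiple time-step idea described above can be sketched as a reversible RESPA-style integrator that wraps inner velocity-Verlet substeps of the fast force between half-kicks of the slow force. This is a generic illustration, not the authors' stochastic isokinetic algorithm; the split harmonic forces and step sizes are assumptions:

```python
def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
    """One reversible multiple-time-step (RESPA-style) step: a half-kick
    from the slow force, n_inner velocity-Verlet substeps under the fast
    force, then a closing half-kick from the slow force."""
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m
    return x, v

# Usage: stiff spring (fast) plus weak spring (slow); the slow force is
# evaluated 10x less often than the fast one.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, 1.0, lambda q: -100.0 * q, lambda q: -q,
                      dt=0.05, n_inner=10)
```

The resonance limitation discussed in the abstract appears when the outer step dt approaches half the fast period; the isokinetic constraints of the paper are precisely what lifts that ceiling.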
Towards Adaptive High-Resolution Images Retrieval Schemes
NASA Astrophysics Data System (ADS)
Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.
2016-06-01
Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining of large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of image have been proposed. They differ mainly in the type of features extracted. As these features are supposed to efficiently represent the query image, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is somewhat or very structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting the texture of the image will prove more effective. This calls for adaptive schemes. For this purpose, we propose to adapt the retrieval scheme to the nature of the image. This is achieved by performing a preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on the creation of bags of visual words from SIFT (Scale Invariant Feature Transform) descriptors, and those based on multi-scale feature extraction using wavelets and steerable pyramids.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficiently high-order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme replaces the usual exact single-state mean-value linearization of the flux divergence, typically used for the Euler equations of gasdynamics, with an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier, so that correct weak solutions are still obtained in the limit of mesh refinement. We then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and to general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities, together with multi-level mesh refinement, are provided to verify the analysis.
An Adaptive Motion Estimation Scheme for Video Coding
Gao, Yuan; Jia, Kebin
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the computational redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, based on the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme reduces ME time by up to 20.86% without compromising rate-distortion performance. PMID:24672313
Adaptive spatially dependent weighting scheme for tomosynthesis reconstruction
NASA Astrophysics Data System (ADS)
Levakhina, Yulia; Duschka, Robert; Vogt, Florian; Barkhausen, Jörg; Buzug, Thorsten M.
2012-03-01
Digital Tomosynthesis (DT) is an x-ray limited-angle imaging technique. Accurate image reconstruction in tomosynthesis is a challenging task due to the violation of the tomographic sufficiency conditions. The classical "shift-and-add" algorithm (simple backprojection) suffers from blurring artifacts produced by structures located above and below the plane of interest. The artifact problem becomes even more prominent in the presence of materials and tissues with high x-ray attenuation, such as bones, microcalcifications or metal. The focus of the current work is the reduction of ghosting artifacts produced by bones in musculoskeletal tomosynthesis. A novel dissimilarity concept and a modified backprojection with an adaptive spatially dependent weighting scheme (ωBP) are proposed. Simulated data of a software phantom, a structured hardware phantom, and human hand raw data acquired with a Siemens Mammomat Inspiration tomosynthesis system were reconstructed using the conventional backprojection algorithm and the new ωBP algorithm. Comparison of the results to the non-weighted case demonstrates the potential of the proposed weighted backprojection to reduce blurring artifacts in musculoskeletal DT. The proposed weighting scheme is not limited to the tomosynthesis limited-angle geometry; it can also be adapted for Computed Tomography (CT) and included in iterative reconstruction algorithms (e.g., SART).
Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems
NASA Astrophysics Data System (ADS)
Oden, J. T.
1992-08-01
This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of whom are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and 2 dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.
Optimal time step for incompressible SPH
NASA Astrophysics Data System (ADS)
Violeau, Damien; Leroy, Agnès
2015-05-01
A classical incompressible algorithm for Smoothed Particle Hydrodynamics (ISPH) is analyzed in terms of the critical time step for numerical stability. For this purpose, a theoretical linear stability analysis is conducted for unbounded homogeneous flows, leading to an analytical formula for the maximum CFL (Courant-Friedrichs-Lewy) number as a function of the Fourier number. This gives the maximum time step as a function of the fluid viscosity, the flow velocity scale and the SPH discretization size (kernel standard deviation). Importantly, the maximum CFL number at large Reynolds number is about half that of the traditional Weakly Compressible (WCSPH) approach. As a consequence, the optimal time step for ISPH is only five times larger than with WCSPH. The theory agrees very well with numerical data for two usual kernels in a 2-D periodic flow. On the other hand, numerical experiments in a plane Poiseuille flow show that the theory overestimates the maximum allowed time step for small Reynolds numbers.
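The stability limits discussed above can be sketched generically: the admissible time step is the minimum of an advective (CFL) limit and a viscous (Fourier) limit. The safety coefficients below are placeholder values for illustration, not the ISPH constants derived in the paper.

```python
def max_time_step(u_max, nu, h, c_cfl=0.25, c_fourier=0.125):
    """Largest stable time step as the minimum of an advective (CFL)
    limit and a viscous (Fourier) limit.

    u_max : flow velocity scale
    nu    : kinematic viscosity
    h     : discretization size (e.g. SPH kernel standard deviation)
    c_cfl, c_fourier : scheme-dependent safety coefficients
                       (hypothetical values, not the paper's ISPH limits)
    """
    dt_advective = c_cfl * h / u_max      # CFL condition
    dt_viscous = c_fourier * h * h / nu   # Fourier (diffusion) condition
    return min(dt_advective, dt_viscous)
```

At large Reynolds number the advective limit dominates; at small Reynolds number (as in the Poiseuille test above) the viscous limit takes over.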
An adaptive identification and control scheme for large space structures
NASA Technical Reports Server (NTRS)
Carroll, J. V.
1988-01-01
A unified identification and control scheme capable of achieving space platform performance objectives under nominal or failure conditions is described. Preliminary results are also presented, showing that the methodology offers much promise for effective, robust control of large space structures. The control method is a multivariable, adaptive, output predictive controller called Model Predictive Control (MPC). MPC uses a state space model and input reference trajectories of set or tracking points to adaptively generate optimum commands. For a fixed model, MPC processes commands with great efficiency, and is also highly robust. A key feature of MPC is its ability to control either nonminimum-phase or open-loop unstable systems. As an output controller, MPC does not explicitly require full state feedback, as most multivariable (e.g., Linear Quadratic) methods do. Its features are very useful in LSS operations, as they allow non-collocated actuators and sensors. The identification scheme is based on canonical variate analysis (CVA) of input and output data. The CVA technique is particularly suited to the measurement and identification of structural dynamic processes - that is, unsteady transient or dynamically interacting processes, such as between aerodynamics and structural deformation - from short, noisy data. CVA is structured so that the identification can be done in real or near-real time, using computationally stable algorithms. Modeling LSS dynamics in 1-g laboratories has always been a major impediment not only to understanding their behavior in orbit, but also to controlling it. In cases where the theoretical model is not confirmed, current methods provide few clues concerning additional dynamical relationships that are not included in the theoretical models. CVA needs no a priori model data or structure; all statistically significant dynamical states are determined using natural, entropy-based methods. Heretofore, a major limitation in applying adaptive
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities.
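The observed convergence rate quoted above (0.4 versus the expected 1.0) is obtained by comparing errors at successively refined time steps. A minimal sketch of how such an observed order is computed, using forward Euler on y' = -y as a stand-in for a model run (the CAM5 setup itself is far more involved):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence rate p from errors at dt and dt / refinement:
    if err ~ C * dt**p, then p = log(err_coarse / err_fine) / log(refinement)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

def euler_error(dt, t_end=1.0):
    """Global error of forward Euler on y' = -y, y(0) = 1, at t = t_end."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y)
    return abs(y - math.exp(-t_end))

p = observed_order(euler_error(0.01), euler_error(0.005))
# p is close to 1, the formal order of forward Euler
```

A full-physics model run in which p comes out near 0.4 instead of 1.0, as in the paper, signals that some component (here, the stratiform cloud schemes) degrades the formal accuracy of the time stepping.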
Robust discretizations versus increase of the time step for the Lorenz system
NASA Astrophysics Data System (ADS)
Letellier, Christophe; Mendes, Eduardo M. A. M.
2005-03-01
When continuous systems are discretized, their solutions depend on the time step chosen a priori. Such solutions are not necessarily spurious in the sense that they can still correspond to a solution of the differential equations but with a displacement in the parameter space. Consequently, it is of great interest to obtain discrete equations which are robust even when the discretization time step is large. In this paper, different discretizations of the Lorenz system are discussed versus the values of the discretization time step. It is shown that the sets of difference equations proposed are more robust versus increases of the time step than conventional discretizations built with standard schemes such as the forward Euler, backward Euler, or centered finite difference schemes. The nonstandard schemes used here are Mickens' scheme and Monaco and Normand-Cyrot's scheme.
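To illustrate why robustness to large discretization time steps matters, the sketch below integrates the Lorenz system with the plain forward Euler scheme at a small and a large step; the large step produces a divergent (spurious) solution. This is only the baseline behavior the paper improves on; Mickens' and Monaco and Normand-Cyrot's nonstandard schemes are not implemented here.

```python
def lorenz_euler(dt, n_steps, state=(1.0, 1.0, 1.0),
                 sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler.
    Returns (final_state, blew_up)."""
    x, y, z = state
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if max(abs(x), abs(y), abs(z)) > 1e6:
            return (x, y, z), True   # solution has left the attractor region
    return (x, y, z), False
```

With dt = 0.01 the trajectory stays on the attractor; with dt = 0.5 it diverges within a few steps, which is exactly the regime where the nonstandard discretizations studied in the paper remain robust.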
Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme
NASA Astrophysics Data System (ADS)
Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi
2004-02-01
A new class of body-fitted grid system that can keep third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like the beads of an abacus (soroban in Japanese). The length of each line and the number of grid points in each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on locally refined meshes are easily performed. Mesh generation and searching for the upstream departure point are very simple, and almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.
Attitude determination using an adaptive multiple model filtering Scheme
NASA Astrophysics Data System (ADS)
Lam, Quang; Ray, Surendra N.
1995-05-01
Attitude determination has long been a topic of active research and remains of enduring interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was used to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation': the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is based on recent results in the interacting multiple model design framework for handling unknown system noise characteristics or statistics. Instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the advanced system identifier is further reinforced by a learning system, implemented (in the outer loop) using neural networks to identify other unknown
Low color distortion adaptive dimming scheme for power efficient LCDs
NASA Astrophysics Data System (ADS)
Nam, Hyoungsik; Song, Eun-Ji
2013-06-01
This paper presents a color compensation algorithm to reduce the color distortion caused by mismatches between the reference gamma value of the dimming algorithm and the display gamma values of the LCD panel in a low-power adaptive dimming scheme. In 2010, we presented the YrYgYb algorithm, which used display gamma values extracted from the luminance data of the red, green, and blue sub-pixels (Yr, Yg, and Yb), with simulation results. It was based on an ideal panel model in which the color coordinates were held fixed over the gray levels. In contrast, this work introduces an XrYgZb color compensation algorithm which obtains the display gamma values of red, green, and blue from the tri-stimulus data Xr, Yg, and Zb, to further reduce color distortion. Both simulation and measurement results confirm that the XrYgZb algorithm outperforms the previous YrYgYb algorithm. In simulations conducted with a practical model derived from measured data, the XrYgZb scheme achieves lower maximum and average color difference values of 3.7743 and 0.6230 over 24 test images, compared to 4.864 and 0.7156 for YrYgYb. In measurements of a 19-inch LCD panel, the XrYgZb method also achieves smaller color difference values of 1.444072 and 5.588195 over 49 combinations of red, green, and blue data, compared to 1.50578 and 6.00403 for YrYgYb, at backlight dimming ratios of 0.85 and 0.4.
Adaptive codebook selection schemes for image classification in correlated channels
NASA Astrophysics Data System (ADS)
Hu, Chia Chang; Liu, Xiang Lian; Liu, Kuan-Fu
2015-09-01
The multiple-input multiple-output (MIMO) system, with transmit and receive antenna arrays, achieves diversity and array gains via transmit beamforming. In the absence of full channel state information (CSI) at the transmitter, the transmit beamforming vector can be quantized at the receiver and sent back to the transmitter over a low-rate feedback channel, called limited feedback beamforming. A key issue in Vector Quantization (VQ) is how to generate a good codebook such that the distortion between the original image and the reconstructed image is minimized. In this paper, a novel adaptive codebook selection scheme for image classification is proposed, taking both the spatial and the temporal correlation inherent in the channel into consideration. The new codebook selection algorithm selects two codebooks from among the discrete Fourier transform (DFT) codebook, the generalized Lloyd algorithm (GLA) codebook, and the Grassmannian codebook, which are combined and used as candidates for the original and reconstructed images in image transmission. The channel is estimated and divided into four regions based on its spatial and temporal correlation, and an appropriate codebook is adaptively assigned to each region. The proposed method can efficiently reduce the required feedback information under spatially and temporally correlated channels. Simulation results show that in the case of temporally and spatially correlated channels, the bit-error-rate (BER) performance can be improved substantially by the proposed algorithm compared to using only a single codebook.
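The generalized Lloyd algorithm (GLA) mentioned above is, in essence, k-means applied to codebook design: alternate nearest-codeword assignment and centroid updates. A minimal sketch with illustrative data (this is the standard GLA only, not the paper's channel-dependent codebook selection logic):

```python
import random

def generalized_lloyd(vectors, n_codewords, n_iters=20, seed=0):
    """Design a VQ codebook with the generalized Lloyd algorithm (GLA).
    `vectors` is a list of equal-length tuples of floats."""
    rng = random.Random(seed)
    codebook = [list(v) for v in rng.sample(vectors, n_codewords)]
    for _ in range(n_iters):
        # Assignment step: nearest codeword by squared Euclidean distance.
        clusters = [[] for _ in codebook]
        for v in vectors:
            dists = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in codebook]
            clusters[dists.index(min(dists))].append(v)
        # Update step: move each codeword to the centroid of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                codebook[i] = [sum(col) / len(cluster) for col in zip(*cluster)]
    return codebook
```

On two well-separated clusters of training vectors, the two codewords converge to the cluster centroids, which is the distortion-minimizing property the abstract refers to.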
Vectorizable algorithms for adaptive schemes for rapid analysis of SSME flows
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1987-01-01
An initial study into vectorizable algorithms for use in adaptive schemes for various types of boundary value problems is described. The focus is on two key aspects of adaptive computational methods which are crucial in the use of such methods (for complex flow simulations such as those in the Space Shuttle Main Engine): the adaptive scheme itself and the applicability of element-by-element matrix computations in a vectorizable format for rapid calculations in adaptive mesh procedures.
An adaptive nonlinear solution scheme for reservoir simulation
Lett, G.S.
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse-grid "effective" properties are costly to determine and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine-scale properties and automatically generates multiple levels of coarse-grid rock and fluid properties. The fine-grid properties and the coarse-grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradient-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
Finn, John M.
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
ERIC Educational Resources Information Center
Johnson, Burke; Strodl, Peter
This paper presents a sensitizing conceptual scheme for examining interpersonal adaptation in urban classrooms. The construct "interpersonal adaptation" is conceptualized as the interaction of individual/personality factors, interpersonal factors, and social/cultural factors. The model is applied to the urban school. The conceptual scheme asserts…
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
A method for improving time-stepping numerics
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-04-01
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
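The RA and RAW filters can be stated compactly. The sketch below applies leapfrog with the filter to a harmonic oscillator, where alpha = 1 recovers the classic Robert-Asselin filter and alpha around 0.53 is the RAW value; the filter formula follows Williams, but the demonstration setup (oscillator, parameter values) is illustrative.

```python
import math

def filtered_leapfrog_amplitude(nu=0.1, alpha=1.0, h=0.2, n_steps=500):
    """Integrate (u, v)' = (v, -u) with leapfrog plus the RA/RAW filter.
    alpha = 1.0 gives the Robert-Asselin filter; alpha ~ 0.53 gives the
    RAW filter. Returns the final amplitude (exact dynamics keep it at 1)."""
    f = lambda s: (s[1], -s[0])
    prev = (math.cos(h), math.sin(h))   # exact state at t = -h
    cur = (1.0, 0.0)                    # state at t = 0
    for _ in range(n_steps):
        k = f(cur)
        # Leapfrog step from the (filtered) old level.
        new = tuple(p + 2.0 * h * ki for p, ki in zip(prev, k))
        # Filter displacement d = (nu/2) (x[n-1] - 2 x[n] + x[n+1]).
        d = tuple(0.5 * nu * (p - 2.0 * c + n)
                  for p, c, n in zip(prev, cur, new))
        cur_filt = tuple(c + alpha * di for c, di in zip(cur, d))       # RA part
        new = tuple(n + (alpha - 1.0) * di for n, di in zip(new, d))    # RAW part
        prev, cur = cur_filt, new
    return math.hypot(*cur)
```

With alpha = 1 the amplitude decays noticeably over the run (the non-physical RA damping described above); with alpha = 0.53 it stays much closer to 1, illustrating the improved amplitude accuracy.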
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
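The fast/slow force splitting described above is the reversible RESPA idea: slow forces are applied as half-kicks at the outer step, while fast forces are integrated with an inner velocity-Verlet loop. A minimal one-particle sketch of the generic scheme (not the fragment-decomposition or range-separation splittings introduced in the paper):

```python
def respa_step(x, p, slow_force, fast_force, dt, n_inner, m=1.0):
    """One reversible RESPA (multiple time step) step for a single degree
    of freedom: slow force at the outer step dt, fast force integrated
    with velocity Verlet at dt / n_inner."""
    p += 0.5 * dt * slow_force(x)      # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):           # inner velocity-Verlet loop (fast force)
        p += 0.5 * h * fast_force(x)
        x += h * p / m
        p += 0.5 * h * fast_force(x)
    p += 0.5 * dt * slow_force(x)      # outer half-kick (slow force)
    return x, p
```

On a particle bound by a stiff spring (fast) plus a soft spring (slow), the scheme conserves energy well even though the slow force is evaluated only once per outer step, which is the source of the speedups reported above.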
Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets
NASA Technical Reports Server (NTRS)
Cheung, K-M.; Smyth, P.
1993-01-01
We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation.
Ilie, Silvana
2012-12-21
Stochastic modeling is essential for an accurate description of the biochemical network dynamics at the level of a single cell. Biochemically reacting systems often evolve on multiple time-scales, thus their stochastic mathematical models manifest stiffness. Stochastic models which are, in addition, stiff are computationally very challenging; hence the need for developing effective and accurate numerical methods for approximating their solution. An important stochastic model of well-stirred biochemical systems is the chemical Langevin equation. The chemical Langevin equation is a system of stochastic differential equations with multidimensional non-commutative noise. This model is valid in the regime of large molecular populations, far from the thermodynamic limit. In this paper, we propose a variable time-stepping strategy for the numerical solution of a general chemical Langevin equation, which applies for any level of randomness in the system. Our variable-stepsize method allows arbitrary values of the time-step. Numerical results on several models arising in applications show significant improvement in accuracy and efficiency of the proposed adaptive scheme over the existing methods: the strategies based on halving/doubling of the stepsize and the fixed-stepsize ones.
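As a simpler point of reference, the halving/doubling-style step control that such adaptive methods are compared against can be sketched for a scalar SDE (illustrative only; the function names, tolerances, and test drift/diffusion are assumptions, not the paper's method):

```python
import numpy as np

def adaptive_em(f, g, x0, t_end, dt0=1e-2, tol=1e-3, rng=None):
    """Euler-Maruyama for dX = f(X)dt + g(X)dW with step-doubling control:
    one coarse step is compared against two half steps driven by the same
    Brownian increments, and the step is accepted or halved accordingly."""
    rng = rng or np.random.default_rng(0)
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        if dt < 1e-14:                      # guard against float stagnation
            break
        dW1 = rng.normal(0.0, np.sqrt(dt / 2))
        dW2 = rng.normal(0.0, np.sqrt(dt / 2))
        x_coarse = x + f(x) * dt + g(x) * (dW1 + dW2)    # one step of size dt
        x_half = x + f(x) * dt / 2 + g(x) * dW1           # two half steps,
        x_fine = x_half + f(x_half) * dt / 2 + g(x_half) * dW2  # same noise
        err = abs(x_fine - x_coarse)
        if err <= tol or dt < 1e-12:        # accept, possibly grow the step
            t, x = t + dt, x_fine
            if err < tol / 4:
                dt *= 2.0
        else:                               # reject and halve
            dt /= 2.0
    return x
```

Reusing the same Brownian increments on the coarse and fine solves is what makes the pathwise error estimate meaningful.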
A Rate Adaptation Scheme According to Channel Conditions in Wireless LANs
NASA Astrophysics Data System (ADS)
Numoto, Daisuke; Inai, Hiroshi
Rate adaptation in wireless LANs is to select the most suitable transmission rate automatically according to the channel condition. If the channel condition is good, a station can choose a higher transmission rate; otherwise, it should choose a lower but noise-resistant transmission rate. Since IEEE 802.11 does not specify any rate adaptation scheme, several schemes have been proposed. However, those schemes provide low throughput or unfair transmission opportunities among stations, especially when the number of stations increases. In this paper, we propose a rate adaptation scheme under which the transmission rate quickly converges to, and then stays around, an optimum rate even in the presence of a large number of stations. Simulations show that our scheme provides higher throughput than existing ones and almost equal fairness.
An adaptive interpolation scheme for molecular potential energy surfaces
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
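A one-dimensional sketch of the idea (polyharmonic-spline interpolation plus greedy, residual-driven node refinement; the function names, the r^3 kernel choice, and the omission of the partition-of-unity blending are our simplifications, not the authors' algorithm):

```python
import numpy as np

def phs_interp(xi, yi, x):
    """Polyharmonic spline (phi(r) = r^3) interpolant through (xi, yi)."""
    A = np.abs(xi[:, None] - xi[None, :]) ** 3
    w, *_ = np.linalg.lstsq(A, yi, rcond=None)   # least squares for robustness
    return (np.abs(x[:, None] - xi[None, :]) ** 3) @ w

def adaptive_nodes(f, a, b, tol=1e-3, n0=5, n_max=60):
    """Greedy refinement: repeatedly add the candidate point with the largest
    interpolation residual until the estimated error drops below tol."""
    xi = np.linspace(a, b, n0)
    cand = np.linspace(a, b, 400)
    while len(xi) < n_max:
        err = np.abs(phs_interp(xi, f(xi), cand) - f(cand))
        k = int(np.argmax(err))
        if err[k] < tol:
            break
        xi = np.sort(np.append(xi, cand[k]))
    return xi
```

Sample points then cluster where the function is hard to interpolate, which is the effect the local error estimate is meant to produce.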
An adaptive additive inflation scheme for Ensemble Kalman Filters
NASA Astrophysics Data System (ADS)
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate of the forecast uncertainty in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent ones. The additive schemes rely on samples of the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
Adaptive Directional Multicast Scheme in mmWave WPANs with Directional Antennas
NASA Astrophysics Data System (ADS)
Shin, Kyungchul; Kim, Youngsun; Kang, Chul-Hee
This letter considers problems with an efficient link layer multicasting technique in a wireless personal area network environment using a directional antenna. First, we propose an adaptive directional multicast scheme (ADMS) for delay-sensitive applications in mmWave WPAN with directional antenna. Second, the proposed ADMS aims to improve throughput as well as satisfy the application-specific delay requirements. We evaluate the performances of legacy Medium Access Control, Life Centric Approach, and adaptive directional multicast schemes via QualNet 5.0. Our results show that the proposed scheme provides better performance in terms of total network throughput, average transmission time, packet delivery ratio and decodable frame ratio.
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
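The setting, different step sizes for different members of one system, can be illustrated with the simplest possible multirate scheme: forward Euler macro/micro stepping with the slow state frozen during the substeps (a sketch of the concept only, not one of the three approaches analysed in the report; all names and test values are ours):

```python
def multirate_euler(f_slow, f_fast, y_slow, y_fast, h, m, n_steps):
    """Multirate forward Euler: the slow component takes macro-steps h while
    the fast component takes m micro-steps of size h/m with the slow state
    frozen at its start-of-step value."""
    for _ in range(n_steps):
        y_slow_new = y_slow + h * f_slow(y_slow, y_fast)
        for _ in range(m):
            y_fast = y_fast + (h / m) * f_fast(y_slow, y_fast)
        y_slow = y_slow_new
    return y_slow, y_fast
```

The saving comes from evaluating the (expensive) slow derivative once per macro-step; the error-control difficulty the abstract mentions arises because the frozen slow state introduces a coupling error that is hard to estimate cheaply.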
An adaptive hierarchical sensing scheme for sparse signals
NASA Astrophysics Data System (ADS)
Schütze, Henry; Barth, Erhardt; Martinetz, Thomas
2014-02-01
In this paper, we present Adaptive Hierarchical Sensing (AHS), a novel adaptive hierarchical sensing algorithm for sparse signals. For a given but unknown signal with a sparse representation in an orthogonal basis, the sensing task is to identify its non-zero transform coefficients by performing only a few measurements. A measurement is simply the inner product of the signal and a particular measurement vector. During sensing, AHS partially traverses a binary tree and performs one measurement per visited node. AHS is adaptive in the sense that after each measurement a decision is made whether the entire subtree of the current node is further traversed or omitted, depending on the measurement value. In order to acquire an N-dimensional signal that is K-sparse, AHS performs O(K log N/K) measurements. With AHS, the signal is easily reconstructed by a basis transform without the need to solve an optimization problem. When sensing full-size images, AHS can compete with a state-of-the-art compressed sensing approach in terms of reconstruction performance versus number of measurements. Additionally, we simulate the sensing of image patches by AHS and investigate the impact of the choice of the sparse coding basis as well as the impact of the tree composition.
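The tree traversal at the heart of such a scheme can be sketched for the easy case of non-negative coefficients (a toy: real AHS measures inner products with aggregated measurement vectors and must cope with sign cancellation, which this sketch ignores):

```python
def ahs(measure, lo, hi, thresh):
    """Adaptive hierarchical sensing sketch.  `measure(lo, hi)` returns the
    inner product of the signal with the indicator of coefficients [lo, hi).
    Subtrees whose measurement magnitude stays below thresh are pruned."""
    if abs(measure(lo, hi)) < thresh:
        return {}                               # prune the whole subtree
    if hi - lo == 1:
        return {lo: measure(lo, hi)}            # leaf: a significant coefficient
    mid = (lo + hi) // 2
    out = ahs(measure, lo, mid, thresh)
    out.update(ahs(measure, mid, hi, thresh))
    return out
```

Only the subtrees containing energy are descended, which is how the O(K log N/K) measurement count arises for a K-sparse signal.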
Adaptive regularized scheme for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Tang, Sizhang; Shen, Chaomin; Zhang, Guixu
2016-06-01
We propose an adaptive regularized algorithm for remote sensing image fusion based on variational methods. In the algorithm, we integrate the inputs using a "grey world" assumption to achieve visual uniformity. We propose a fusion operator that can automatically select the total variation (TV)-L1 term for edges and L2-terms for non-edges. To implement our algorithm, we use the steepest descent method to solve the corresponding Euler-Lagrange equation. Experimental results show that the proposed algorithm achieves remarkable results.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_(n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_(n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
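The s = 2 case of the Radau IIA collocation method that these equivalences reduce to can be written down directly from its Butcher tableau (a sketch for the scalar linear test equation y' = lam*y; the helper name and test values are ours, not the paper's):

```python
import numpy as np

# Butcher tableau of the 2-stage Radau IIA method (order 3); by the
# equivalences above it is also the 2-point right-Radau collocation scheme.
A = np.array([[5 / 12, -1 / 12],
              [3 / 4,   1 / 4]])
b = np.array([3 / 4, 1 / 4])

def radau_iia_linear(lam, y0, h, n):
    """Apply the implicit RK method to y' = lam*y.  For this linear problem
    the stage equations (I - h*lam*A) k = lam*y*1 form a 2x2 linear solve."""
    y = y0
    I = np.eye(2)
    for _ in range(n):
        k = np.linalg.solve(I - h * lam * A, lam * y * np.ones(2))
        y = y + h * (b @ k)
    return y
```

For a nonlinear right-hand side the stage solve becomes a (small) nonlinear system, but the structure is identical.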
Design of adaptive steganographic schemes for digital images
NASA Astrophysics Data System (ADS)
Filler, Tomás; Fridrich, Jessica
2011-02-01
Most steganographic schemes for real digital media embed messages by minimizing a suitably defined distortion function. In practice, this is often realized by syndrome codes which offer near-optimal rate-distortion performance. However, the distortion functions are designed heuristically and the resulting steganographic algorithms are thus suboptimal. In this paper, we present a practical framework for optimizing the parameters of additive distortion functions to minimize statistical detectability. We apply the framework to digital images in both spatial and DCT domain by first defining a rich parametric model which assigns a cost of making a change at every cover element based on its neighborhood. Then, we present a practical method for optimizing the parameters with respect to a chosen detection metric and feature space. We show that the size of the margin between support vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend to be more secure w.r.t. blind steganalysis. The parameters obtained by the Nelder-Mead simplex-reflection algorithm for spatial and DCT-domain images are presented and the new embedding methods are tested by blind steganalyzers utilizing various feature sets. Experimental results show that as few as 80 images are sufficient for obtaining good candidates for parameters of the cost model, which allows us to speed up the parameter search.
A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm
Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah
2015-01-01
A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol versus electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is better by 10% than existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme
NASA Astrophysics Data System (ADS)
Hickmann, K. S.; Godinez, H. C.
2015-12-01
When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
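The balancing idea, matching the sample innovation statistics against what the ensemble predicts, can be sketched in scalar form (our own simplification; the work above tunes a scale-dependent inflation vector, not a single scalar, and the function name is hypothetical):

```python
import numpy as np

def adapt_inflation(innovations, hphts, r, rho_min=1.0):
    """Innovation-based tuning: choose an inflation factor rho so that, on
    average, the sample innovation variance matches rho * HPH^T + R.
    `innovations` are obs-minus-forecast values, `hphts` the ensemble forecast
    variances in observation space, `r` the observation-error variance."""
    s2 = np.mean(np.asarray(innovations) ** 2)
    rho = (s2 - r) / np.mean(hphts)
    return max(rho_min, rho)     # never deflate below rho_min
```

If the innovations are systematically larger than HPH^T + R explains, the ensemble spread is too small and rho rises above one.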
Adaptive fuzzy sliding mode control scheme for uncertain systems
NASA Astrophysics Data System (ADS)
Noroozi, Navid; Roopaei, Mehdi; Jahromi, M. Zolghadri
2009-11-01
Most physical systems inherently contain nonlinearities which are commonly unknown to the system designer. Therefore, in modeling and analysis of such dynamic systems, one needs to handle unknown nonlinearities and/or uncertain parameters. This paper proposes a new adaptive tracking fuzzy sliding mode controller for a class of nonlinear systems in the presence of uncertainties and external disturbances. The main contribution of the proposed method is that the structure of the controlled system is partially unknown and does not require the bounds of uncertainty and disturbance of the system to be known; meanwhile, the chattering phenomenon that frequently appears in the conventional variable structure systems is also eliminated without deteriorating the system robustness. The performance of the proposed approach is evaluated for two well-known benchmark problems. The simulation results illustrate the effectiveness of our proposed controller.
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Wei; Zhang, Zhenguo; Chen, Xiaofei
2015-07-01
A discontinuous grid finite-difference (FD) method with a non-uniform time step Runge-Kutta scheme on a curvilinear collocated grid is developed for seismic wave simulation. We introduce two transition zones, a spatial one and a temporal one, to exchange the wavefield across the spatial and temporal discontinuous interfaces. A Gaussian filter is applied to suppress artificial numerical noise caused by down-sampling the wavefield from the finer grid to the coarser grid. We adapt the non-uniform time step Runge-Kutta scheme to the discontinuous grid FD method to further increase the computational efficiency without losing the accuracy of time marching through the whole simulation region. When topography is included in the modelling, we carry out the discontinuous grid method on a curvilinear collocated grid to obtain a sufficiently accurate free-surface boundary condition implementation. Numerical tests show that the proposed method accurately simulates seismic wave propagation on such grids and significantly reduces computational resource consumption with respect to regular grids.
ERIC Educational Resources Information Center
Norman, D. A.; And Others
"Machine controlled adaptive training is a promising concept. In adaptive training the task presented to the trainee varies as a function of how well he performs. In machine controlled training, adaptive logic performs a function analogous to that performed by a skilled operator." This study looks at the ways in which gain-effective time constant…
NASA Astrophysics Data System (ADS)
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) previously using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species
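The DRGEP species selection underlying such DAC schemes can be sketched as a graph search that keeps every species whose path-product coupling (R-value) to a search-initiating species exceeds a tolerance (a generic sketch of the DRGEP idea, not the ERC KIVA implementation; the dictionary graph format is our assumption):

```python
import heapq

def drgep_reduce(graph, targets, eps):
    """graph[a][b] in [0, 1] is the direct interaction coefficient of species
    a with species b.  A species is kept if the maximum product of
    coefficients along some path from a target (its R-value) is >= eps."""
    keep = {}
    for t in targets:
        best = {t: 1.0}
        heap = [(-1.0, t)]                      # max-product Dijkstra search
        while heap:
            neg_r, s = heapq.heappop(heap)
            r = -neg_r
            if r < best.get(s, 0.0):
                continue                        # stale queue entry
            for nbr, c in graph.get(s, {}).items():
                r_new = r * c
                if r_new >= eps and r_new > best.get(nbr, 0.0):
                    best[nbr] = r_new
                    heapq.heappush(heap, (-r_new, nbr))
        for s, r in best.items():
            keep[s] = max(keep.get(s, 0.0), r)
    return {s for s, r in keep.items() if r >= eps}
```

Because coefficients lie in [0, 1], products only shrink along a path, so weakly coupled subgraphs are pruned quickly; this is what makes on-the-fly reduction at every cell and time step affordable.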
A Self-Adaptive Behavior-Aware Recruitment Scheme for Participatory Sensing.
Zeng, Yuanyuan; Li, Deshi
2015-01-01
Participatory sensing services utilizing the abundant social participants with sensor-enabled handheld smart device resources are gaining high interest nowadays. One of the challenges faced is the recruitment of participants by fully utilizing their daily activity behavior with self-adaptiveness toward the realistic application scenarios. In the paper, we propose a self-adaptive behavior-aware recruitment scheme for participatory sensing. People are assumed to join the sensing tasks along with their daily activity without pre-defined ground truth or any instructions. The scheme is proposed to model the tempo-spatial behavior and data quality rating to select participants for the participatory sensing campaign. Based on this, the recruitment is formulated as a linear programming problem by considering tempo-spatial coverage, data quality, and budget. The scheme enables one to check and adjust the recruitment strategy adaptively according to application scenarios. The evaluations show that our scheme provides efficient sensing performance in terms of stability, low cost, tempo-spatial correlation, and self-adaptiveness. PMID:26389910
Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC
NASA Astrophysics Data System (ADS)
Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo
We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of spatial parameters. Experiments show that it achieves significant lossless bit reductions of 9.93% to 12.14% for spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.
A stable interface element scheme for the p-adaptive lifting collocation penalty formulation
NASA Astrophysics Data System (ADS)
Cagnone, J. S.; Nadarajah, S. K.
2012-02-01
This paper presents a procedure for adaptive polynomial refinement in the context of the lifting collocation penalty (LCP) formulation. The LCP scheme is a high-order unstructured discretization method unifying the discontinuous Galerkin, spectral volume, and spectral difference schemes in a single differential formulation. Due to the differential nature of the scheme, the treatment of inter-cell fluxes for spatially varying polynomial approximations is not straightforward. Specially designed elements are proposed to tackle non-conforming polynomial approximations. These elements are constructed such that a conforming interface between polynomial approximations of different degrees is recovered. The stability and conservation properties of the scheme are analyzed and various inviscid compressible flow calculations are performed to demonstrate the potential of the proposed approach.
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model
Displacement in the parameter space versus spurious solution of discretization with large time step
NASA Astrophysics Data System (ADS)
Mendes, Eduardo; Letellier, Christophe
2004-01-01
In order to investigate a possible correspondence between differential and difference equations, it is important to possess discretizations of ordinary differential equations. It is well known that when differential equations are discretized, the solution thus obtained depends on the time step used. In the majority of cases, such a solution is considered spurious when it does not resemble the expected solution of the differential equation. This often happens when the time step taken into consideration is too large. In this work, we show that, even for quite large time steps, some solutions which do not correspond to the expected ones are still topologically equivalent to solutions of the original continuous system if a displacement in the parameter space is considered. To reduce such a displacement, a judicious choice of the discretization scheme should be made. To this end, a recent discretization scheme, based on the Lie expansion of the original differential equations, proposed by Monaco and Normand-Cyrot, will be analysed. Such a scheme will be shown to be sufficient for providing an adequate discretization for quite large time steps compared to the pseudo-period of the underlying dynamics.
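The flavour of such Lie-expansion-based discretizations can be illustrated by keeping one more term of the expansion of the flow than forward Euler does (a generic second-order Taylor-Lie step, not the exact Monaco and Normand-Cyrot scheme; the function names are ours):

```python
import numpy as np

def euler_step(f, x, h):
    """One forward-Euler step for x' = f(x): the first-order truncation."""
    return x + h * f(x)

def lie2_step(f, jac, x, h):
    """Second-order truncation of the Lie (Taylor) expansion of the flow:
    x_{k+1} = x_k + h f + (h^2/2) J_f f.  For a given large h this stays far
    closer to the continuous dynamics than the Euler truncation."""
    return x + h * f(x) + 0.5 * h ** 2 * jac(x) @ f(x)
```

On the linear rotation system x' = (x2, -x1), for example, the second-order step preserves both amplitude and phase much more accurately at the same step size, which is the sense in which a better discretization scheme reduces the displacement in parameter space.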
Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow
NASA Astrophysics Data System (ADS)
Wood, William Alfred, III
production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting-mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing, since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.
NASA Astrophysics Data System (ADS)
den, M.; Yamashita, K.; Ogawa, T.
Three-dimensional (3D) hydrodynamic (HD) and magnetohydrodynamic (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution in other areas. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.
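The placement of fine grids over areas of interest can be illustrated with a minimal refinement-flagging sketch. The gradient criterion and threshold below are assumptions for illustration only, not the authors' actual refinement test:

```python
import numpy as np

def refine_flags(u, dx, threshold):
    """Flag cells for refinement where the solution gradient is steep.

    Minimal sketch of the AMR idea described above: fine grids are
    placed over areas of interest (e.g. shock waves) detected here by a
    simple gradient criterion (an illustrative assumption).
    """
    grad = np.abs(np.gradient(u, dx))
    return grad > threshold

# A step profile: only the cells adjacent to the jump are flagged,
# so fine grids would be placed there and coarse grids elsewhere.
x = np.linspace(0.0, 1.0, 101)
u = np.where(x < 0.5, 1.0, 0.0)
flags = refine_flags(u, x[1] - x[0], threshold=1.0)
```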
An adaptive handover prediction scheme for seamless mobility based wireless networks.
Sadiq, Ali Safa; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless-mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the access point (AP) prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength (RSS), the mobile node's (MN's) relative direction towards the APs in the vicinity, and AP load, are collected and used as inputs to the fuzzy decision-making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, based on a quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients: the mean and standard deviation of the normalized network prediction metrics collected from the available WLANs are obtained adaptively and applied as statistical information to adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed independently in each MN once the RSS, direction toward APs, and AP load are known. Finally, performance evaluation of the proposed scheme shows its superiority over representative prediction approaches.
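The adaptive-coefficient idea can be sketched as follows: instead of fixed weights on raw metrics, the mean and standard deviation across the visible APs normalize each input before scoring. The metric names and the linear scoring rule are illustrative assumptions; the paper uses a full fuzzy inference system rather than this simple weighted sum:

```python
import statistics

def select_ap(aps, weights):
    """Score candidate APs with adaptively normalized metrics.

    Sketch of the adaptation idea above: per-metric statistics are
    recomputed from the currently visible WLANs, so the coefficients
    adapt to the environment rather than staying fixed.
    """
    metrics = ["rss", "direction", "load"]
    stats = {}
    for m in metrics:
        vals = [ap[m] for ap in aps]
        mu = statistics.fmean(vals)
        sigma = statistics.pstdev(vals) or 1.0   # avoid divide-by-zero
        stats[m] = (mu, sigma)

    def score(ap):
        s = 0.0
        for m in metrics:
            mu, sigma = stats[m]
            z = (ap[m] - mu) / sigma
            # AP load is a cost, so its contribution is negated.
            s += weights[m] * (-z if m == "load" else z)
        return s

    return max(aps, key=score)

# Hypothetical candidate APs (RSS in dBm, direction as alignment in
# [0, 1], load as utilization in [0, 1]).
aps = [
    {"name": "AP1", "rss": -60, "direction": 0.9, "load": 0.7},
    {"name": "AP2", "rss": -55, "direction": 0.8, "load": 0.2},
    {"name": "AP3", "rss": -80, "direction": 0.1, "load": 0.9},
]
best = select_ap(aps, {"rss": 0.5, "direction": 0.3, "load": 0.2})
```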
NASA Astrophysics Data System (ADS)
Chen, Ying; Shen, Jie
2016-03-01
In this paper we develop a fully adaptive, energy stable scheme for the Cahn-Hilliard-Navier-Stokes system, a phase-field model for two-phase incompressible flows consisting of a Cahn-Hilliard-type diffusion equation coupled with a Navier-Stokes equation. The scheme, which is decoupled and unconditionally energy stable based on stabilization, combines adaptive meshing, adaptive time stepping, and a nonlinear multigrid finite difference method. Numerical experiments validate the scheme for problems with both matched and non-matched densities, and demonstrate that CPU time can be significantly reduced with our adaptive approach.
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) Runge-Kutta (RK) time-stepping schemes, which increases numerical stability with respect to the time step size and thereby decreases computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain, a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to one-dimensional (1D) and two-dimensional (2D) problems: the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in the stable time step of 65 times that of the fully explicit scheme is demonstrated with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. The domain partitioning method in this work also shows potential for breaking the computational domain into manageable sizes, such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solution methods rather than the standard iterative methods currently used.
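The basic stability benefit of implicit-explicit splitting can be seen on a scalar model problem. This first-order IMEX Euler sketch is an illustrative assumption, far simpler than the paper's fourth-order 6-stage additive RK schemes, but it shows the same principle: treat the stiff term implicitly and the nonstiff term explicitly, so the stiff term no longer limits the time step:

```python
def imex_euler(u0, lam_ex, lam_im, dt, steps):
    """First-order IMEX Euler for the scalar test u' = lam_ex*u + lam_im*u.

    The stiff coefficient lam_im (standing in for diffusion) is treated
    implicitly; the nonstiff lam_ex (standing in for advection) stays
    explicit. This model problem is an illustrative assumption, not the
    paper's Burgers'/advection test cases.
    """
    u = u0
    for _ in range(steps):
        # Explicit update of the nonstiff part, implicit solve for the
        # stiff part: u_new = (u + dt*lam_ex*u) / (1 - dt*lam_im).
        u = (u + dt * lam_ex * u) / (1.0 - dt * lam_im)
    return u

# A fully explicit Euler step blows up once dt*|lam_im| > 2; the IMEX
# step remains stable at this dt and decays smoothly toward zero.
u_end = imex_euler(1.0, lam_ex=-1.0, lam_im=-1000.0, dt=0.01, steps=100)
```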
Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly localized region, because almost all mesh adaption is then confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Zhu, Chuan; Wang, Yao; Han, Guangjie; Rodrigues, Joel J P C; Lloret, Jaime
2014-01-01
This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location-predictive and time-adaptive data gathering scheme is proposed. We introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. Using their local clocks and these formulas, nodes in the network can accurately calculate the current location of the mobile sink and route data packets toward it in a timely manner by multihop relay. Considering that the volume of data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less transmission delay and balances energy consumption among nodes. PMID:25302327
An adaptive scaling and biasing scheme for OFDM-based visible light communication systems.
Wang, Zhaocheng; Wang, Qi; Chen, Sheng; Hanzo, Lajos
2014-05-19
Orthogonal frequency-division multiplexing (OFDM) has been widely used in visible light communication systems to achieve high-rate data transmission. Due to the nonlinear transfer characteristics of light emitting diodes (LEDs) and owing to the high peak-to-average-power ratio of OFDM signals, the transmitted signal has to be scaled and biased before modulating the LEDs. In this contribution, an adaptive scaling and biasing scheme is proposed for OFDM-based visible light communication systems, which fully exploits the dynamic range of the LEDs and improves the achievable system performance. Specifically, the proposed scheme calculates near-optimal scaling and biasing factors for each specific OFDM symbol according to the distribution of the signals, which strikes an attractive trade-off between the effective signal power and the clipping-distortion power. Our simulation results demonstrate that the proposed scheme significantly improves the performance without changing the LED's emitted power, while maintaining the same receiver structure. PMID:24921387
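The per-symbol scaling-and-biasing idea can be sketched as follows. Using a symmetric quantile of the symbol's amplitude distribution as the scaling reference is an illustrative assumption (the paper computes its near-optimal factors differently), but it captures the trade-off: a larger reference wastes dynamic range, a smaller one clips more peaks:

```python
import numpy as np

def scale_and_bias(x, i_min, i_max, clip_quantile=0.01):
    """Scale and bias one OFDM symbol into an LED's dynamic range.

    Sketch of the idea above: each symbol gets its own factors from its
    amplitude distribution, trading effective signal power against
    clipping distortion. The quantile rule is an assumption.
    """
    bias = 0.5 * (i_min + i_max)
    # Reference amplitude: ignore the most extreme samples so the bulk
    # of the signal fills the range (rare peaks are clipped instead).
    ref = np.quantile(np.abs(x), 1.0 - clip_quantile)
    scale = (i_max - bias) / ref
    return np.clip(scale * x + bias, i_min, i_max)

rng = np.random.default_rng(0)
symbol = rng.standard_normal(1024)   # stand-in for a real-valued OFDM symbol
drive = scale_and_bias(symbol, i_min=0.0, i_max=1.0)
```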
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants (no less than the threshold value) with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
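The classical building block behind the Lagrange-polynomial shares is Shamir-style (t, n) threshold sharing: a degree-(t-1) polynomial hides the secret in its constant term, so any t shares determine it and fewer reveal nothing. This sketch shows only that classical component, not the quantum scheme or the m-bonacci encoding; the field prime and parameters are illustrative assumptions:

```python
import random

PRIME = 2**127 - 1  # illustrative prime field modulus

def make_shares(secret, t, n, p=PRIME):
    """Split a secret into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares, p=PRIME):
    """Reconstruct the secret as the Lagrange interpolant at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
restored = recover(shares[:3])   # any 3 of the 5 shares suffice
```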
Novel calibration and color adaptation schemes in three-fringe RGB photoelasticity
NASA Astrophysics Data System (ADS)
Swain, Digendranath; Thomas, Binu P.; Philip, Jeby; Pillai, S. Annamala
2015-03-01
Isochromatic demodulation in digital photoelasticity using RGB calibration is a two-step process. The first step involves the construction of a look-up table (LUT) from a calibration experiment. In the second step, isochromatic data are demodulated by matching the colors of an analysis image with the colors existing in the LUT. Because the tint conditions of the actual test and of the calibration experiment vary due to different sources, color adaptation techniques for modifying an existing primary LUT are employed; however, the primary LUT is still generated from bending experiments. In this paper, RGB demodulation based on a theoretically constructed LUT has been attempted to exploit the advantages of color adaptation schemes, so that the experimental mode of LUT generation, and some uncertainties therein, can be minimized. Additionally, a new color adaptation algorithm is proposed using quadratic Lagrangian interpolation polynomials, which is numerically better than the two-point linear interpolations available in the literature. The new calibration and color adaptation schemes are validated and applied to demodulate fringe orders in live models and stress-frozen slices.
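The interpolation kernel behind the proposed color adaptation can be sketched directly: three neighboring LUT samples define a second-degree Lagrange polynomial, which reproduces curvature that a two-point linear interpolation misses. How the paper selects the three LUT neighbors is not reproduced here:

```python
def lagrange_quadratic(xs, ys, x):
    """Quadratic Lagrange interpolation through three points.

    Given sample locations xs = (x0, x1, x2) and values ys, evaluate
    the unique degree-2 interpolating polynomial at x.
    """
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2

# Exactly reproduces a quadratic trend (here y = x**2), which two-point
# linear interpolation between x = 1 and x = 2 would estimate as 2.5.
val = lagrange_quadratic([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```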
Optimized particle-mesh Ewald/multiple-time step integration for molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Batcho, Paul F.; Case, David A.; Schlick, Tamar
2001-09-01
We develop an efficient multiple time step (MTS) force splitting scheme for biological applications in the AMBER program in the context of the particle-mesh Ewald (PME) algorithm. Our method applies a symmetric Trotter factorization of the Liouville operator based on the position-Verlet scheme to Newtonian and Langevin dynamics. Following a brief review of the MTS and PME algorithms, we discuss performance speedup and the force balancing involved to maximize accuracy, maintain long-time stability, and accelerate computational times. Compared to prior MTS efforts in the context of the AMBER program, advances are possible by optimizing PME parameters for MTS applications and by using the position-Verlet, rather than velocity-Verlet, scheme for the inner loop. Moreover, ideas from the Langevin/MTS algorithm LN are applied to Newtonian formulations here. The algorithm's performance is optimized and tested on water, solvated DNA, and solvated protein systems. We find CPU speedup ratios of over 3 for Newtonian formulations when compared to a 1 fs single-step Verlet algorithm using outer time steps of 6 fs in a three-class splitting scheme; accurate conservation of energies is demonstrated over simulations of length several hundred ps. With modest Langevin forces, we obtain stable trajectories for outer time steps up to 12 fs and corresponding speedup ratios approaching 5. We end by suggesting that modified Ewald formulations, using tailored alternatives to the Gaussian screening functions for the Coulombic terms, may allow larger time steps and thus further speedups for both Newtonian and Langevin protocols; such developments are reported separately.
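The force-splitting idea can be sketched with an impulse-style (r-RESPA-like) step on a one-particle model: the slowly varying force is applied as half-kicks around an inner loop that integrates the fast force with a smaller step. The sketch uses a velocity-Verlet inner integrator and illustrative 1D model forces; the paper's optimized variant uses the position-Verlet scheme and real biomolecular force classes:

```python
def mts_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """One impulse-style multiple-time-step integration step (sketch)."""
    dt_in = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m      # slow-force half-kick
    for _ in range(n_inner):                  # fast-force inner loop
        v += 0.5 * dt_in * f_fast(x) / m
        x += dt_in * v
        v += 0.5 * dt_in * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m      # slow-force half-kick
    return x, v

# Stiff "bonded" spring plus a weak "nonbonded" spring (illustrative).
def f_fast(x):
    return -100.0 * x

def f_slow(x):
    return -1.0 * x

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = mts_step(x, v, f_fast, f_slow, dt_outer=0.05, n_inner=10)
```

With the outer step well below half the fast period, the trajectory stays stable and the total energy 0.5*v**2 + 0.5*101*x**2 remains close to its initial value of 50.5.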
Application of a solution adaptive grid scheme, SAGE, to complex three-dimensional flows
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1991-01-01
A new three-dimensional (3D) adaptive grid code based on the algebraic, solution-adaptive scheme of Nakahashi and Deiwert is developed and applied to a variety of problems. The new computer code, SAGE, is an extension of the same-named two-dimensional (2D) solution-adaptive program that has already proven to be a powerful tool in computational fluid dynamics applications. The new code has been applied to a range of complex three-dimensional, supersonic and hypersonic flows. Examples discussed are a tandem-slot fuel injector, the hypersonic forebody of the Aeroassist Flight Experiment (AFE), the 3D base flow behind the AFE, the supersonic flow around a 3D swept ramp and a generic, hypersonic, 3D nozzle-plume flow. The associated adapted grids and the solution enhancements resulting from the grid adaption are presented for these cases. Three-dimensional adaption is more complex than its 2D counterpart, and the complexities unique to the 3D problems are discussed.
Modeling solute transport in distribution networks with variable demand and time step sizes.
Peyton, Chad E.; Bilisoly, Roger Lee; Buchberger, Steven G.; McKenna, Sean Andrew; Yarrington, Lane
2004-06-01
The effect of variable demands at short time scales on the transport of a solute through a water distribution network has not previously been studied. We simulate flow and transport in a small water distribution network using EPANET to explore the effect of variable demand on solute transport across a range of hydraulic time step scales from 1 minute to 2 hours. We show that variable demands at short time scales can have the following effects: smoothing of a pulse of tracer injected into a distribution network and increasing the variability of both the transport pathway and transport timing through the network. Variable demands are simulated for these different time step sizes using a previously developed Poisson rectangular pulse (PRP) demand generator that considers demand at a node to be a combination of exponentially distributed arrival times with log-normally distributed intensities and durations. Solute is introduced at a tank and at three different network nodes and concentrations are modeled through the system using the Lagrangian transport scheme within EPANET. The transport equations within EPANET assume perfect mixing of the solute within a parcel of water and therefore physical dispersion cannot occur. However, variation in demands along the solute transport path contribute to both removal and distortion of the injected pulse. The model performance measures examined are the distribution of the Reynolds number, the variation in the center of mass of the solute across time, and the transport path and timing of the solute through the network. Variation in all three performance measures is greatest at the shortest time step sizes. As the scale of the time step increases, the variability in these performance measures decreases. The largest time steps produce results that are inconsistent with the results produced by the smaller time steps.
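The PRP demand model described above can be sketched as a superposition of rectangular pulses: exponentially distributed interarrival times with log-normally distributed intensities and durations. The specific parameter values below are illustrative assumptions, not calibrated demand statistics:

```python
import random

def prp_demands(rate, mu_i, sigma_i, mu_d, sigma_d, horizon, seed=1):
    """Poisson rectangular pulse (PRP) demand generator sketch.

    Pulse arrivals are a Poisson process (exponential interarrivals at
    the given rate); each pulse has log-normal intensity and duration.
    Returns a list of (start, duration, intensity) tuples.
    """
    rng = random.Random(seed)
    pulses, t = [], 0.0
    while True:
        t += rng.expovariate(rate)            # next pulse arrival time
        if t >= horizon:
            break
        intensity = rng.lognormvariate(mu_i, sigma_i)
        duration = rng.lognormvariate(mu_d, sigma_d)
        pulses.append((t, duration, intensity))
    return pulses

def demand_at(pulses, t):
    """Total nodal demand at time t: sum of all pulses active at t."""
    return sum(i for (start, dur, i) in pulses if start <= t < start + dur)

pulses = prp_demands(rate=5.0, mu_i=0.0, sigma_i=0.5,
                     mu_d=-1.0, sigma_d=0.5, horizon=10.0)
```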
A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs
Liu, Anfeng; Liu, Xiao; Long, Jun
2016-01-01
Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In the TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network, which can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high-trust node is selected to store marking tuples, which avoids the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in the TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high-trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566
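The two key mechanisms, adaptive marking probability and trusted storage, can be sketched on a single forwarding path. The adaptation rule (scaling a base probability by node trust) and the trust values are illustrative assumptions, not the paper's actual formulas:

```python
import random

def forward(path, base_prob, trust, rng):
    """Probabilistically mark a packet along a forwarding path (sketch).

    Each relay marks the packet with a probability adapted to its trust
    level, and each marking tuple is stored at the highest-trust node
    seen so far on the path rather than carried to the sink.
    Returns a list of (marking_node, storage_node) tuples.
    """
    marks = []
    best = None                                # highest-trust node so far
    for node in path:
        if best is None or trust[node] > trust[best]:
            best = node
        p = min(1.0, base_prob * trust[node])  # adaptive marking probability
        if rng.random() < p:
            marks.append((node, best))
    return marks

trust = {"a": 0.9, "b": 0.5, "c": 0.99}
marks = forward(["a", "b", "c"], base_prob=0.5, trust=trust,
                rng=random.Random(42))
```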
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
A fifth-order finite difference scheme for hyperbolic equations on block-adaptive curvilinear grids
NASA Astrophysics Data System (ADS)
Chen, Yuxi; Tóth, Gábor; Gombosi, Tamas I.
2016-01-01
We present a new fifth-order accurate finite difference method for hyperbolic equations on block-adaptive curvilinear grids. The scheme employs the 5th order accurate monotonicity preserving limiter MP5 to construct high order accurate face fluxes. The fifth-order accuracy of the spatial derivatives is ensured by a flux correction step. The method is generalized to curvilinear grids with a free-stream preserving discretization. It is also extended to block-adaptive grids using carefully designed ghost cell interpolation algorithms. Only three layers of ghost cells are required, and the grid blocks can be as small as 6 × 6 × 6 cells. Dynamic grid refinement and coarsening are also fifth-order accurate. All interpolation algorithms employ a general limiter based on the principles of the MP5 limiter. The finite difference scheme is fully conservative on static uniform grids. Conservation is only maintained at the truncation error level at grid resolution changes and during grid adaptation, but our numerical tests indicate that the results are still very accurate. We demonstrate the capabilities of the new method on a number of numerical tests, including smooth but non-linear problems as well as simulations involving discontinuities.
NASA Astrophysics Data System (ADS)
Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas
2002-08-01
We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.
Dynamical multiple-time stepping methods for overcoming resonance instabilities.
Chin, Siu A
2004-01-01
Current molecular dynamics simulations of biomolecules using multiple time steps to update the slowly changing force are hampered by instabilities beginning at time steps near the half period of the fastest vibrating mode. These "resonance" instabilities have become a critical barrier preventing the long-time simulation of biomolecular dynamics. Attempts to tame these instabilities by altering the slowly changing force, and efforts to damp them out by Langevin dynamics, do not address the fundamental cause of these instabilities. In this work, we trace the instability to the nonanalytic character of the underlying spectrum and show that a correct splitting of the Hamiltonian, which renders the spectrum analytic, restores stability. The resulting Hamiltonian dictates that in addition to updating the momentum due to the slowly changing force, one must also update the position with a modified mass. Thus multiple-time stepping must be done dynamically.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models in a specific relaxation test problem is assessed. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used compared to the inverse of the characteristic collision frequency for specific relaxation processes.
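The first-order-in-Δt behavior of Euler integration can be seen on the deterministic drag part of such a relaxation problem, dv/dt = -ν v (a minimal sketch; the actual collision models also carry stochastic kicks whose sampling noise can mask this error, as the abstract notes):

```python
import math

def euler_relax(v0, nu, dt, t_end):
    """First-order Euler integration of the drift part dv/dt = -nu*v of a
    Langevin collision model (deterministic slowing-down of a drift)."""
    v = v0
    for _ in range(round(t_end / dt)):
        v -= nu * v * dt
    return v

# Halving the time step halves the error: first-order convergence.
exact = math.exp(-2.0)                                   # v(2) for v0=1, nu=1
err_coarse = abs(euler_relax(1.0, 1.0, 0.10, 2.0) - exact)
err_fine = abs(euler_relax(1.0, 1.0, 0.05, 2.0) - exact)
```

With N noisy samples, the statistical error of an averaged quantity scales as 1/sqrt(N), so for modest N it can easily exceed `err_coarse`, which is the conflation the abstract warns about.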
Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach are forthcoming. The results shown so far are very encouraging.
An Adaptive Fault-Tolerant Communication Scheme for Body Sensor Networks
Wu, Guowei; Ren, Jiankang; Xia, Feng; Xu, Zichuan
2010-01-01
A high degree of reliability for critical data transmission is required in body sensor networks (BSNs). However, BSNs are usually vulnerable to channel impairments due to body fading effect and RF interference, which may potentially cause data transmission to be unreliable. In this paper, an adaptive and flexible fault-tolerant communication scheme for BSNs, namely AFTCS, is proposed. AFTCS adopts a channel bandwidth reservation strategy to provide reliable data transmission when channel impairments occur. In order to fulfill the reliability requirements of critical sensors, fault-tolerant priority and queue are employed to adaptively adjust the channel bandwidth allocation. Simulation results show that AFTCS can alleviate the effect of channel impairments, while yielding lower packet loss rate and latency for critical sensors at runtime. PMID:22163428
Dual-Time Stepping Method for Solar Wind Model in Spherical Coordinates
NASA Astrophysics Data System (ADS)
Feng, X. S.
2014-12-01
In this paper, an implicit dual-time stepping scheme based on the finite volume method in spherical coordinates with a six-component grid system is developed to model the steady-state ambient solar wind. The base numerical scheme is established by splitting the magnetohydrodynamics equations into a fluid part and a magnetic part; a finite volume method is used for the fluid part, and the constrained-transport method, which maintains the divergence-free constraint on the magnetic field, is used for the magnetic induction part. By adding a pseudo-time derivative to the magnetohydrodynamics equations for solar wind plasma, the governing equations are solved implicitly at each physical time step by advancing in pseudo time. As validation, the ambient solar wind for Carrington rotations CR 1915 (solar minimum), CR 1930 (rising phase), CR 1965 (solar maximum), and CR 2030 (declining phase) has been studied. Numerical tests with different Courant factors show the scheme's capability of producing structured solar wind, and that the physical time step can be enlarged to one hundred times that of the original one. Importantly, our numerical results demonstrate overall good agreement with observations.
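The core of dual-time stepping can be sketched on a scalar ODE: within each physical step, the residual of an implicit update is marched to zero in pseudo-time, so the converged result is the implicit (here backward-Euler) solution. This is a toy illustration under that assumption, not the paper's MHD solver; all names and parameters are illustrative.

```python
def dual_time_step(y_n, lam, dt, dtau, n_pseudo):
    """One physical step of dy/dt = -lam*y via dual-time stepping.

    March the residual R(y) = -(y - y_n)/dt - lam*y to zero in pseudo-time
    tau; the pseudo-time steady state is the backward-Euler update
    y_{n+1} = y_n / (1 + lam*dt), reached without a direct linear solve.
    """
    y = y_n
    for _ in range(n_pseudo):
        r = -(y - y_n) / dt - lam * y     # residual of the implicit update
        y += dtau * r                     # explicit pseudo-time advance
    return y
```

Because only the pseudo-time iteration must be stable, the physical step dt can be made much larger than an explicit scheme would allow, which is the mechanism behind the hundred-fold step enlargement reported in the abstract.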
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
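The recursive subdivision that generates such Cartesian grids can be sketched with a small quadtree: starting from a single root cell, any cell that a refinement criterion flags is split into four children, and the tree gives cell-to-cell connectivity for free. A minimal sketch (class and method names are illustrative, and cut-cell clipping is omitted):

```python
class Cell:
    """Node of a quadtree built by recursive subdivision of one root
    Cartesian cell; leaves are the flow cells, and the tree structure
    provides connectivity and a natural basis for solution adaptation."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self, flag, max_depth, depth=0):
        """Recursively split every flagged cell up to max_depth."""
        if depth >= max_depth or not flag(self):
            return
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h)
                         for j in (0, 1) for i in (0, 1)]
        for c in self.children:
            c.refine(flag, max_depth, depth + 1)

    def leaves(self):
        """Collect the leaf cells, i.e. the cells the solver works on."""
        if not self.children:
            return [self]
        return [l for c in self.children for l in c.leaves()]
```

Refining three levels toward a single point yields 3 coarse siblings at each intermediate level plus 4 finest cells, i.e. 10 leaves.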
An adaptive high-order hybrid scheme for compressive, viscous flows with detailed chemistry
NASA Astrophysics Data System (ADS)
Ziegler, Jack L.; Deiterding, Ralf; Shepherd, Joseph E.; Pullin, D. I.
2011-08-01
A hybrid weighted essentially non-oscillatory (WENO)/centered-difference numerical method, with low numerical dissipation, high-order shock-capturing, and structured adaptive mesh refinement (SAMR), has been developed for the direct numerical simulation of the multicomponent, compressible, reactive Navier-Stokes equations. The method enables accurate resolution of diffusive processes within reaction zones. The approach combines time-split reactive source terms with a high-order, shock-capturing scheme specifically designed for diffusive flows. A description of the order-optimized, symmetric, finite difference, flux-based, hybrid WENO/centered-difference scheme is given, along with its implementation in a high-order SAMR framework. The implementation of new techniques for discontinuity flagging, scheme-switching, and high-order prolongation and restriction is described. In particular, the refined methodology does not require upwinded WENO at grid refinement interfaces for stability, allowing high-order prolongation and thereby eliminating a significant source of numerical diffusion and improving overall code performance. A series of one- and two-dimensional test problems is used to verify the implementation, specifically the high-order accuracy of the diffusion terms. One-dimensional benchmarks include a viscous shock wave and a laminar flame. In two space dimensions, a Lamb-Oseen vortex and an unstable diffusive detonation are considered, for which quantitative convergence is demonstrated. Further, a two-dimensional high-resolution simulation of a reactive Mach reflection phenomenon with diffusive multi-species mixing is presented.
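The discontinuity flagging that drives such hybrid scheme-switching can be sketched with a simple normalized second-difference sensor: cells near a discontinuity get the dissipative shock-capturing scheme, smooth cells keep the low-dissipation centered scheme. This is a deliberately simplistic stand-in for the paper's actual sensor; the threshold and function name are illustrative.

```python
import math

def flag_cells(u, threshold=0.1):
    """Flag cell i where the normalized second difference of u is large,
    signalling that the upwinded (WENO-type) flux should be used there;
    unflagged cells keep the centered-difference flux."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        num = abs(u[i + 1] - 2 * u[i] + u[i - 1])
        den = abs(u[i + 1]) + 2 * abs(u[i]) + abs(u[i - 1]) + 1e-12
        flags[i] = num / den > threshold
    return flags

smooth = [math.sin(0.1 * i) for i in range(20)]   # smooth profile: no flags
step = [0.0] * 10 + [1.0] * 10                    # jump between cells 9 and 10
```

On the smooth profile the sensor stays quiet, so the centered scheme (and its low dissipation) is used everywhere; only the two cells straddling the jump are flagged.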
Scheduling and adaptation of London's future water supply and demand schemes under uncertainty
NASA Astrophysics Data System (ADS)
Huskova, Ivana; Matrosov, Evgenii S.; Harou, Julien J.; Kasprzyk, Joseph R.; Reed, Patrick M.
2015-04-01
The changing needs of society and the uncertainty of future conditions complicate the planning of future water infrastructure and its operating policies. These systems must meet the multi-sector demands of a range of stakeholders whose objectives often conflict. Understanding these conflicts requires exploring many alternative plans to identify possible compromise solutions and important system trade-offs. The uncertainties associated with future conditions such as climate change and population growth challenge the decision making process. Ideally, planners should consider portfolios of supply and demand management schemes represented as dynamic trajectories over time, able to adapt to the changing environment whilst considering many system goals and plausible futures. Decisions can be scheduled and adapted over the planning period to minimize the present cost of portfolios while maintaining the supply-demand balance and ecosystem services as the future unfolds. Yet such plans are difficult to identify due to the large number of alternative plans to choose from, the uncertainty of future conditions and the computational complexity of such problems. Our study optimizes London's future water supply system investments as well as their scheduling and adaptation over time using many-objective scenario optimization, an efficient water resource system simulator, and visual analytics for exploring key system trade-offs. The solutions are compared to Pareto approximate portfolios obtained from previous work where the composition of infrastructure portfolios did not change over the planning period. We explore how the visual analysis of solutions can aid decision making by investigating the implied performance trade-offs and how the individual schemes and their trajectories present in the Pareto approximate portfolios affect the system's behaviour. By doing so decision makers are given the opportunity to decide the balance between many system goals a posteriori as well as
NASA Astrophysics Data System (ADS)
Pathak, Harshavardhana S.; Shukla, Ratnesh K.
2016-08-01
A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method, including the moving mesh equations and the compressible flow solver, is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce the discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact-resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with fifth- and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients, thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth- and especially the ninth-order WENO reconstruction allows remarkably sharp capture of
On the Time Step Error of the DSMC
NASA Astrophysics Data System (ADS)
Hokazono, Tomokuni; Kobayashi, Seijiro; Ohsawa, Tomoki; Ohwada, Taku
2003-05-01
The time step truncation error of the DSMC is examined numerically. Contrary to the claim of [S.V. Bogomolov, U.S.S.R. Comput. Math. Math. Phys., Vol. 28, 79 (1988)] and in agreement with that of [T. Ohwada, J. Comput. Phys., Vol. 139, 1 (1998)], it is demonstrated that the error of the conventional DSMC per time step Δt is not O(Δt³) but O(Δt²). Further, it is shown that the error of the DSMC is reduced to O(Δt³) by applying Strang splitting for partial differential equations to the Boltzmann equation. The error resulting from the boundary condition, which is not studied in the above-mentioned theoretical studies, is also discussed.
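The order gain from Strang (symmetric) splitting over plain first-order (Lie) splitting can be verified on a scalar ODE split into two exactly solvable sub-flows, a toy analogue of splitting the Boltzmann equation into convection and collision parts. The specific ODE and names below are illustrative, not taken from the paper.

```python
import math

def lie_step(y, a, b, h):
    """First-order (Lie) splitting for dy/dt = a*y + b*y^2:
    exact linear flow over h, then exact quadratic flow over h."""
    y = y * math.exp(a * h)        # dy/dt = a*y solved exactly
    return y / (1 - b * h * y)     # dy/dt = b*y^2 solved exactly

def strang_step(y, a, b, h):
    """Second-order Strang splitting: half linear, full quadratic, half linear."""
    y = y * math.exp(a * h / 2)
    y = y / (1 - b * h * y)
    return y * math.exp(a * h / 2)

def integrate(step, y0, a, b, h, t_end):
    y = y0
    for _ in range(round(t_end / h)):
        y = step(y, a, b, h)
    return y

# Logistic test case a=1, b=-1, y(0)=0.5, exact y(1) = 1/(1 + e^-1).
exact = 1.0 / (math.exp(-1.0) + 1.0)
e_lie = [abs(integrate(lie_step, 0.5, 1.0, -1.0, h, 1.0) - exact) for h in (0.1, 0.05)]
e_str = [abs(integrate(strang_step, 0.5, 1.0, -1.0, h, 1.0) - exact) for h in (0.1, 0.05)]
```

Halving h roughly halves the Lie error but quarters the Strang error: the global order improves from one to two, mirroring the per-step O(Δt²) to O(Δt³) gain the abstract reports for the DSMC.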
NASA Astrophysics Data System (ADS)
Kuraz, Michal
2016-06-01
Modelling the transport processes in a vadose zone, e.g. modelling contaminant transport or the effect of the soil water regime on changes in soil structure and composition, plays an important role in predicting the reactions of soil biotopes to anthropogenic activity. Water flow is governed by the quasilinear Richards equation. The paper concerns the implementation of a multi-time-step approach for solving the nonlinear Richards equation. When modelling porous-media flow with the Richards equation, a stable finite element approximation requires accurate temporal and spatial integration, owing to possible convection dominance and the convergence behaviour of the nonlinear solver. The method presented here combines an adaptive domain decomposition algorithm with a multi-time-step treatment of actively changing subdomains.
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2005-01-01
The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU-intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.
A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid
NASA Astrophysics Data System (ADS)
Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-12-01
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît
2016-01-01
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
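The key idea, propagating with a cheap Hamiltonian but accepting or rejecting against the expensive reference one, can be sketched as a simplified surrogate-dynamics hybrid MC on a one-dimensional potential. This is an illustration of the general mechanism under stated assumptions, not the DHMTS algorithm itself; all names and parameters are made up for the example.

```python
import math
import random

def hybrid_mc(U_ref, U_cheap, F_cheap, x0, beta, dt, n_leap, n_samples, seed=0):
    """Surrogate-dynamics hybrid MD-MC: leapfrog trajectories are driven by
    the cheap force, but the Metropolis test uses the reference potential,
    so the chain samples exp(-beta * U_ref) despite the cheap dynamics.
    Leapfrog is reversible and volume-preserving for any position-dependent
    force, which is what makes the correction valid."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        v = rng.gauss(0.0, 1.0 / math.sqrt(beta))    # fresh momentum, unit mass
        xn, vn = x, v
        vn += 0.5 * dt * F_cheap(xn)                 # leapfrog with cheap force
        for _ in range(n_leap - 1):
            xn += dt * vn
            vn += dt * F_cheap(xn)
        xn += dt * vn
        vn += 0.5 * dt * F_cheap(xn)
        # Metropolis test on the REFERENCE Hamiltonian: the mismatch between
        # cheap and reference potentials is treated as external work.
        dH = (U_ref(xn) + 0.5 * vn * vn) - (U_ref(x) + 0.5 * v * v)
        if dH <= 0 or rng.random() < math.exp(-beta * dH):
            x = xn
        samples.append(x)
    return samples
```

For a harmonic reference potential at beta = 1 the samples should reproduce a unit-variance Gaussian even though the trajectories use a deliberately wrong (softer) force.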
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds numbers flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
Development of an adaptive tsetse population management scheme for the Luke community, Ethiopia.
Sciarretta, Andrea; Girma, Melaku; Tikubet, Getachew; Belayehun, Lulseged; Ballo, Shifa; Baumgärtner, Johann
2005-11-01
Since 1996, tsetse (Glossina spp.) control operations, using odor-baited traps, have been carried out in the Luke area of Gurage zone, southwestern Ethiopia. Glossina morsitans submorsitans Newstead was identified as the dominant species in the area, but the presence of Glossina fuscipes Newstead and Glossina pallidipes Austen also was recorded. Here, we refer to the combined number of these three species and report the work undertaken from October 2002 to October 2004 to render the control system more efficient by reducing the number of traps used and maintaining the previously reached levels of tsetse occurrence and trypanosomiasis prevalence. This was done by the design and implementation of an adaptive tsetse population management system. It consists first of an efficient community-participatory monitoring scheme that allowed us to reduce the number of traps used from 216 to 127 (107 monitoring traps and 20 control traps). Geostatistical methods, including kriging and mapping, furthermore allowed identification and monitoring of the spatiotemporal dynamics of patches with increased fly densities, referred to as hot spots. To respond to hot spots, the Luke community was advised and assisted in control trap deployment. Adaptive management was shown to be more efficient than the previously used mass trapping system. In that context, trap numbers could be reduced substantially, at the same time maintaining previously achieved levels of tsetse occurrences and disease prevalence.
NASA Astrophysics Data System (ADS)
Chen, Xianshun; Feng, Liang; Ong, Yew Soon
2012-07-01
In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to stochastic behaviours of customer demands and have a low probability of route failures, respectively, in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in VRPSD.
Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny
2015-01-01
We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3–4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time-step sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at an intermediate time-step size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest time-step sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations. PMID:25641983
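A four-level MTS hierarchy of this kind can be sketched as a recursive impulse integrator: the slowest force half-kicks the momentum, the remaining (faster) forces are recursed over with a proportionally smaller step, and the innermost level is plain velocity Verlet. The harmonic forces and step ratios below are toy stand-ins for the fluid, interface, nonbonded, and bonded force classes of the abstract.

```python
def nested_mts(x, v, forces, ratios, dt):
    """One recursive impulse-MTS step. forces[0] is the slowest force class
    (kicked once per step of size dt); forces[1:] are advanced ratios[0]
    times with step dt/ratios[0], recursively, down to velocity Verlet for
    the fastest class. Each class is thus evaluated only as often as its
    own time scale requires."""
    f = forces[0]
    if len(forces) == 1:                       # innermost: velocity Verlet
        v += 0.5 * dt * f(x)
        x += dt * v
        v += 0.5 * dt * f(x)
        return x, v
    v += 0.5 * dt * f(x)                       # half-kick from slow force
    h = dt / ratios[0]
    for _ in range(ratios[0]):                 # recurse over faster forces
        x, v = nested_mts(x, v, forces[1:], ratios[1:], h)
    v += 0.5 * dt * f(x)                       # matching half-kick
    return x, v
```

On a particle bound by four springs of widely separated stiffness, the scheme conserves total energy while evaluating the slowest force 12 times less often than the fastest.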
An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures
NASA Technical Reports Server (NTRS)
Sun, Joy Z.; Joshi, Suresh M.
2009-01-01
The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
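The multiple-model idea behind the bank of Kalman filters can be sketched for a single sensor-bias fault: one scalar filter per hypothesized bias, with the hypothesis scoring the highest innovation likelihood declared the fault estimate. A minimal sketch under those assumptions; the noise values, bias set, and function name are all illustrative, not from the paper.

```python
import math

def mm_fault_id(meas, models, var, x0=0.0, p0=0.01, q=1e-4):
    """Multiple-model fault identification: run one scalar Kalman filter per
    hypothesized sensor bias and select the hypothesis whose accumulated
    innovation log-likelihood is highest."""
    logl = []
    for bias in models:
        x, p, ll = x0, p0, 0.0
        for z in meas:
            p += q                             # random-walk process noise
            innov = (z - bias) - x             # bias-corrected innovation
            s = p + var                        # innovation variance
            ll += -0.5 * (math.log(2 * math.pi * s) + innov * innov / s)
            k = p / s                          # Kalman gain
            x += k * innov
            p *= (1 - k)
        logl.append(ll)
    return max(range(len(models)), key=lambda i: logl[i])
```

A filter built on the wrong bias hypothesis sees persistently large innovations, so its likelihood collapses and the correct fault model wins, which is the basis for the subsequent control reconfiguration.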
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-01-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by the WSNs which are deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean temperature datasets and a practical network deployment also demonstrate the effectiveness of our proposed feedback CDG scheme. PMID:27043574
Thermodynamics and kinetics of large-time-step molecular dynamics.
Rao, Francesco; Spichty, Martin
2012-02-15
Molecular dynamics (MD) simulations provide essential information about the thermodynamics and kinetics of proteins. Technological advances in both hardware and algorithms have seen this method accessing timescales that were unreachable only a few years ago. The quest to simulate slow, biologically relevant macromolecular conformational changes is still open. Here, we present an approximate approach to increase the speed of MD simulations by a factor of ∼4.5. This is achieved by using a large integration time step of 7 fs, in combination with frozen covalent bonds and look-up tables for nonbonded interactions of the solvent. Extensive atomistic MD simulations for a flexible peptide in water show that the approach reproduces the peptide's equilibrium conformational changes, preserving the essential properties of both thermodynamics and kinetics. Comparison of this approximate method with state-of-the-art implicit solvation simulations indicates that the former provides a better description of the underlying free-energy surface. Finally, simulations of a 33-residue peptide show that these fast MD settings are readily applicable to investigate biologically relevant systems.
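The look-up-table idea can be sketched for a Lennard-Jones nonbonded potential: tabulate it once on a fine uniform grid and replace each evaluation with a linear interpolation, trading a small, controllable interpolation error for speed. A minimal sketch; the grid range, resolution, and function names are illustrative choices, not the paper's settings.

```python
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential, the kind of nonbonded term tabulated."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def build_table(f, r_min, r_max, n):
    """Tabulate f on a uniform grid for fast linear-interpolation lookup."""
    dr = (r_max - r_min) / (n - 1)
    return r_min, dr, [f(r_min + i * dr) for i in range(n)]

def lookup(table, r):
    """Linear interpolation in the table; error scales as f'' * dr^2 / 8."""
    r_min, dr, vals = table
    i = int((r - r_min) / dr)
    i = max(0, min(i, len(vals) - 2))          # clamp to the table range
    w = (r - (r_min + i * dr)) / dr
    return (1 - w) * vals[i] + w * vals[i + 1]
```

With a few thousand grid points the interpolation error is far below thermal energy scales even on the steep repulsive wall, which is why tables are a safe way to cheapen solvent nonbonded terms.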
A massively parallel adaptive scheme for melt migration in geodynamics computations
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo; Grove, Ryan
2016-04-01
Melt generation and migration are important processes for the evolution of the Earth's interior and impact the global convection of the mantle. While they have been the subject of numerous investigations, the typical time and length-scales of melt transport are vastly different from global mantle convection, which determines where melt is generated. This makes it difficult to study mantle convection and melt migration in a unified framework. In addition, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. We describe our extension of the community mantle convection code ASPECT that adds equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects, and it incorporates the individual compressibilities of the solid and the fluid phase. For this, we derive an accurate and stable Finite Element scheme that can be combined with adaptive mesh refinement. This is particularly advantageous for this type of problem, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high resolution, 3d, compressible, global mantle convection simulations coupled with melt migration. Furthermore, scalable iterative linear solvers are required to solve the large linear systems arising from the discretized system. Finally, we present benchmarks and scaling tests of our solver up to tens of thousands of cores, show the effectiveness of adaptive mesh refinement when applied to melt migration and compare the
NASA Astrophysics Data System (ADS)
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
equation for the distribution of k is solved, provided that Cauchy data are appropriately assigned. In the next stage, only a limited number of passive measurements are provided. In this case, the forward and inverse PDEs are solved simultaneously. This is accomplished by adding regularization terms and filtering the pressure gradients in the inverse problem. The forward and inverse problems are coupled either simultaneously or sequentially and solved using implicit schemes, adaptive mesh refinement, and Galerkin finite elements. The final case arises when P, k, and Q data exist only at producing wells. This exceedingly ill-posed problem calls for additional constraints on the forward-inverse coupling to ensure that the production rates are satisfied at the desired locations. Results from all three cases are presented, demonstrating the stability and accuracy of the proposed approach and, more importantly, providing some insight into the consequences of data undersampling and into uncertainty propagation and quantification. We illustrate the advantages of this novel approach over common forward UQ drivers on several subsurface energy problems in porous, fractured, and/or faulted reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Operational flood control of a low-lying delta system using large time step Model Predictive Control
NASA Astrophysics Data System (ADS)
Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick
2015-01-01
The safety of low-lying deltas is threatened not only by riverine flooding but also by storm-induced coastal flooding. For the purpose of flood control, these deltas are mostly protected by a man-made environment in which dikes, dams and other adjustable structures, such as gates, barriers and pumps, are widely constructed. Instead of always reinforcing and heightening these structures, it is worth making the most of the existing infrastructure to reduce damage and manage the delta operationally as a whole. In this study, an advanced real-time control approach, Model Predictive Control (MPC), is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of both the water system and the structures. To deal with the non-linearity, a linearization scheme is applied which directly uses the gate height, instead of the structure flow, as the control variable. Given that MPC needs to compute control actions in real time, we also address computational time. A new large-time-step scheme is proposed in order to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that MPC with the large-time-step setting controls the delta system better, and much more efficiently, than conventional operational schemes.
NASA Astrophysics Data System (ADS)
Qian, Xiaoliang; Schlick, Tamar
2002-04-01
We develop an efficient multiple-time-step force splitting scheme for particle-mesh-Ewald molecular dynamics simulations. Our method exploits smooth switch functions effectively to regulate direct and reciprocal space terms for the electrostatic interactions. The reciprocal term with the near field contributions removed is assigned to the slow class; the van der Waals and regulated particle-mesh-Ewald direct-space terms, each associated with a tailored switch function, are assigned to the medium class. All other bonded terms are assigned to the fast class. This versatile protocol yields good stability and accuracy for Newtonian algorithms, with temperature and pressure coupling, as well as for Langevin dynamics. Since the van der Waals interactions need not be cut at short distances to achieve moderate speedup, this integrator represents an enhancement of our prior multiple-time-step implementation for microcanonical ensembles. Our work also tests more rigorously the stability of such splitting schemes, in combination with switching methodology. Performance of the algorithms is optimized and tested on liquid water, solvated DNA, and solvated protein systems over 400 ps or longer simulations. With a 6 fs outer time step, we find computational speedup ratios of over 6.5 for Newtonian dynamics, compared with 0.5 fs single-time-step simulations. With modest Langevin damping, an outer time step of up to 16 fs can be used with a speedup ratio of 7.5. Theoretical analyses in our appendices produce guidelines for choosing the Langevin damping constant and show the close relationship among the leapfrog Verlet, velocity Verlet, and position Verlet variants.
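Integrators of this force-splitting family apply the slower force classes as impulses at their own (longer) time steps while subcycling the fast forces; below is a minimal two-class impulse (Verlet-I/r-RESPA-style) sketch on a split harmonic force, not the paper's particle-mesh-Ewald splitting or switch functions:

```python
def mts_step(x, v, dt_outer, k, f_fast, f_slow, m=1.0):
    """One outer step: slow impulse, k velocity-Verlet substeps, slow impulse."""
    dt = dt_outer / k
    v += 0.5 * dt_outer * f_slow(x) / m      # half-kick from the slow class
    for _ in range(k):                        # subcycle the fast class
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m      # closing slow half-kick
    return x, v

# toy split: stiff 'bonded' and soft 'nonbonded' restoring forces
f_fast = lambda x: -100.0 * x   # fast frequency 10
f_slow = lambda x: -1.0 * x     # slow frequency 1
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = mts_step(x, v, dt_outer=0.05, k=10, f_fast=f_fast, f_slow=f_slow)
```

With the outer step well away from half the fast period, the scheme stays stable and conserves the total energy 0.5 v^2 + 0.5 (100 + 1) x^2 to within a small bounded oscillation.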
Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario
NASA Astrophysics Data System (ADS)
Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.
2009-12-01
Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.
A simple method for improving the time-stepping accuracy in atmosphere and ocean models
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-12-01
In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
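The RAW filter is indeed a one-line change to an RA-filtered leapfrog loop: the filter displacement is shared between the current and next time levels instead of being applied wholly to the current one. A sketch on the oscillation equation dx/dt = iωx, a standard test problem for such filters; α = 1 recovers the classical RA filter, and α = 0.53 is a commonly quoted RAW value:

```python
import cmath

def integrate(nsteps, dt, omega, nu, alpha):
    """Leapfrog for dx/dt = i*omega*x with the Robert-Asselin-Williams filter.
    alpha = 1.0 recovers the classical RA filter."""
    f = lambda x: 1j * omega * x
    x_prev = 1.0 + 0.0j                  # filtered value at step n-1
    x_curr = cmath.exp(1j * omega * dt)  # exact first step
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * f(x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_prev = x_curr + alpha * d           # filter the current level
        x_curr = x_next + (alpha - 1.0) * d   # RAW: also nudge the new level
    return x_curr

amp_ra  = abs(integrate(500, 0.1, 1.0, 0.2, 1.00))  # RA-filtered leapfrog
amp_raw = abs(integrate(500, 0.1, 1.0, 0.2, 0.53))  # RAW-filtered leapfrog
# the exact solution keeps |x| = 1; RA damps it, RAW nearly preserves it
```

The extra line `(alpha - 1.0) * d` is the entire modification, which is why the filter can be retrofitted into existing models so cheaply.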
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes, such as printing, coating, and lamination. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance under changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that they are simple for practicing engineers to design, easy to implement in real time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed.
NASA Astrophysics Data System (ADS)
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes using a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with a steepest-descent method stepping in when necessary. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each mesh adaptation, and the precision of the associated remeshing. Each factor is represented by a parameter, whose value may vary with every new mesh adaptation. We empirically show that the overall convergence time of the algorithm can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids contributed substantially to the upgrade of an open-source, finite-element-oriented programming language with respect to its 3D meshing capabilities, as well as to an outer 3D remeshing module.
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1984-01-01
A prime obstacle to the widespread use of adaptive control is the degradation of performance, and possible instability, resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output-error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model-building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state-space realizations from the inexact multivariable transfer functions that result from the identification process. A number of potential adaptive control applications of this approach are illustrated using computer simulations. Results indicate that when speed of adaptation and plant stability are not critical, the proposed schemes converge and enhance system performance.
Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.
Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura
2016-02-01
Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for the majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413
NASA Astrophysics Data System (ADS)
Mulder, W. A.; Zhebel, E.; Minisini, S.
2014-02-01
We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the leap-frog time-stepping scheme, which is second-order accurate and conditionally stable, leads to a fully explicit scheme. We provide estimates of its stability limit for simple cases, namely, the reference element with Neumann boundary conditions, its distorted version of arbitrary shape, the unit cube that can be partitioned into six tetrahedra with periodic boundary conditions, and its distortions. The Courant-Friedrichs-Lewy stability limit contains an element diameter, for which we considered different options. The one based on the sum of the eigenvalues of the spatial operator for the first-degree mass-lumped element gives the best results. It resembles the diameter of the inscribed sphere but is slightly easier to compute. The stability estimates show that the mass-lumped continuous and the discontinuous Galerkin finite elements of degree 2 have comparable stability conditions, whereas the mass-lumped elements of degrees 1 and 3 allow for larger time steps.
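The flavor of such stability estimates can be reproduced on a toy operator: for leap-frog time stepping, stability requires Δt ≤ 2/√λmax, where λmax is the largest eigenvalue magnitude of the discrete spatial operator. A 1-D second-order finite-difference stand-in (not the paper's tetrahedral elements), where the bound recovers the familiar Δt ≈ h/c:

```python
import numpy as np

n, h, c = 100, 0.01, 1.0  # grid points, spacing, wave speed (illustrative)
main, off = -2.0 * np.ones(n), np.ones(n - 1)
# discrete c^2 * d2/dx2 with Dirichlet boundaries
L = (c / h) ** 2 * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
lam_max = np.abs(np.linalg.eigvalsh(L)).max()  # approaches 4 c^2 / h^2
dt_max = 2.0 / np.sqrt(lam_max)                # leap-frog stability bound
```

For this operator lam_max is just below 4 c^2/h^2, so dt_max comes out marginally above h/c, the textbook CFL limit for this discretization.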
Zhang, Jie; Ni, Ming-Jiu
2014-01-01
The numerical simulation of magnetohydrodynamic (MHD) flows with complex boundaries has been a topic of great interest in the development of fusion reactor blankets, owing to the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. In addition, the present study is the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows; the approach is flexible and conveniently implemented under an adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid–conducting wall interface, which makes it possible to solve fluid–solid coupled magnetic problems, with emphasis on how the electric-field solver is implemented when the conductivity is discontinuous in a cut-cell. For the irregular cut-cells, a conservative interpolation technique is applied to calculate the Lorentz force at the cell center. We also show how the consistent and conservative scheme is implemented on fine/coarse mesh boundaries when the AMR technique is used. The applied numerical schemes are validated by five test simulations; excellent agreement was obtained for all the cases considered, which simultaneously showed good consistency and conservation properties.
Li, Ning; Cao, Jinde
2015-01-01
In this paper, we investigate synchronization for memristor-based neural networks with time-varying delay via an adaptive feedback controller. Under the framework of Filippov solutions and differential inclusion theory, and by using the adaptive control technique and constructing a novel Lyapunov functional, an adaptive update law is designed and two synchronization criteria are derived for memristor-based neural networks with time-varying delay. By removing some assumptions common in the literature, the derived synchronization criteria are more general than those in the existing literature. Finally, two simulation examples are provided to illustrate the effectiveness of the theoretical results. PMID:25299765
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motors (IM) for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation, especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low-pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The pre- and post-transition operation of MBC is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant-flux) control and for MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes, in terms of overshoot/undershoot peak amplitude of torque and DC-link power, in addition to energy saving during load transitions. PMID:25820090
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell
2012-01-01
The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b)…
NASA Astrophysics Data System (ADS)
Chow, C. W.; Yeh, C. H.; Liu, Y. F.; Huang, P. Y.; Liu, Y.
2013-04-01
Spectral-efficient orthogonal frequency division multiplexing (OFDM) is a promising modulation format for light-emitting-diode (LED) optical wireless (OW) visible light communication (VLC). VLC is a directional, line-of-sight communication technology; hence any offset between the optical receiver (Rx) and the LED light source results in a large drop in received optical power. In order to keep the luminance of the LED light source unchanged, we propose and demonstrate adaptive control of the OFDM modulation order to maintain the VLC transmission performance. Experimental results confirm the feasibility of the proposed scheme.
An unconditionally energy stable finite difference scheme for a stochastic Cahn-Hilliard equation
NASA Astrophysics Data System (ADS)
Li, Xiao; Qiao, ZhongHua; Zhang, Hui
2016-09-01
In this work, the MMC-TDGL equation, a stochastic Cahn-Hilliard equation, is solved numerically by using the finite difference method in combination with a convex splitting technique for the energy functional. For the non-stochastic case, we develop an unconditionally energy stable difference scheme which is proved to be uniquely solvable. For the stochastic case, by adopting the same splitting of the energy functional, we construct a similar and uniquely solvable difference scheme with the discretized stochastic term. The resulting schemes are nonlinear and are solved by Newton iteration. For long-time simulation, an adaptive time-stepping strategy is developed based on both the first- and second-order derivatives of the energy. Numerical experiments are carried out to verify the energy stability, the efficiency of the adaptive time stepping, and the effect of the stochastic term.
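An energy-based step selector of the kind described can be sketched as follows; this particular formula (using only the first derivative of the energy) and its constants follow a common choice in the phase-field literature and are assumptions, not the authors' exact strategy:

```python
import math

def adaptive_dt(dE_dt, dt_min=1e-4, dt_max=1e-1, alpha=1e3):
    """Small steps while the free energy changes fast, large steps near
    equilibrium; dt_min, dt_max and alpha are hypothetical tuning constants."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt ** 2))
```

The selector is monotone decreasing in |dE/dt| and clipped to [dt_min, dt_max], so early coarsening dynamics are resolved finely while the slow late-stage evolution uses large steps.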
An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems
NASA Astrophysics Data System (ADS)
Heydari, Ali; Balakrishnan, S. N.
2014-12-01
The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.
Zeng, Yuanyuan; Sreenan, Cormac J.; Sitanayah, Lanny; Xiong, Naixue; Park, Jong Hyuk; Zheng, Guilin
2011-01-01
Fire hazard monitoring and evacuation for building environments is a novel application area for the deployment of wireless sensor networks. In this context, adaptive routing is essential in order to ensure safe and timely data delivery in building evacuation and fire fighting resource applications. Existing routing mechanisms for wireless sensor networks are not well suited for building fires, especially as they do not consider critical and dynamic network scenarios. In this paper, an emergency-adaptive, real-time and robust routing protocol is presented for emergency situations such as building fire hazard applications. The protocol adapts to handle dynamic emergency scenarios and works well with the routing hole problem. Theoretical analysis and simulation results indicate that our protocol provides a real-time routing mechanism that is well suited for dynamic emergency scenarios in building fires when compared with other related work. PMID:22163774
Zeng, Yuanyuan; Xiong, Naixue; Park, Jong Hyuk; Zheng, Guilin
2010-01-01
Fire hazard monitoring and evacuation for building environments is a novel application area for the deployment of wireless sensor networks. In this context, adaptive routing is essential in order to ensure safe and timely data delivery in building evacuation and fire fighting resource applications. Existing routing mechanisms for wireless sensor networks are not well suited for building fires, especially as they do not consider critical and dynamic network scenarios. In this paper, an emergency-adaptive, real-time and robust routing protocol is presented for emergency situations such as building fire hazard applications. The protocol adapts to handle dynamic emergency scenarios and works well with the routing hole problem. Theoretical analysis and simulation results indicate that our protocol provides a real-time routing mechanism that is well suited for dynamic emergency scenarios in building fires when compared with other related work. PMID:22219706
A novel data adaptive detection scheme for distributed fiber optic acoustic sensing
NASA Astrophysics Data System (ADS)
Ölçer, Íbrahim; Öncü, Ahmet
2016-05-01
We introduce a new approach to distributed fiber optic sensing based on adaptive processing of phase-sensitive optical time domain reflectometry (Φ-OTDR) signals. Instead of conventional methods, which utilize frame averaging of detected signal traces, our adaptive algorithm senses a set of noise parameters to enhance the signal-to-noise ratio (SNR) for improved detection performance. From this so-called secondary data set, a weight vector for the detection of a signal is computed; the signal presence is then sought in the primary data set. This adaptive technique can be used for vibration detection in the health monitoring of various civil structures, as well as for other dynamic monitoring requirements such as pipeline and perimeter security applications.
Lee, Ji Min; Park, Sung Hwan; Kim, Jong Shik
2013-01-01
A robust control scheme is proposed for the position control of the electrohydrostatic actuator (EHA) considering hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. To reduce overshoot due to saturation of the electric motor, and to realize robustness against load disturbance and lumped system uncertainties such as varying parameters and modeling error, this paper proposes an adaptive anti-windup PID sliding-mode scheme as a robust position controller for the EHA system. An optimal PID controller and an optimal anti-windup PID controller are also designed for comparison of control performance. An EHA prototype is developed, and system modeling and parameter identification are carried out in designing the position controller. The simply identified linear model serves as the basis for the design of the position controllers, while the robustness of the control systems is compared by experiments. The adaptive anti-windup PID sliding-mode controller is found to achieve the desired performance and to be robust against hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. PMID:23983640
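The anti-windup ingredient can be isolated in a short sketch; back-calculation is one standard variant (the gains, limits, and tracking time constant below are hypothetical, and the sliding-mode and adaptive parts are omitted):

```python
def pi_antiwindup_step(e, state, dt, kp=2.0, ki=1.0, tt=0.5, u_lim=1.0):
    """One PI update with back-calculation anti-windup.
    state holds the integral term; returns (saturated output, new state)."""
    u_raw = kp * e + state
    u_sat = max(-u_lim, min(u_lim, u_raw))
    # feed the saturation excess back into the integrator so it cannot wind up
    state += dt * (ki * e + (u_sat - u_raw) / tt)
    return u_sat, state

# persistent large error: the integrator settles instead of growing without bound
I = 0.0
for _ in range(1000):
    u, I = pi_antiwindup_step(e=5.0, state=I, dt=0.01, u_lim=1.0)
```

Without the back-calculation term the integral would grow linearly for as long as the error persists, producing the large overshoot the abstract describes; with it, the integrator converges to a finite value while the output stays at the saturation limit.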
NASA Astrophysics Data System (ADS)
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For numerical simulation of detonation, the computational cost of uniform meshes is large, due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with such vastly different scales. This paper proposes an AMR method with high-order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes, making it possible for cells to communicate with each other quickly and easily. In order to achieve high-order accuracy, high-order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balanced parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform-mesh WENO scheme and the parallel AMR&WENO method. The comparison provides further insight into the high performance of the parallel AMR&WENO method.
Stevens, D.E.; Bretherton, S.
1996-12-01
This paper presents a new forward-in-time advection method for nearly incompressible flow, MU, and its application to an adaptive multilevel flow solver for atmospheric flows. MU is a modification of Leonard et al.'s UTOPIA scheme. MU, like UTOPIA, is based on third-order accurate semi-Lagrangian multidimensional upwinding for constant-velocity flows. For varying velocity fields, MU is a second-order conservative method. MU has greater stability and accuracy than UTOPIA and naturally decomposes into a monotone low-order method and a higher-order accurate correction for use with flux limiting. Its stability and accuracy make it a computationally efficient alternative to current finite-difference advection methods. We present a fully second-order accurate flow solver for the anelastic equations, a prototypical low-Mach-number flow. The flow solver is based on MU, which is used for both the momentum and scalar transport equations; it can also be implemented with any forward-in-time advection scheme. The multilevel flow solver conserves discrete global integrals of advected quantities and includes adaptive mesh refinement. Its second-order accuracy is verified using a nonlinear energy conservation integral for the anelastic equations. For a typical geophysical problem in which the flow is most rapidly varying in a small part of the domain, the multilevel flow solver achieves global accuracy comparable to a uniform-resolution simulation for 10% of the computational cost.
Lee, Ji Min; Park, Sung Hwan; Kim, Jong Shik
2013-01-01
A robust control scheme is proposed for the position control of an electrohydrostatic actuator (EHA) in the presence of hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. To reduce overshoot due to saturation of the electric motor, and to achieve robustness against load disturbance and lumped system uncertainties such as varying parameters and modeling error, this paper proposes an adaptive anti-windup PID sliding mode scheme as a robust position controller for the EHA system. An optimal PID controller and an optimal anti-windup PID controller are also designed for comparison of control performance. An EHA prototype was developed, and system modeling and parameter identification were carried out in designing the position controller. The simply identified linear model serves as the basis for the design of the position controllers, while the robustness of the control systems is compared by experiment. The adaptive anti-windup PID sliding mode controller was found to deliver the desired performance and to be robust against hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities.
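The anti-windup ingredient can be sketched in isolation. The following is a minimal back-calculation anti-windup PID, a generic textbook form rather than the authors' adaptive sliding mode design; all gains, limits, and the plant in the usage note are hypothetical.

```python
class AntiWindupPID:
    """PID controller with back-calculation anti-windup: when the
    actuator saturates, the integrator is bled off in proportion to
    the saturation excess, preventing integral windup and overshoot."""

    def __init__(self, kp, ki, kd, u_min, u_max, kb, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.kb = kb                # back-calculation (anti-windup) gain
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u_raw = self.kp * err + self.ki * self.integral + self.kd * deriv
        u_sat = max(self.u_min, min(self.u_max, u_raw))
        # Back-calculation: (u_sat - u_raw) is nonzero only in saturation
        # and drives the integral state back toward the feasible range.
        self.integral += (err + self.kb * (u_sat - u_raw)) * self.dt
        return u_sat
```

A typical use is to run this against a simple first-order plant model and verify that the output stays within the actuator limits while the tracking error decays.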
NASA Astrophysics Data System (ADS)
Nercessian, Shahan C.; Panetta, Karen A.; Agaian, Sos S.
2012-04-01
The goal of image fusion is to combine multiple source images obtained using different capture techniques into a single image that provides an effective contextual enhancement of a scene for human or machine perception. In practice, considerable value can be gained from the fusion of images that are dissimilar or complementary in nature. However, in such cases, global weighting schemes may not sufficiently weigh the contribution of the pertinent information in the source images, while existing adaptive schemes calculate weights based on the relative amounts of salient features, which can cause severe artifacts or inadequate local luminance in the fusion result. Accordingly, a new multiscale image fusion algorithm is proposed. The approximation-coefficient fusion rule of the algorithm is based on a novel similarity-based weighting scheme capable of providing improved fusion results whether the input source images are similar or dissimilar to each other. Moreover, the algorithm employs a new detail-coefficient fusion rule integrating a parametric multiscale contrast measure. The parametric nature of the contrast measure allows the degree to which psychophysical laws of human vision hold to be tuned based on image-dependent characteristics. Experimental results illustrate the superior performance of the proposed algorithm both qualitatively and quantitatively.
AZEuS: AN ADAPTIVE ZONE EULERIAN SCHEME FOR COMPUTATIONAL MAGNETOHYDRODYNAMICS
Ramsey, Jon P.; Clarke, David A.; Men'shchikov, Alexander B.
2012-03-01
A new adaptive mesh refinement (AMR) version of the ZEUS-3D astrophysical magnetohydrodynamical fluid code, AZEuS, is described. The AMR module in AZEuS has been completely adapted to the staggered mesh that characterizes the ZEUS family of codes on which scalar quantities are zone-centered and vector components are face-centered. In addition, for applications using static grids, it is necessary to use higher-order interpolations for prolongation to minimize the errors caused by waves crossing from a grid of one resolution to another. Finally, solutions to test problems in one, two, and three dimensions in both Cartesian and spherical coordinates are presented.
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated using the adaptive control method. Unlike some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of linear matrix inequalities (LMIs), which can be easily solved with the LMI Toolbox in MATLAB. A numerical example is given to illustrate the effectiveness of the method.
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal
2014-01-01
This work is motivated by robot-sensor network cooperation techniques in which sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that acts on the measurement-gathering process through mechanisms that dynamically modify the rate and variety of measurements integrated in the SLAM filter. It includes a measurement-gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors SLAM performance and dynamically selects the measurement-gathering configuration, balancing SLAM accuracy against resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods, with a lower computational burden (16%) and similar beacon energy consumption.
NASA Astrophysics Data System (ADS)
Cox, Christopher; Liang, Chunlei; Plesniak, Michael
2015-11-01
This paper reports development of a high-order compact method for solving unsteady incompressible flow on unstructured grids with implicit time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ the classical artificial compressibility treatment, where dual time stepping is needed to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time-stepping scheme. Three-dimensional results computed on many processing elements will be presented. The high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. Financial support provided under the GW Presidential Merit Fellowship.
NASA Astrophysics Data System (ADS)
Cox, Christopher; Liang, Chunlei; Plesniak, Michael W.
2016-06-01
We report development of a high-order compact flux reconstruction method for solving unsteady incompressible flow on unstructured grids with implicit dual time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. This compact high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids.
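The dual time stepping structure described above (BDF2 in physical time, pseudo-time iteration to convergence within each physical step) can be shown on a scalar model problem. This is a generic illustration of the loop structure, not the authors' flux reconstruction solver; the explicit pseudo-time march stands in for their implicit LU-SGS scheme, and the step sizes are illustrative.

```python
import math

def dual_time_solve(f, q0, dt, n_steps, dtau, tol=1e-12, max_inner=100000):
    """Solve dq/dt = f(q) with BDF2 in physical time. Each physical step
    drives the unsteady residual
        R(q) = (3q - 4 q_n + q_{n-1}) / (2 dt) - f(q)
    to zero by an explicit pseudo-time iteration q <- q - dtau * R."""
    # Bootstrap the first physical step with backward Euler, also in pseudo-time.
    q = q_nm1 = q0
    for _ in range(max_inner):
        R = (q - q_nm1) / dt - f(q)
        if abs(R) < tol:
            break
        q -= dtau * R
    q_n = q
    # Remaining steps: second-order BDF2 in physical time.
    for _ in range(n_steps - 1):
        q = q_n  # initial guess for the pseudo-time march
        for _ in range(max_inner):
            R = (3.0 * q - 4.0 * q_n + q_nm1) / (2.0 * dt) - f(q)
            if abs(R) < tol:
                break
            q -= dtau * R
        q_nm1, q_n = q_n, q
    return q_n
```

Because each physical step is iterated to a converged residual, the physical-time accuracy is set by the BDF2 discretization alone; in the incompressible case this is exactly how the divergence-free constraint ends up satisfied at every physical step.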
A new approach for determining the time step when propagating with the Lanczos algorithm
NASA Astrophysics Data System (ADS)
Mohankumar, N.; Carrington, Tucker
2010-11-01
A new criterion for choosing the time step used when numerically solving the time-dependent Schrödinger equation with the Lanczos method is presented. Following Saad, Stewart, and Leyk, an explicit expression for the time step is obtained from the remainder of the Chebyshev series of the matrix exponential.
A fast and efficient adaptive threshold rate control scheme for remote sensing images.
Chen, Xiao; Xu, Xiaoqing
2012-01-01
The JPEG2000 image compression standard is attractive for processing remote sensing images. However, its algorithm is complex and requires large amounts of memory, making it difficult to adapt to the limited transmission and storage resources available for remote sensing images. In the present study, an improved rate-control algorithm for remote sensing images is proposed. The code blocks are sorted in descending order of their numbers of bit planes prior to entropy coding. An adaptive threshold, computed from the minimum number of bit planes combined with the minimum rate-distortion slope and the compression ratio, is used to truncate the passes of each code block during Tier-1 encoding. This routine avoids encoding all coding passes and improves coding efficiency. The simulation results show that the computational cost and working buffer memory size of the proposed algorithm reach only 18.13% and 7.81%, respectively, of those of the post-compression rate-distortion algorithm, while the peak signal-to-noise ratio across the images remains almost the same. The proposed algorithm not only greatly reduces coding complexity and buffer requirements but also maintains image quality.
A Muscle Synergy-Inspired Adaptive Control Scheme for a Hybrid Walking Neuroprosthesis
Alibeji, Naji A.; Kirsch, Nicholas Andrew; Sharma, Nitin
2015-01-01
A hybrid neuroprosthesis that uses an electric-motor-based wearable exoskeleton and functional electrical stimulation (FES) has promising potential to restore walking in persons with paraplegia. A hybrid actuation structure introduces effector redundancy, making its automatic control a challenging task because multiple muscles and an additional electric motor need to be coordinated. Inspired by the muscle synergy principle, we designed a low-dimensional controller to control multiple effectors: FES of multiple muscles and electric motors. The resulting control system may be less complex and easier to control. To obtain the muscle synergy-inspired low-dimensional control, a subject-specific gait model was optimized to compute optimal control signals for the multiple effectors. The optimal control signals were then dimensionally reduced by principal component analysis to extract synergies. An adaptive feedforward controller with an update law for the synergy activation was then designed. In addition, feedback control was used to provide stability and robustness to the control design. The adaptive feedforward and feedback control structure makes the low-dimensional controller more robust to disturbances and variations in the model parameters and may help compensate for other time-varying phenomena (e.g., muscle fatigue). This is proven using a Lyapunov stability analysis, which yielded semi-global uniformly ultimately bounded tracking. Computer simulations were performed to test the new controller on a 4-degree-of-freedom gait model.
Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter
NASA Astrophysics Data System (ADS)
Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi
2013-03-01
Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications; however, the sensitivity of automated detection of architectural distortion remains a challenge. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure that selects filter parameters depending on the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis, and background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index, followed by binarization and labeling. False positives among the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). The true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
Kreis, Karsten; Tuckerman, Mark E; Donadio, Davide; Kremer, Kurt; Potestio, Raffaello
2016-07-12
Quantum delocalization of atomic nuclei affects the physical properties of many hydrogen-rich liquids and biological systems even at room temperature. In computer simulations, quantum nuclei can be modeled via the path-integral formulation of quantum statistical mechanics, which implies a substantial increase in computational overhead. By restricting the quantum description to a small spatial region, this cost can be significantly reduced. Herein, we derive a bottom-up, rigorous, Hamiltonian-based scheme that allows molecules to change from quantum to classical and vice versa on the fly as they diffuse through the system, both reducing overhead and making quantum grand-canonical simulations possible. The method is validated via simulations of low-temperature parahydrogen. Our adaptive resolution approach paves the way to efficient quantum simulations of biomolecules, membranes, and interfaces.
NASA Astrophysics Data System (ADS)
Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda
2016-03-01
Magnetic resonance imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. Conventional MRI data are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, compressed sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, while keeping the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor temporal resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space, which speeds up data acquisition and allows for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, as is desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high image quality.
NASA Astrophysics Data System (ADS)
Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.
2016-08-01
This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying
2014-05-01
A multimodal biometric system has been considered a promising technique for overcoming the defects of unimodal biometric systems. We introduce a fusion scheme to gain a better understanding of, and a fusion method for, a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, as well as achieving more powerful local Gabor features for the multiple modalities and obtaining better recognition performance through their fusion, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D
Cumberland, R.; Mesina, G.
2009-01-01
The RELAP5-3D time step method is used to perform thermal-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of the previous time step, a process that caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new way of changing time steps, in order to improve execution speed and control error. The new RELAP5-3D time step method makes the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine the performance of the new method, a measure of run time and a measure of error were plotted against a varying MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements are now under consideration for inclusion as a special option in the RELAP5-3D production code.
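The step-size rule described above is simple enough to sketch directly: the new step tracks m times the material Courant limit, is capped at twice the previous step, and is halved on failure or excessive mass error. Details beyond the abstract (such as how the MCL itself is computed) are not reproduced here.

```python
def next_time_step(dt_old, mcl, m=0.9, step_failed=False, mass_error_excessive=False):
    """Sketch of the RELAP5-3D-style time-step rule from the abstract:
    dt_new = min(m * MCL, 2 * dt_old), halved on a failed step or
    excessive mass error. m = 0.9 was the best value reported."""
    if step_failed or mass_error_excessive:
        return dt_old / 2.0
    return min(m * mcl, 2.0 * dt_old)
```

The `min` with `2.0 * dt_old` implements the factor-of-two growth cap between advancements; without it, a sudden jump in the MCL could grow the step arbitrarily in one advancement.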
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method that allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
NASA Astrophysics Data System (ADS)
Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz
2015-11-01
We propose a new adaptive block-wise lossless image compression algorithm based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). The new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate close to the entropy; however, a loss of compression performance occurs when encoding images or blocks with a limited number of active symbols compared to the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Generally, most methods add one to the frequency count of each symbol of the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set containing all the symbols actually present, called the active symbols, as an alternative to using the nominal alphabet in conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including conventional arithmetic encoders, JPEG2000, and JPEG-LS.
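The zero-frequency argument can be made concrete with a toy model. The sketch below computes the adaptive-model code length under the usual add-one initialization, once over a full nominal alphabet and once over only the block's active symbols; the paper's actual coder is more elaborate, and this is only an illustration of why shrinking the alphabet helps sparse blocks.

```python
import math

def active_alphabet(block):
    """Smallest symbol set covering the block (the 'active symbols')."""
    return sorted(set(block))

def model_cost_bits(block, alphabet):
    """Code length in bits of an adaptive model with add-one initial
    counts over `alphabet` -- the zero-frequency rule the abstract
    mentions. A larger alphabet dilutes early probabilities and
    therefore costs more bits on sparse blocks."""
    counts = {s: 1 for s in alphabet}
    total = len(alphabet)
    bits = 0.0
    for s in block:
        bits += -math.log2(counts[s] / total)  # ideal arithmetic-code cost
        counts[s] += 1
        total += 1
    return bits
```

For a block that uses only 2 of 256 nominal symbols, the active-alphabet cost is strictly lower, which is the gain the alphabet reduction scheme exploits block by block.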
Region of interest based robust watermarking scheme for adaptation in small displays
NASA Astrophysics Data System (ADS)
Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar
2010-02-01
Nowadays, multimedia data can be easily replicated, and copyright is not legally protected. Cryptography does not allow the use of digital data in its original form, and once the data are decrypted, they are no longer protected. We propose a new doubly protected digital image watermarking algorithm that embeds the watermark image blocks into adjacent regions of the host image itself, based on a block-similarity coefficient. The scheme is robust to various noise effects such as Poisson noise, Gaussian noise, and random noise, and thereby provides double security against noise and attackers. Because instrumentation applications require highly accurate data, the watermark image extracted from the watermarked image must be immune to such noise effects. Our results provide a better extracted image compared with existing techniques, and in addition we have resized the output for various displays. Adaptive resizing for displays of various sizes is also experimented with: the required information in a frame is cropped and zoomed for a large display, or resized for a small display using a threshold value; in either case the background is de-emphasized in favor of the foreground object of interest, which should be helpful when performing surgeries.
Inference for optimal dynamic treatment regimes using an adaptive m-out-of-n bootstrap scheme.
Chakraborty, Bibhas; Laber, Eric B; Zhao, Yingqi
2013-09-01
A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches to inference, such as the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing the same theoretical property. We provide an extensive simulation study comparing the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R package, available on the Comprehensive R Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example.
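The m-out-of-n bootstrap itself is easy to sketch. Below, a percentile interval for a simple nonsmooth functional, max(0, mean), serves as a toy stand-in for the nonsmooth Q-learning estimands; the paper's adaptive choice of m is not reproduced, and m is fixed by the caller.

```python
import random
import statistics

def m_out_of_n_ci(data, m, n_boot=2000, alpha=0.05, seed=0):
    """Percentile m-out-of-n bootstrap CI for theta = max(0, mean(data)),
    a toy nonsmooth functional. Resamples of size m < n (rather than n)
    restore consistency of the bootstrap at the kink of the functional."""
    rng = random.Random(seed)
    n = len(data)
    theta_hat = max(0.0, statistics.fmean(data))
    stats = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(m)]  # size m, not n
        stats.append(max(0.0, statistics.fmean(sample)))
    stats.sort()
    lo = stats[int((alpha / 2.0) * n_boot)]
    hi = stats[int((1.0 - alpha / 2.0) * n_boot) - 1]
    return theta_hat, (lo, hi)
```

The only change from the ordinary nonparametric bootstrap is the resample size m; choosing m adaptively from the data, as the paper does, is what makes the interval behave well both at and away from the nonsmooth point.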
Omelyan, Igor; Kovalenko, Andriy
2013-12-28
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
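The multiple time step splitting that MTS-MD builds on (cheap fast forces every inner step, expensive slow forces only at outer steps) can be shown with a generic reversible RESPA-style integrator. This is only the splitting skeleton under a toy split harmonic force, not the paper's method: the expensive 3D-RISM-KH solvation forces, the ASFE extrapolation, and the OIN thermostat are all absent.

```python
def mts_step(x, v, f_fast, f_slow, dt_outer, n_inner, mass=1.0):
    """One reversible multiple-time-step (RESPA-style) step:
    a half kick from the slow force, n_inner velocity-Verlet substeps
    with the fast force, then the closing slow half kick."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / mass      # slow-force half kick
    for _ in range(n_inner):                    # inner velocity Verlet
        v += 0.5 * dt_inner * f_fast(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / mass
    v += 0.5 * dt_outer * f_slow(x) / mass      # closing slow half kick
    return x, v
```

In the paper's setting the "slow force" slot is filled by the extrapolated (ASFE) solvation forces at inner steps and the converged 3D-RISM-KH forces at outer steps, which is what allows the dramatic increase of the outer step size.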
The constant displacement scheme for tracking particles in heterogeneous aquifers
Wen, X.H.; Gomez-Hernandez, J.J.
1996-01-01
Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be computationally inefficient if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step of each particle according to the local pore velocity so that every particle always travels a constant distance, is shown to be computationally faster than the constant time step method for the same degree of accuracy. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural log-transmissivity variance of 4 can be 8.6 times faster than with the constant time step scheme.
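The core of the constant displacement idea fits in a few lines: instead of a fixed Δt, each particle receives Δt = Δs/|v| from its local velocity. The 1-D sketch below is purely illustrative; the `velocity` function, the constant dispersion coefficient `D`, and all parameter values are assumptions, not from the paper, and velocity is assumed nonzero.

```python
import random

def advect_constant_displacement(x, velocity, ds, t_end, D=1e-3):
    """Advect one particle so that every step covers a fixed distance ds.

    Illustrative 1-D sketch: `velocity` and `D` (a constant dispersion
    coefficient) are hypothetical stand-ins; velocity is assumed nonzero.
    """
    t = 0.0
    while t < t_end:
        v = velocity(x)
        dt = ds / abs(v)            # constant displacement -> dt varies locally
        if t + dt > t_end:          # do not overshoot the simulation horizon
            dt = t_end - t
        # advective move plus a random-walk (dispersive) increment
        x += v * dt + random.gauss(0.0, (2.0 * D * dt) ** 0.5)
        t += dt
    return x
```

A slow particle gets a large Δt for the same Δs, so computational effort concentrates in fast-flow regions, which is the source of the reported speedup.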
Numerical time-step restrictions as a result of capillary waves
NASA Astrophysics Data System (ADS)
Denner, Fabian; van Wachem, Berend G. M.
2015-03-01
The propagation of capillary waves on material interfaces between two fluids imposes a strict constraint on the numerical time-step applied to solve the equations governing this problem and is directly associated with the stability of interfacial flow simulations. The explicit implementation of surface tension is the generally accepted reason for the restrictions on the temporal resolution caused by capillary waves. In this article, a fully-coupled numerical framework with an implicit treatment of surface tension is proposed and applied, demonstrating that the capillary time-step constraint is in fact a constraint imposed by the temporal sampling of capillary waves, irrespective of the type of implementation. The presented results show that the capillary time-step constraint can be exceeded by several orders of magnitude, with the explicit as well as the implicit treatment of surface tension, if capillary waves are absent. Furthermore, a revised capillary time-step constraint is derived by studying the temporal resolution of capillary waves based on numerical stability and signal processing theory, including the Doppler shift caused by an underlying fluid motion. The revised capillary time-step constraint assures a robust, aliasing-free result, as demonstrated by representative numerical experiments, and is in the static case less restrictive than previously proposed time-step limits associated with capillary waves.
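For reference, the widely used explicit capillary constraint that the article revisits has the form Δt ≤ sqrt((ρ1+ρ2)Δx³/(4πσ)). A minimal sketch of this classical Brackbill-style estimate (not the revised, Doppler-aware constraint derived in the paper):

```python
import math

def capillary_dt(rho1, rho2, sigma, dx):
    """Classical explicit capillary time-step limit (Brackbill-style):
    the step must resolve the period of the shortest capillary wave the
    mesh can represent. SI units assumed."""
    return math.sqrt((rho1 + rho2) * dx ** 3 / (4.0 * math.pi * sigma))
```

For an air-water interface (sigma ≈ 0.072 N/m) on a 0.1 mm mesh this gives a step of a few tens of microseconds; note the Δx^(3/2) scaling that makes mesh refinement expensive under this limit.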
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
NASA Astrophysics Data System (ADS)
Zanotti, O.; Dumbser, M.; Fambri, F.
2016-05-01
We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.
2015-01-01
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts. PMID:24555448
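One representative member of this family of splittings can be sketched concretely. The BAOAB-style step below interleaves deterministic half kicks (B) and half drifts (A) with an exact Ornstein-Uhlenbeck velocity update (O). It is a generic Langevin splitting sketch, not the specific rescaled scheme identified in the paper; all units and parameters are illustrative.

```python
import math
import random

def baoab_step(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0):
    """One BAOAB step for Langevin dynamics: half kick (B), half drift (A),
    exact Ornstein-Uhlenbeck update (O), half drift (A), half kick (B)."""
    v += 0.5 * dt * force(x) / mass                 # B
    x += 0.5 * dt * v                               # A
    c = math.exp(-gamma * dt)                       # O (solved exactly over dt)
    v = c * v + math.sqrt((1.0 - c * c) * kT / mass) * random.gauss(0.0, 1.0)
    x += 0.5 * dt * v                               # A
    v += 0.5 * dt * force(x) / mass                 # B
    return x, v
```

Because the stochastic part is solved exactly and the deterministic part is a velocity-Verlet-like symmetric splitting, configurational averages converge with small time-step bias, which is the kind of property the paper's desiderata formalize.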
Halleroed, Tomas Rylander, Thomas
2008-04-20
A stable hybridization of the finite-element method (FEM) and the finite-difference time-domain (FDTD) scheme for Maxwell's equations with electric and magnetic losses is presented for two-dimensional problems. The hybrid method combines the flexibility of the FEM with the efficiency of the FDTD scheme and it is based directly on Ampere's and Faraday's law. The electric and magnetic losses can be treated implicitly by the FEM on an unstructured mesh, which allows for local mesh refinement in order to resolve rapid variations in the material parameters and/or the electromagnetic field. It is also feasible to handle larger homogeneous regions with losses by the explicit FDTD scheme connected to an implicitly time-stepped and lossy FEM region. The hybrid method shows second-order convergence for smooth scatterers. The bistatic radar cross section (RCS) for a circular metal cylinder with a lossy coating converges to the analytical solution and an accuracy of 2% is achieved for about 20 points per wavelength. The monostatic RCS for an airfoil that features sharp corners yields a lower order of convergence and it is found to agree well with what can be expected for singular fields at the sharp corners. A careful convergence study with resolutions from 20 to 140 points per wavelength provides accurate extrapolated results for this non-trivial test case, which makes it possible to use as a reference problem for scattering codes that model both electric and magnetic losses.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
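The subcycling idea, small substeps in the region of interest and one large step elsewhere, can be illustrated with a toy forward-Euler sketch. Everything here is hypothetical: no coupling between subdomains is modelled, and the paper's overlapping-subdomain peridynamic formulation is far more involved.

```python
def multi_timestep_advance(u_coarse, u_fine, f_coarse, f_fine, dt, m):
    """Advance a coarse subdomain with one forward-Euler step of size dt and
    a fine subdomain with m substeps of size dt/m (toy sketch; the two
    subdomains are uncoupled here)."""
    u_coarse = u_coarse + dt * f_coarse(u_coarse)
    h = dt / m
    for _ in range(m):                      # subcycle the region of interest
        u_fine = u_fine + h * f_fine(u_fine)
    return u_coarse, u_fine
```

For the decay equation u' = -u, the subcycled value lands much closer to exp(-dt) than the single coarse step does, at m times the cost, but only inside the fine subdomain.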
Modified Chebyshev pseudospectral method with O(N^-1) time step restriction
NASA Technical Reports Server (NTRS)
Kosloff, Dan; Tal-Ezer, Hillel
1989-01-01
The extreme eigenvalues of the Chebyshev pseudospectral differentiation operator are O(N^2), where N is the number of grid points. As a result, the allowable time step in an explicit time marching algorithm is O(N^-2), which in many cases is much below the time step dictated by the physics of the partial differential equation. A new set of interpolating points is introduced such that the eigenvalues of the differentiation operator are O(N) and the allowable time step is O(N^-1). The properties of the new algorithm are similar to those of the Fourier method. The new algorithm also provides a highly accurate solution for non-periodic boundary value problems.
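The O(N^-2) restriction follows directly from the boundary clustering of the Chebyshev-Gauss-Lobatto points: their minimum spacing is 1 - cos(pi/N) ≈ pi^2/(2N^2). A small sketch verifying that scaling numerically:

```python
import math

def min_spacing_cgl(N):
    """Minimum spacing of the N+1 Chebyshev-Gauss-Lobatto points cos(pi*j/N)
    on [-1, 1]; boundary clustering makes it shrink like O(N^-2)."""
    pts = [math.cos(math.pi * j / N) for j in range(N + 1)]
    return min(pts[j] - pts[j + 1] for j in range(N))
```

Doubling N roughly quarters the minimum spacing, so an explicit scheme's stable step shrinks like O(N^-2); the modified interpolation points of the paper restore an O(N^-1) spacing near the boundary.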
Boosting the accuracy and speed of quantum Monte Carlo: Size consistency and time step
NASA Astrophysics Data System (ADS)
Zen, Andrea; Sorella, Sandro; Gillan, Michael J.; Michaelides, Angelos; Alfè, Dario
2016-06-01
Diffusion Monte Carlo (DMC) simulations for fermions are becoming the standard for providing high-quality reference data in systems that are too large to be investigated via quantum chemical approaches. DMC with the fixed-node approximation relies on modifications of the Green's function to avoid singularities near the nodal surface of the trial wave function. Here we show that these modifications affect the DMC energies in a way that is not size consistent, resulting in large time-step errors. Building on the modifications of Umrigar et al. and DePasquale et al. we propose a simple Green's function modification that restores size consistency to large values of the time step, which substantially reduces time-step errors. This algorithm also yields remarkable speedups of up to two orders of magnitude in the calculation of molecule-molecule binding energies and crystal cohesive energies, thus extending the horizons of what is possible with DMC.
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
Thermal fatigue: The impact of the length of time step on the amount of stress cycles
NASA Astrophysics Data System (ADS)
Beran, Pavel
2013-10-01
One of the degradation processes in stones and other building materials is caused by cyclic thermal stress. For the determination of the amount and amplitude of the thermal stress cycles may be used numerical simulation. The length of time step during simulation of thermal cycles significantly affected the magnitude and the amount of cycles because the intensity of global solar radiation may vary during the time. The dependence of temperature and stress response of the damaged stone block on the length of time step is described in this paper.
Omelyan, Igor P; Kovalenko, Andriy
2012-01-10
We propose and validate a new multiscale technique, the extrapolative isokinetic Nosé-Hoover chain orientational (EINO) motion multiple time step algorithm for rigid interaction site models of molecular liquids. It nontrivially combines the multiple time step decomposition operator method with a specific extrapolation of intermolecular interactions, complemented by an extended isokinetic Nosé-Hoover chain approach in the presence of translational and orientational degrees of freedom. The EINO algorithm obviates the limitations on time step size in molecular dynamics simulations. While the best existing multistep algorithms can advance from a 5 fs single step to a maximum 100 fs outer step, we show on the basis of molecular dynamics simulations of TIP4P water that our EINO technique overcomes this barrier. Specifically, we have achieved giant time steps on the order of 500 fs up to 5 ps, which now become available in the study of equilibrium and conformational properties of molecular liquids without a loss of stability and accuracy.
NASA Astrophysics Data System (ADS)
Tsai, T. C.; Chen, J. P.; Dearden, C.
2014-12-01
The wide variety of ice crystal shapes and growth habits makes their representation in cloud models a complicated issue. This study developed a bulk ice adaptive habit parameterization based on the theoretical approach of Chen and Lamb (1994) and introduced a six-class, double-moment (mass and number) bulk microphysics scheme with a gamma-type size distribution function. Both proposed schemes have been implemented into the Weather Research and Forecasting (WRF) model, forming a new multi-moment bulk microphysics scheme. Two new moments, ice crystal shape and volume, are included to track pristine ice's adaptive habit and apparent density. A closure technique is developed to solve the time evolution of the bulk moments. To verify the bulk ice habit parameterization, parcel-type (zero-dimensional) calculations were conducted and compared with binned numerical calculations. The results showed that a flexible size spectrum is important for numerical accuracy, that ice shape can significantly enhance diffusional growth, and that it is important to consider the memory of growth habit (adaptive growth) under varying environmental conditions. The results derived with the three-moment method were also much closer to the binned calculations. A case from the DIAMET field campaign was selected for real-case simulations in the WRF model. The simulations were performed with the traditional spherical-ice scheme and the new adaptive-shape scheme to evaluate the effect of crystal habits. The main features of the narrow rain band, as well as the embedded precipitation cells, in the cold-front case were well captured by the model. Furthermore, the simulations agreed well with aircraft observations of the microphysics, in ice particle number concentration, ice crystal aspect ratio, and deposition heating rate, especially within the temperature region of secondary ice multiplication.
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
NASA Technical Reports Server (NTRS)
Garrett, Bruce C.; Swaminathan, P. K.; Murthy, C. S.; Redmon, Michael J.
1987-01-01
A variable time step algorithm has been implemented for solving the stochastic equations of motion for gas-surface collisions. It has been tested for a simple model of electronically inelastic collisions with an insulator surface, in which the phonon manifold acts as a heat bath and electronic states are localized. In addition to reproducing the accurate nuclear dynamics of the surface atoms, numerical calculations have shown the algorithm to yield accurate ensemble averages of physical observables such as electronic transition probabilities and total energy loss of the gas atom to the surface. This new algorithm offers a gain in efficiency of up to an order of magnitude compared to fixed time step integration.
Error correction in short time steps during the application of quantum gates
NASA Astrophysics Data System (ADS)
de Castro, L. A.; Napolitano, R. d. J.
2016-04-01
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur, due to interaction with a noisy environment, during quantum gates, without modifying the codification used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription for how these gates can be constructed is provided, as well as a proof that, even in cases where dividing the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
Double loop control strategy with different time steps based on human characteristics.
Gu, Gwang Min; Lee, Jinoh; Kim, Jung
2012-01-01
This paper proposes a cooperative control strategy that takes the force sensitivity of humans into consideration. The strategy consists of two loops: an intention-estimation loop, whose sampling time can be varied in order to investigate its effect, and a position-control loop with a fixed time step. A high sampling rate is not necessary for the intention-estimation loop because of the limited bandwidth of human mechanoreceptors. In addition, the force sensor implemented in the robot is sensitive to noise induced by the sensor itself and by human tremor. Multiple experiments were performed with various time steps of the intention-estimation loop to find suitable sampling times for physical human-robot interaction. The task involved a pull-and-push movement with a two-degree-of-freedom robot, and the norm of the interaction force was recorded for each experiment as the measure of cooperative control performance.
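The two-loop structure, a slow, variable-rate intention estimator wrapped around a fast, fixed-rate position loop, can be sketched as follows. The estimator, control law, and rates are illustrative placeholders, not the paper's implementation.

```python
def dual_rate_control(force_samples, fast_per_slow, estimate_intention,
                      position_controller, x0=0.0):
    """Two-loop sketch: the target is re-estimated only once every
    fast_per_slow fast ticks from the force signal, while the position
    loop runs on every tick. All names and laws are hypothetical."""
    x, target, trajectory = x0, 0.0, []
    for i, f in enumerate(force_samples):
        if i % fast_per_slow == 0:              # slow loop (variable rate)
            target = estimate_intention(f)
        x = position_controller(x, target)      # fast loop (fixed time step)
        trajectory.append(x)
    return trajectory
```

Varying `fast_per_slow` in such a structure is the software analogue of the paper's experiments with different intention-estimation sampling times.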
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin
2016-03-01
Current commercialized CAD schemes have high false-positive (FP) detection rates and also have high correlations in positive lesion detection with radiologists. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global feature based CAD approach/scheme that can cue the warning sign on the cases with high risk of being positive. In this study, we investigate the possibility of fusing global feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from the global feature extraction (features extracted from the whole breast regions) are different from and can provide supplementary information to the locally-extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection scores (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (<= 1 FP per image). Namely, increases of sensitivities (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI and Case-based results, respectively.
ERIC Educational Resources Information Center
La Malfa, Giampaolo; Lassi, Stefano; Bertelli, Marco; Albertini, Giorgio; Dosen, Anton
2009-01-01
The importance of emotional aspects in developing cognitive and social abilities has already been underlined by many authors even if there is no unanimous agreement on the factors constituting adaptive abilities, nor is there any on the way to measure them or on the relation between adaptive ability and cognitive level. The purposes of this study…
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. Despite many attempts, to date there is no widely accepted and readily available non-invasive technique for monitoring blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
NASA Astrophysics Data System (ADS)
Hornby, P. G.
2005-12-01
Understanding chemical and thermal processes taking place in hydrothermal mineral deposition systems could well be a key to unlocking new mineral reserves through improved targeting of exploration efforts. To aid in this understanding it is very helpful to be able to model such processes with sufficient fidelity to test process hypotheses. To gain understanding, it is often sufficient to obtain semi-quantitative results that model the broad aspects of the complex set of thermal and chemical effects taking place in hydrothermal systems. For example, it is often sufficient to gain an understanding of where thermal, geometric and chemical factors converge to precipitate gold (say) without being perfectly precise about how much gold is precipitated. The traditional approach is to use incompressible Darcy flow together with the Boussinesq approximation. From the flow field, the heat equation is used to advect-conduct the heat. The flow field is also used to transport solutes by solving an advection-dispersion-diffusion equation. The reactions in the fluid and between fluid and rock act as source terms for these advection-dispersion equations. Many existing modelling systems that are used for simulating such systems use explicit time marching schemes and finite differences. The disadvantage of this approach is the need to work on rectilinear grids and the number of time steps required by the Courant condition in the solute transport step. The second factor can be particularly significant if the chemical system is complex, requiring (at a minimum) an equilibrium calculation at each grid point at each time step. In the approach we describe, we use finite elements rather than finite differences, and the pressure, heat and advection-dispersion equations are solved implicitly. The general idea is to put unconditional numerical stability of the time integration first, and let accuracy assume a secondary role. It is in this sense that the method is semi-quantitative. However
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell
2012-01-01
The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b) left- and right-arm movements for a woman who tended to hold both arms/hands tight against her body (Study II), and (c) touching object cues on a computer screen for a girl who rarely used her residual vision for orienting/guiding her hand responses. The technology involved microswitches/sensors to detect the response schemes and a computer/control system to record their occurrences and activate preferred stimuli contingent on them. Results showed large increases in the response schemes targeted for each of the three participants during the intervention phases of the studies. The importance of using technology-based programs as tools for enabling persons with profound and multiple disabilities to practice relevant responses independently was discussed.
NASA Astrophysics Data System (ADS)
Mitty, Todd Jay
In order to compute accurate numerical solutions, it is necessary to maintain a small discretization error. This error may be controlled by adjusting the spacing between discrete points. However, it is preferable to selectively choose locations for such alterations rather than affect the entire domain. This leads to the notion of solution adaptation. A new solution adaptive technique was developed for solving the Euler equations within Delaunay tessellated, three-dimensional computational domains possessing planar boundaries. Mesh generation and solution adaptive mesh enrichment were achieved through application of the Bowyer/Watson algorithm, and resolution criteria were explored to determine the set of adapted points. The example of a reflecting oblique shock wave flow was used to validate the method. The other aspect of this research involved the application of the solution adaptation method to supersonic inviscid rotational flows modeled by introducing vorticity through an inlet flow velocity distribution. The extent to which this model represents shock wave/boundary layer interactions was examined by comparing the solution adapted predictions with corresponding experimental data and published viscous computations for a 20 deg fin at nominal Mach numbers of both 3 and 4. The solution adaptation provided well resolved inviscid solutions which were sufficiently accurate to allow testing of the hypothesis that inviscid rotational features dominate the shock wave/boundary layer interaction flows.
NASA Astrophysics Data System (ADS)
Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao
2016-09-01
The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime (<35%). Our analysis of the hydrologic alteration revealed the smallest alteration at time steps ranging from 1 to 7 days. However, longer time steps led to higher water supply reliability to meet human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.
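The mechanism by which a longer operation time step raises supply reliability can be seen in a toy release rule: deciding the release once per k-day block from the block-mean inflow smooths inflow variability. This sketch is purely illustrative; the study uses a full optimization model with storage dynamics, not this rule.

```python
def reliability_for_timestep(daily_inflow, daily_demand, k):
    """Fraction of days on which demand is met when the release is decided
    once per k-day block as the block-mean inflow (toy rule, not the paper's
    optimization model; no reservoir storage is modelled)."""
    met = total = 0
    for start in range(0, len(daily_inflow) - k + 1, k):
        release = sum(daily_inflow[start:start + k]) / k  # one decision per block
        for d in range(k):
            total += 1
            if release >= daily_demand[start + d]:
                met += 1
    return met / total
```

With an alternating inflow of 0 and 10 units against a constant demand of 5, a 1-day step meets demand only half the time while a 2-day step meets it every day, at the cost of a release pattern that no longer tracks the natural flow regime.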
Sensitivity of The High-resolution Wam Model With Respect To Time Step
NASA Astrophysics Data System (ADS)
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) are a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with characteristic horizontal scales of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has a cliff up to 50 m high that is frequently covered by tall forest. The area also contains numerous banks with water depths of only a couple of metres that may essentially modify wave properties nearby owing to topographic effects. These features suggest that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of time step. In our experiments, a medium-resolution model of the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady 20 m/s wind blowing from different directions and with two time steps (1 and 3 minutes). For most wind directions, the rms difference of significant wave heights calculated with the two time steps did not exceed 10 cm and was typically of the order of a few percent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean of significant wave
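The "few tens of seconds" figure follows from a simple CFL estimate: the fastest wave component must not cross more than one grid cell per step. The sketch below uses the shallow-water phase speed sqrt(g*h) as an upper bound; this is a hedged illustration, since WAM actually propagates spectral components at frequency- and depth-dependent group speeds.

```python
import math

def cfl_time_step(dx_m, depth_m, courant=1.0, g=9.81):
    """Largest stable step for advecting the fastest wave component across
    one grid cell, bounding the speed by the shallow-water phase speed
    sqrt(g*h). Illustrative only; WAM uses spectral group speeds."""
    return courant * dx_m / math.sqrt(g * depth_m)
```

On a 1 km grid over 50 m deep water this gives a step of roughly 45 s, consistent with the abstract's estimate of a few tens of seconds.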
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
Zhang Ying; Liang Haozhao; Meng Jie
2009-08-26
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus ^{12}C as an example, even with nonlocal potentials, direct ITS evolution of the Dirac equation still runs into the disaster of the Dirac sea. However, following the recipe of our former investigation, this disaster can be avoided by ITS evolution of the corresponding Schrödinger-like equation without localization, which gives convergent results identical to those obtained iteratively by the shooting method with localized effective potentials.
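The ITS idea itself is easy to demonstrate on a non-relativistic problem, where the spectrum is bounded below and no Dirac-sea issue arises. The following sketch (my own illustration, not the authors' Dirac solver) evolves a 1D harmonic-oscillator wavefunction in imaginary time so that excited components decay and the ground state survives:

```python
import numpy as np

# 1D grid in units hbar = m = omega = 1; potential V(x) = x^2 / 2
x = np.linspace(-8.0, 8.0, 161)
dx = x[1] - x[0]
V = 0.5 * x**2

def apply_H(psi):
    """H psi = -psi''/2 + V psi, second-order finite difference Laplacian."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

# Imaginary-time evolution: psi <- psi - dt * H psi, renormalized each step.
# Components along excited states decay as exp(-(E_n - E_0) * tau).
psi = np.exp(-((x - 1.0) ** 2))            # arbitrary start with ground-state overlap
dt = 0.002                                 # small enough for explicit stability
for _ in range(5000):
    psi -= dt * apply_H(psi)
    psi /= np.sqrt(np.sum(psi**2) * dx)    # renormalize

E0 = np.sum(psi * apply_H(psi)) * dx       # <psi|H|psi>; exact value is 0.5
```

For the Dirac equation this naive descent fails precisely because the spectrum is unbounded below, which is the "disaster" the abstract refers to.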
Botts, Jonathan; Savioja, Lauri
2015-04-01
For time-domain modeling based on the acoustic wave equation, spectral methods have recently demonstrated promise. This letter presents an extension of a spectral domain decomposition approach, previously used to solve the lossless linear wave equation, which accommodates frequency-dependent atmospheric attenuation and assignment of arbitrary dispersion relations. Frequency-dependence is straightforward to assign when time-stepping is done in the spectral domain, so combined losses from molecular relaxation, thermal conductivity, and viscosity can be approximated with little extra computation or storage. A mode update free from numerical dispersion is derived, and the model is confirmed with a numerical experiment.
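The key convenience claimed above, that per-frequency losses are easy to assign when time stepping happens in the spectral domain, can be sketched as a per-mode exponential update. The dispersion relation and the attenuation law below are placeholders of my own, not the paper's model:

```python
import numpy as np

c = 343.0        # sound speed, m/s (assumed)
dt = 1e-4        # time step, s (assumed)

def step_modes(u_k, k, alpha_of_k):
    """Advance each spatial mode one step: u_k(t+dt) = u_k * exp((-alpha + i*omega) * dt).

    Because every wavenumber evolves independently, arbitrary dispersion
    omega(k) and attenuation alpha(k) can be assigned mode by mode.
    """
    omega = c * k                      # lossless linear dispersion (assumption)
    return u_k * np.exp((-alpha_of_k(k) + 1j * omega) * dt)

# Illustrative attenuation growing with frequency, a stand-in for combined
# relaxation/thermoviscous losses rather than the paper's exact coefficients.
alpha = lambda k: 1e-3 * (c * k) ** 0.5

k = np.array([1.0, 10.0, 100.0])
u0 = np.ones(3, dtype=complex)
u1 = step_modes(u0, k, alpha)
# Each amplitude decays by exactly exp(-alpha(k) * dt); phase advances by omega * dt.
```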
Kumar, Ravi
2014-01-01
The semiblind channel estimation method provides the best trade-off in terms of bandwidth overhead, computational complexity and latency. Using multiple-input multiple-output (MIMO) systems yields higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space-time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results have been compared in the semiblind environment, which shows an improvement even in highly correlated antenna arrays and is found to be very close to the case in which channel state information (CSI) is known. PMID:24688379
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2015-04-01
Nowadays, most hydrological catchment models are designed to allow streamflow simulation at different time scales. While this permits models to be applied for broader purposes, it can also be a source of error in simulating hydrological processes at the catchment scale. Such errors do not seem to significantly affect simple conceptual models, but this flexibility may lead to large behavioural errors in physically based models. The equations used for processes such as the time variation of soil moisture are usually representative at certain time scales but may not properly characterize water transfer in soil layers at larger scales. This effect is especially relevant as we move from a detailed hourly scale to a daily time step, both common time scales for catchment streamflow simulation in research and management practice. This study aims to provide an objective methodology for identifying the degree of similarity of optimal parameter values when hydrological catchment model calibration is performed at different time scales, thus informing the discussion of the physical significance of parameters in hydrological models. In this research, we analyze the influence of simulation time scale on: 1) the optimal values of six highly sensitive parameters of the TOPLATS model and 2) streamflow simulation efficiency, while optimization is carried out at different time scales. TOPLATS (TOPMODEL-based Land-Atmosphere Transfer Scheme) has been applied in its lumped version to three catchments of varying size located in northern Spain. The model is based on shallow groundwater gradients (related to local topography) that set up spatial patterns of soil moisture and are assumed to control infiltration and runoff during storm events and evaporation and drainage between storm events. The model calculates the saturated portion of the catchment at each time step based on Topographic Index (TI) intervals. Surface
ERIC Educational Resources Information Center
Sanchez, Purificacion
2009-01-01
The Bologna Declaration attempts to reform the structure of the higher education system in forty-six European countries in a convergent way. By 2010, the European space for higher education should be completed. In the 2005-2006 academic year, the University of Murcia, Spain, started promoting initiatives to adapt individual modules and entire…
Critical time step for a bilinear laminated composite Mindlin shell element.
Hammerand, Daniel Carl
2004-06-01
The critical time step needed for explicit time integration of laminated shell finite element models is presented. Each layer is restricted to be orthotropic when viewed from a properly oriented material coordinate system. Mindlin shell theory is used in determining the laminated response that includes the effects of transverse shear. The effects of the membrane-bending coupling matrix from the laminate material model are included. Such a coupling matrix arises even in the case of non-symmetric lay-ups of differing isotropic layers. Single point integration is assumed to be used in determining a uniform strain response from the element. Using a technique based upon one from the literature, reduced eigenvalue problems are established to determine the remaining non-zero frequencies. It is shown that the eigenvalue problem arising from the in-plane normal and shear stresses is decoupled from that arising from the transverse shear stresses. A verification example is presented where the exact and approximate results are compared.
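The underlying bound is the classical one for explicit central-difference integration: dt_crit = 2 / omega_max, with omega_max the highest element frequency from the generalized eigenvalue problem. The sketch below applies this bound to a two-node bar element, a far simpler element than the laminated Mindlin shell of the paper, but the same eigenvalue construction:

```python
import numpy as np

def critical_time_step(K, M):
    """dt_crit = 2 / omega_max for explicit central-difference integration,
    where omega_max^2 is the largest eigenvalue of M^-1 K."""
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    omega_max = np.sqrt(np.max(lam.real))
    return 2.0 / omega_max

# Two-node bar element (length L, modulus E, density rho, area A), lumped mass.
E, rho, A, L = 210e9, 7800.0, 1e-4, 0.01
K = (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
M = (rho * A * L / 2.0) * np.eye(2)

dt = critical_time_step(K, M)
# For this element the bound reduces to dt = L / c with c = sqrt(E / rho):
# the time a dilatational wave needs to cross the element.
```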
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore further advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high order non-dissipative base scheme, and an adaptive nonlinear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules by design. The idea of designing a well-balanced filter scheme is therefore straightforward: choose a well-balanced base scheme together with a well-balanced filter (both of high order). A typical class of such schemes, shown in this paper, is high order central difference/predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of filter methods with well-balanced properties: it can preserve certain steady state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Extended particle-in-cell schemes for physics in ultrastrong laser fields: Review and developments.
Gonoskov, A; Bastrakov, S; Efimenko, E; Ilderton, A; Marklund, M; Meyerov, I; Muraviev, A; Sergeev, A; Surmin, I; Wallin, E
2015-08-01
We review common extensions of particle-in-cell (PIC) schemes which account for strong field phenomena in laser-plasma interactions. After describing the physical processes of interest and their numerical implementation, we provide solutions for several associated methodological and algorithmic problems. We propose a modified event generator that precisely models the entire spectrum of incoherent particle emission without any low-energy cutoff, and which imposes close to the weakest possible demands on the numerical time step. Based on this, we also develop an adaptive event generator that subdivides the time step for locally resolving QED events, allowing for efficient simulation of cascades. Further, we present a unified technical interface for including the processes of interest in different PIC implementations. Two PIC codes which support this interface, PICADOR and ELMIS, are also briefly reviewed.
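The adaptive event-generator idea, subdividing the PIC time step so that each sub-step resolves the QED event probability, can be sketched in miniature. This is only an illustration of the subdivision logic under a single constant Poisson rate; the real generators sample full emission spectra and are considerably more involved:

```python
import math
import random

def sample_events(rate, dt, p_max=0.1, rng=random.random):
    """Subdivide a time step so each sub-step has small event probability,
    then sample events sub-step by sub-step.

    rate : event rate (assumed constant over the step, an illustration only)
    dt   : PIC time step
    p_max: target upper bound on per-sub-step event probability
    """
    # Choose n so that rate * (dt / n) <= p_max, i.e. events stay resolved.
    n = max(1, math.ceil(rate * dt / p_max))
    sub_dt = dt / n
    p = 1.0 - math.exp(-rate * sub_dt)   # event probability per sub-step
    return sum(1 for _ in range(n) if rng() < p)

# A step with rate * dt = 5 would be badly under-resolved at one sample per
# step; here it is automatically split into 50 sub-steps with p <= ~0.1.
events = sample_events(rate=5.0, dt=1.0)
```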
NASA Technical Reports Server (NTRS)
Steger, J. L.; Dougherty, F. C.; Benek, J. A.
1983-01-01
A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.
Shou, Guofa; Xia, Ling; Jiang, Mingfeng; Wei, Qing; Liu, Feng; Crozier, Stuart
2009-05-01
The boundary element method (BEM) is a commonly used numerical approach for solving biomedical electromagnetic volume conductor models, such as ECG and EEG problems, in which only the interfaces between tissue regions need to be modeled. The quality of the boundary element discretization affects the accuracy of the numerical solution, and constructing high-quality meshes is time-consuming and problem-dependent. Adaptive BEM (aBEM) has been developed and validated as an effective way to tackle such problems in electromagnetic and mechanical fields, but has not been extensively investigated for the ECG problem. In this paper, h aBEM, which produces refined meshes through adaptive adjustment of element connectivity, is investigated for the ECG forward problem. Two refinement schemes, adding one new node (SH1) and adding three new nodes (SH3), are applied in the h aBEM calculation. To save computational time, h-hierarchical aBEM is also used, through the introduction of h-hierarchical shape functions for SH3. The algorithms were evaluated with a single-layer homogeneous sphere model with assumed dipole sources and a geometrically realistic heart-torso model. The simulations showed that h aBEM produces better meshes and is more accurate and effective than traditional BEM for the ECG problem, while, with the same refinement scheme SH3, h-hierarchical aBEM reduces computational cost by about 9% compared with the standard h aBEM implementation.
NASA Astrophysics Data System (ADS)
Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis
2015-11-01
This study presents the implementation of an interface sharpening scheme based on the Volume of Fluid (VOF) method, as well as its application to a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation alongside the standard VOF model equations is proposed, offering the advantage of restraining interface numerical diffusion while keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation is applied in order to save computational time. The advantages of the proposed sharpening scheme are that a) it is mass-conservative, so its application does not compromise one of the most important benefits of the VOF method, and b) it can be used on coarser grids, since the suppression of numerical diffusion is grid-independent. Coupling of the solved equation with an adaptive local grid refinement technique further decreases computational time while keeping high accuracy in the area of maximum interest (the interface). The numerical algorithm is first tested against two theoretical benchmark cases for interface tracking methodologies, followed by validation for a free-falling water droplet accelerated by gravity, as well as normal liquid droplet impingement onto a flat substrate. Results indicate that coupling the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases interface numerical diffusion, but also leaves the induced velocity field less perturbed by spurious velocities across the liquid-gas interface. With the proposed algorithmic flow path, coarser grids can replace finer ones at only a slight expense in accuracy.
Owolabi, Kolade M; Patidar, Kailash C
2016-01-01
In this paper, we consider numerical simulations of an extended nonlinear form of the Kierstead-Slobodkin reaction-transport system in one and two dimensions. We employ the popular fourth-order exponential time differencing Runge-Kutta (ETDRK4) scheme proposed by Cox and Matthews (J Comput Phys 176:430-455, 2002) and modified by Kassam and Trefethen (SIAM J Sci Comput 26:1214-1233, 2005) for the time integration of spatially discretized partial differential equations. We demonstrate the superiority of ETDRK4 over existing standard exponential time differencing integrators and provide timings and error comparisons. The numerical results obtained in this paper give further insight into the question 'What is the minimal size of the spatial domain so that the population persists?' posed by Kierstead and Slobodkin (J Mar Res 12:141-147, 1953), with the conclusive remark that the population size increases with the size of the domain. To examine the biological wave phenomena of the solutions, we present numerical results in both one- and two-dimensional space, which have interesting ecological implications. Initial data and parameter values were chosen to mimic some existing patterns.
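The ETDRK4 scheme of Cox and Matthews treats the linear part of u' = c·u + N(u) exactly and the nonlinearity with fourth-order accuracy. A minimal scalar sketch, applied here to a logistic test equation of my choosing rather than the paper's reaction-transport system:

```python
import math

def etdrk4_step(u, h, c, N):
    """One scalar ETDRK4 step (Cox & Matthews, 2002) for u' = c*u + N(u)."""
    z = c * h
    E, E2 = math.exp(z), math.exp(z / 2.0)
    q = (E2 - 1.0) / c                    # equals (h/2) * phi_1(z/2)
    a = u * E2 + q * N(u)
    b = u * E2 + q * N(a)
    cc = a * E2 + q * (2.0 * N(b) - N(u))
    f1 = h * (-4.0 - z + E * (4.0 - 3.0 * z + z * z)) / z**3
    f2 = h * (2.0 + z + E * (-2.0 + z)) / z**3
    f3 = h * (-4.0 - 3.0 * z - z * z + E * (4.0 - z)) / z**3
    return u * E + f1 * N(u) + 2.0 * f2 * (N(a) + N(b)) + f3 * N(cc)

# Logistic test problem u' = u - u^2: linear part c = 1, nonlinear N(u) = -u^2.
# The exact solution u(t) = u0*e^t / (1 + u0*(e^t - 1)) lets us check accuracy.
u, h = 0.1, 0.1
for _ in range(50):                       # integrate to t = 5
    u = etdrk4_step(u, h, 1.0, lambda v: -v * v)
exact = 0.1 * math.exp(5.0) / (1.0 + 0.1 * (math.exp(5.0) - 1.0))
```

For stiff PDE systems c becomes a diagonalized linear operator and the f-coefficients are evaluated by the contour-integral trick of Kassam and Trefethen to avoid cancellation for small z.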
NASA Astrophysics Data System (ADS)
Wodo, Olga; Ganapathysubramanian, Baskar
2011-07-01
We present an efficient numerical framework for analyzing spinodal decomposition described by the Cahn-Hilliard equation. We focus on the analysis of various implicit time schemes for two- and three-dimensional problems. We demonstrate that significant computational gains can be obtained by applying embedded, higher order Runge-Kutta methods in a time-adaptive setting. This allows accessing time scales that vary by five orders of magnitude. In addition, we formulate a set of test problems that isolate each of the sub-processes involved in spinodal decomposition: interface creation and bulk phase coarsening. We analyze the error fluctuations using these test problems on the split form of the Cahn-Hilliard equation solved with the finite element method using basis functions of different orders. Any scheme that ensures at least four elements per interface satisfactorily captures both sub-processes. Our findings show that linear basis functions have superior error-to-cost properties. This strategy, coupled with a domain-decomposition-based parallel implementation, lets us notably improve the efficiency of a numerical Cahn-Hilliard solver and opens new avenues for its practical application, especially for three-dimensional problems. We use this framework to address the isoperimetric problem of identifying local solutions in the periodic cube in three dimensions. The framework is able to generate all five hypothesized candidates for the local solution of the periodic isoperimetric problem in 3D: sphere, cylinder, lamella, doubly periodic surface with genus two (Lawson surface) and triply periodic minimal surface (P Schwarz surface).
Numerical investigation of BB-AMR scheme using entropy production as refinement criterion
NASA Astrophysics Data System (ADS)
Altazin, Thomas; Ersoy, Mehmet; Golay, Frédéric; Sous, Damien; Yushchenko, Lyudmyla
2016-03-01
In this work, a parallel finite volume scheme on unstructured meshes is applied to fluid flows governed by multidimensional hyperbolic systems of conservation laws. It is based on a block-based adaptive mesh refinement strategy which allows quick meshing and easy parallelisation. As a continuation and extension of previous work, the numerical density of entropy production is used as the mesh refinement criterion, combined with a local time-stepping method to contain the computational time. We then numerically investigate its efficiency through several test cases, comparing against exact solutions or experimental data.
ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing
NASA Astrophysics Data System (ADS)
Wise, John H.; Abel, Tom
2011-07-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical
Adaptive numerical algorithms in space weather modeling
NASA Astrophysics Data System (ADS)
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles; however, few quantitative studies on evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Two quantitative indices, the GVE (group velocity error) and the MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are then proposed to assess the position and shape errors of the simulated signal. In this way, simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. The proper element size, considering different element types, and time step, considering different time-integration schemes, are then selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
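The shape index above can be sketched directly. This is my reading of a maximum normalized cross-correlation coefficient, with the peak lag as the position offset; the paper's exact definitions (and its group-velocity error, GVE) are not reproduced here:

```python
import numpy as np

def maccc(sim, ref):
    """Maximum absolute normalized cross-correlation coefficient between a
    simulated signal and a reference waveform (shape agreement), together
    with the lag at which it occurs (position offset in samples)."""
    s = (sim - sim.mean()) / (sim.std() * len(sim))
    r = (ref - ref.mean()) / ref.std()
    cc = np.correlate(s, r, mode="full")       # coefficients lie in [-1, 1]
    best = int(np.argmax(np.abs(cc)))
    lag = best - (len(ref) - 1)                # positive lag: sim delayed vs ref
    return float(np.abs(cc[best])), lag

# A time-shifted copy of a tone burst should correlate almost perfectly,
# with the lag recovering the imposed shift.
t = np.linspace(0.0, 1.0, 500)
ref = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
sim = np.roll(ref, 40)                         # delayed "simulated" signal
score, lag = maccc(sim, ref)
```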
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
NASA Technical Reports Server (NTRS)
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problem. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th order Runge-Kutta time stepping with a 4th order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptively, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984
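The payoff of local versus global time stepping is easy to quantify in outline. The sketch below (an illustration with made-up mesh widths, not the paper's test cases) computes per-element stable steps from a local stability bound:

```python
def local_time_steps(h, speed, cfl=0.9):
    """Per-element stable time step dt_i = cfl * h_i / s_i for an explicit
    scheme: with local time stepping each element advances at its own dt
    instead of everyone taking the global minimum."""
    return [cfl * hi / si for hi, si in zip(h, speed)]

# Element sizes spanning two orders of magnitude, as in locally refined
# meshes: global time stepping would impose min(dt) everywhere, while local
# time stepping lets the coarse elements take steps ~100x larger.
h = [0.01, 0.1, 1.0]          # local mesh widths (illustrative units)
s = [1.0, 1.0, 1.0]           # local wave speeds
dts = local_time_steps(h, s)
ratio = max(dts) / min(dts)   # 100 here: the headroom local stepping exploits
```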
Adaptive Multiresolution or Adaptive Mesh Refinement? A Case Study for 2D Euler Equations
Deiterding, Ralf; Domingues, Margarete O.; Gomes, Sonia M.; Roussel, Olivier; Schneider, Kai
2009-01-01
We present adaptive multiresolution (MR) computations of the two-dimensional compressible Euler equations for a classical Riemann problem. The results are then compared with respect to accuracy and computational efficiency, in terms of CPU time and memory requirements, with the corresponding finite volume scheme on a regular grid. For the same test-case, we also perform computations using adaptive mesh refinement (AMR) imposing similar accuracy requirements. The results thus obtained are compared in terms of computational overhead and compression of the computational grid, using in addition either local or global time stepping strategies. We preliminarily conclude that the multiresolution techniques yield improved memory compression and gain in CPU time with respect to the adaptive mesh refinement method.
The impact of time step definition on code convergence and robustness
NASA Technical Reports Server (NTRS)
Venkateswaran, S.; Weiss, J. M.; Merkle, C. L.
1992-01-01
We have implemented preconditioning for multi-species reacting flows in two independent codes, an implicit (ADI) code developed in-house and the RPLUS code (developed at LeRC). The RPLUS code was modified to work on a four-stage Runge-Kutta scheme. The performance of both the codes was tested, and it was shown that preconditioning can improve convergence by a factor of two to a hundred depending on the problem. Our efforts are currently focused on evaluating the effect of chemical sources and on assessing how preconditioning may be applied to improve convergence and robustness in the calculation of reacting flows.
NASA Astrophysics Data System (ADS)
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. The results show that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows at both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input models (rainfall plus inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, which is attributed to the reduced noise in the data as well as to the techniques and their training approach, the appropriate selection of network architecture and inputs, and the training-testing ratios of the data set. The slightly poorer performance with distributed data is due to its larger variations and smaller number of observed values.
Coombes, P J; Barry, M E
2007-01-01
The use of domestic rainwater tanks with back up from mains water supplies in urban areas can produce considerable reductions in mains water demands and stormwater runoff. It is commonplace to analyse the performance of rainwater tanks using continuous simulation with daily time steps and average water use assumptions. This paper compares this simplistic analysis to more detailed analysis that employs 6 minute time steps and climate dependent water demand. The use of daily time steps produced considerable under-estimation of annual rainwater yields that were dependent on tank size, rain depth, seasonal distribution of rainfall, water demand and tank configuration. It is shown that analysis of the performance of rainwater tanks is critically dependent on detailed inputs.
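The sensitivity to time-step choice described in this abstract can be reproduced with a toy behavioural (yield-after-spillage) tank balance. This is a generic textbook storage model, not the authors' model, and the numbers below are purely illustrative: aggregating sub-daily inflow and demand into daily totals lets the tank spill water that a finer step would have supplied.

```python
def simulate_tank(rain_inflow, demand, capacity, s0=0.0):
    """Simple yield-after-spillage behavioural model of a rainwater tank.

    rain_inflow and demand are sequences over equal time steps
    (same volume units, e.g. litres per step)."""
    s, yield_total = s0, 0.0
    for q_in, d in zip(rain_inflow, demand):
        s = min(s + q_in, capacity)   # add inflow, spill any excess
        supplied = min(d, s)          # meet demand from storage where possible
        s -= supplied
        yield_total += supplied
    return yield_total

# Two half-day steps versus one daily step with the same totals:
fine = simulate_tank([80.0, 80.0], [60.0, 60.0], capacity=100.0)   # 120.0
daily = simulate_tank([160.0], [120.0], capacity=100.0)            # 100.0
```

With sub-daily steps, daytime use empties part of the tank before later inflow arrives, so less water spills and the computed yield is higher, consistent with the daily-step under-estimation reported above.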
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
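The adaptive idea — choosing tau so that the expected relative change in the state stays below a tolerance — can be sketched for a well-mixed system; this is a minimal single-reaction tau-leaping sketch in the spirit of standard step-size selection rules, not the authors' reaction-diffusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_tau_leap(x0, k, t_end, eps=0.03):
    """Tau-leaping for the decay reaction A -> 0 with propensity a = k*x.

    The step tau is chosen so the expected relative change in x stays
    below eps (here tau = eps/k), a simplified adaptive selection rule."""
    x, t = x0, 0.0
    while t < t_end and x > 0:
        a = k * x
        tau = min(eps * x / a, t_end - t)   # adaptive step, capped at t_end
        dn = rng.poisson(a * tau)           # number of firings in [t, t+tau)
        x = max(x - dn, 0)                  # crude guard against negative counts
        t += tau
    return x
```

For x0 = 10000, k = 1 and t_end = 1 the result scatters around the exact mean x0*exp(-1) ≈ 3679 molecules, at a small fraction of the cost of an exact simulation.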
NASA Technical Reports Server (NTRS)
Glocer, A.; Toth, G.; Ma, Y.; Gombosi, T.; Zhang, J.-C.; Kistler, L. M.
2009-01-01
The magnetosphere contains a significant amount of ionospheric O+, particularly during geomagnetically active times. The presence of ionospheric plasma in the magnetosphere has a notable impact on magnetospheric composition and processes. We present a new multifluid MHD version of the Block-Adaptive-Tree Solar wind Roe-type Upwind Scheme model of the magnetosphere to track the fate and consequences of ionospheric outflow. The multifluid MHD equations are presented, as are the novel techniques for overcoming the formidable challenges associated with solving them. Our new model is then applied to the May 4, 1998 and March 31, 2001 geomagnetic storms. The results are juxtaposed with traditional single-fluid MHD and multispecies MHD simulations from a previous study, thereby allowing us to assess the benefits of using a more complex model with additional physics. We find that our multifluid MHD model (with outflow) gives comparable results to the multispecies MHD model (with outflow), including a more strongly negative Dst, reduced CPCP, and a drastically improved magnetic field at geosynchronous orbit, as compared to single-fluid MHD with no outflow. Significant differences in composition and magnetic field are found between the multispecies and multifluid approaches further away from the Earth. We further demonstrate the ability to explore pressure and bulk velocity differences between H+ and O+, which is not possible with the other techniques considered.
Erni, Daniel; Liebig, Thorsten; Rennings, Andreas; Koster, Norbert H L; Fröhlich, Jürg
2011-01-01
We propose an adaptive RF antenna system for the excitation (and manipulation) of the fundamental circular waveguide mode (TE(11)) in the context of high-field (7T) traveling-wave magnetic resonance imaging (MRI). The system consists of
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
NASA Astrophysics Data System (ADS)
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
of both climate and ecosystems must be done at coarse grid resolutions; smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features, and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
TVD schemes for open channel flow
NASA Astrophysics Data System (ADS)
Delis, A. I.; Skeels, C. P.
1998-04-01
The Saint Venant equations for modelling flow in open channels are solved in this paper, using a variety of total variation diminishing (TVD) schemes. The performance of second- and third-order-accurate TVD schemes is investigated for the computation of free-surface flows, in predicting dam-breaks and extreme flow conditions created by the river bed topography. Convergence of the schemes is quantified by comparing error norms between subsequent iterations. Automatically calculated time steps and entropy corrections allow high CFL numbers and smooth transition between different conditions. In order to compare different approaches with TVD schemes, the most accurate of each type was chosen. All four schemes chosen proved acceptably accurate. However, there are important differences between the schemes in the occurrence of clipping, overshooting and oscillating behaviour and in the highest CFL numbers allowed by a scheme. These variations in behaviour stem from the different orders and inherent properties of the four schemes.
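The clipping and overshoot behaviour discussed above is exactly what TVD limiting controls. As a minimal illustration (not one of the four schemes compared in the paper), here is a second-order MUSCL-type update with a minmod limiter for linear advection, which keeps the total variation from growing for Courant numbers up to one.

```python
import numpy as np

def minmod(a, b):
    """Limited slope: the smaller of a, b when they agree in sign, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """MUSCL-type update for u_t + a u_x = 0, a > 0, on a periodic grid.

    c = a*dt/dx is the Courant number; the scheme is TVD for 0 <= c <= 1."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    ul = u + 0.5 * (1 - c) * s                          # upwind-side face values
    return u - c * (ul - np.roll(ul, 1))

def total_variation(u):
    """Total variation on a periodic grid, including the wrap-around jump."""
    return np.abs(np.diff(np.append(u, u[0]))).sum()
```

Advecting a square wave with this step produces no new overshoots or undershoots, unlike an unlimited second-order scheme.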
NASA Astrophysics Data System (ADS)
Alerskans, Emy; Kaas, Eigil
2016-04-01
In semi-Lagrangian models used for climate and NWP the trajectories are normally determined kinematically. Here we propose a new method for calculating trajectories in a more dynamically consistent way by pre-integrating the governing equations in a pseudo-Lagrangian manner using a short time step. Only non-advective adiabatic terms are included in this calculation, i.e., the Coriolis and pressure gradient forces plus gravity in the momentum equations, and the divergence term in the continuity equation. This integration is performed with a forward-backward time step. Optionally, the tendencies are filtered with a local space filter, which reduces the phase speed of short-wavelength gravity and sound waves. The filter relaxes the time step limitation related to high frequency oscillations without compromising the locality of the solution, and can be considered an alternative to less local or global semi-implicit solvers. Once trajectories are estimated over a complete long advective time step, the full set of governing equations is stepped forward using these trajectories in combination with a flux-form semi-Lagrangian formulation of the equations. The methodology is designed to improve consistency and scalability on massively parallel systems, although here it has only been verified that the technique produces realistic results in a shallow water model and a 2D model based on the full Euler equations.
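The forward-backward idea — update one variable explicitly, then update the other using the freshly computed value — can be sketched for the linearized 1D shallow-water system. This toy omits the Coriolis terms, the space filter, and the semi-Lagrangian machinery of the abstract; the grid, depth, and centered differencing are illustrative assumptions.

```python
import numpy as np

def forward_backward_step(h, u, dt, dx, g=9.81, H=100.0):
    """One forward-backward step for the linearized 1D shallow-water system
    h_t = -H u_x, u_t = -g h_x on a periodic grid (centered differences).

    h is advanced with the old u; u is then advanced with the NEW h,
    which is what makes the scheme forward-backward rather than forward-forward."""
    h_new = h - dt * H * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    u_new = u - dt * g * (np.roll(h_new, -1) - np.roll(h_new, 1)) / (2 * dx)
    return h_new, u_new
```

Because the backward substitution makes the update symplectic-Euler-like for each Fourier mode, the wave energy stays bounded (it oscillates slightly instead of growing), unlike a purely forward step, which is unconditionally unstable for this system.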
Composite centered schemes for multidimensional conservation laws
Liska, R.; Wendroff, B.
1998-05-08
The oscillations of a centered second order finite difference scheme and the excessive diffusion of a first order centered scheme can be overcome by global composition of the two, that is by performing cycles consisting of several time steps of the second order method followed by one step of the diffusive method. The authors show the effectiveness of this approach on some test problems in two and three dimensions.
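The cycle described above — several second-order steps followed by one diffusive step — can be sketched for linear advection using Lax-Wendroff as the oscillatory scheme and Lax-Friedrichs as the diffusive one. This is a minimal periodic 1D sketch of the composition idea, not the authors' multidimensional implementation, and the 9:1 cycle ratio is illustrative.

```python
import numpy as np

def lax_friedrichs(u, c):
    """Diffusive first-order step for u_t + a u_x = 0 (periodic), c = a*dt/dx."""
    return 0.5 * (np.roll(u, -1) + np.roll(u, 1)) - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))

def lax_wendroff(u, c):
    """Second-order centered step; accurate but oscillatory at discontinuities."""
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
            + 0.5 * c**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

def composite_advance(u, c, cycles, k):
    """Each cycle: k Lax-Wendroff steps, then one Lax-Friedrichs step
    whose numerical diffusion damps the accumulated oscillations."""
    for _ in range(cycles):
        for _ in range(k):
            u = lax_wendroff(u, c)
        u = lax_friedrichs(u, c)
    return u
```

Both building blocks are conservative, so the composite conserves the discrete integral exactly while the periodic diffusive step keeps the overshoots near discontinuities small.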
NASA Astrophysics Data System (ADS)
Anderson, Robert; Pember, Richard; Elliott, Noah
2001-11-01
We present a method, ALE-AMR, for modeling unsteady compressible flow that combines a staggered grid arbitrary Lagrangian-Eulerian (ALE) scheme with structured local adaptive mesh refinement (AMR). The ALE method is a three-step scheme on a staggered grid of quadrilateral cells: Lagrangian advance, mesh relaxation, and remap. The AMR scheme uses a mesh hierarchy that is dynamic in time and is composed of nested structured grids of varying resolution. The integration algorithm on the hierarchy is a recursive procedure in which the coarse grids are advanced a single time step, the fine grids are advanced to the same time, and the coarse and fine grid solutions are synchronized. The novel details of ALE-AMR are primarily motivated by the need to reconcile and extend AMR techniques typically employed for stationary rectangular meshes with cell-centered quantities to the moving quadrilateral meshes with staggered quantities used in the ALE scheme. Solutions of several test problems are discussed.
Park, Sung-Yun; Cho, Jihyun; Lee, Kyuseok; Yoon, Euisik
2015-12-01
We report a pulse width modulation (PWM) buck converter that is able to achieve a power conversion efficiency (PCE) of > 80% at light loads (~100 μA) for implantable biomedical systems. In order to achieve a high PCE for the given light loads, the buck converter adaptively reconfigures the size of the power PMOS and NMOS transistors and their gate drivers in accordance with the load current, while operating at a fixed frequency of 1 MHz. The buck converter employs an analog-digital hybrid control scheme for coarse/fine adjustment of the power transistors. The coarse digital control generates an approximate duty cycle necessary for driving a given load and selects an appropriate width of the power transistors to minimize redundant power dissipation. The fine analog control provides the final tuning of the duty cycle to compensate for the error from the coarse digital control. The mode switching between the analog and digital controls is accomplished by a mode arbiter which estimates the average of the duty cycles for the given load condition from the limit cycle oscillations (LCO) induced by coarse adjustment. The fabricated buck converter achieved a peak efficiency of 86.3% at 1.4 mA and > 80% efficiency for a wide range of load conditions from 45 μA to 4.1 mA, while generating a 1 V output from a 2.5-3.3 V supply. The converter occupies 0.375 mm(2) in a 0.18 μm CMOS process and requires two external components: a 1.2 μF capacitor and a 6.8 μH inductor.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two time step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two time step method is used, as opposed to the one step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time-averaged step is used at initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering
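The switching logic of the two-time-step method reduces to a threshold test on the water concentration. The sketch below shows only that branch selection; the two correlation functions are hypothetical placeholders standing in for the fits produced from the GLSENS output, which are not given in the abstract.

```python
WATER_SWITCH = 1.0e-20  # moles/cc, the threshold quoted in the abstract

def chemical_kinetic_time(c_water, averaged_corr, instantaneous_corr, state):
    """Select between the two branches of the two-time-step scheme.

    averaged_corr and instantaneous_corr are hypothetical correlation
    functions fitted elsewhere; only the switching logic is shown."""
    if c_water < WATER_SWITCH:
        return averaged_corr(state)       # step one: time-averaged correlation
    return instantaneous_corr(state)      # step two: instantaneous correlation
```

The selected kinetic time would then be compared with the turbulent mixing time to decide which process limits the overall reaction rate.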
Time-stepping methods for the simulation of the self-assembly of nano-crystals in MATLAB on a GPU
NASA Astrophysics Data System (ADS)
Korzec, M. D.; Ahnert, T.
2013-10-01
Partial differential equations describing the patterning of thin crystalline films are typically of fourth or sixth order, they are quasi- or semilinear, and they are mostly defined on simple geometries such as rectangular domains. For the numerical simulation of these kinds of problems spectral methods are an efficient approach. We apply several implicit-explicit (IMEX) schemes to one recently derived PDE that we express in terms of coefficients of trigonometric interpolants. While the simplest IMEX scheme turns out to have the mildest step-size restriction, higher order SBDF schemes tend to be more unstable, and exponential time integrators are fastest for the calculation of very accurate solutions. We implemented a reduced model in the EXPINT package syntax [3] and compared various exponential schemes. A convexity splitting approach was employed to stabilize the SBDF1 scheme. We show that accuracy control is crucial when using this idea, and therefore present a time-adaptive SBDF1/SBDF1-2-step method that yields convincing results reflecting the change in timescales during topological changes of the nanostructures. The implementation of all presented methods is carried out in MATLAB. We used the open source GPUmat package to gain up to 5-fold runtime benefits by carrying out calculations on a low-cost GPU, without requiring any knowledge of low-level programming or CUDA, and found speedups comparable to those obtained with MATLAB's PCT or with GPUmat run on Octave.
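The SBDF1 (first-order semi-implicit backward differentiation) building block treats the stiff linear operator implicitly and the nonlinearity explicitly, which is trivial in Fourier space because the linear operator is diagonal there. The sketch below illustrates that idea on a generic periodic problem; it is a minimal stand-in, not the sixth-order thin-film PDE or the adaptive 2-step method of the abstract.

```python
import numpy as np

def sbdf1_step(u_hat, dt, L_hat, nonlinear):
    """One SBDF1 (IMEX Euler) step in Fourier space for u_t = L u + N(u):

        (1 - dt*L) u^{n+1} = u^n + dt*N(u^n)

    L_hat is the (diagonal) Fourier symbol of the stiff linear operator,
    treated implicitly; the nonlinearity N is evaluated explicitly."""
    n_hat = np.fft.fft(nonlinear(np.real(np.fft.ifft(u_hat))))
    return (u_hat + dt * n_hat) / (1.0 - dt * L_hat)
```

With N = 0 and L the heat operator (L_hat = -k^2) this reduces to backward Euler for the heat equation, so a sine mode decays at approximately exp(-t) regardless of how stiff the high wavenumbers are — the step-size restriction of a fully explicit scheme disappears.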
NASA Technical Reports Server (NTRS)
Marek, C. John; Molnar, Melissa
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two time step kinetic scheme. The first time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (greater than 1 x 10(exp -20) moles per cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T(sub 4)). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T(sub 4)) as a function of overall fuel/air ratio, pressure and initial temperature (T(sub 3)). High values of the regression coefficient R squared are obtained.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (> 1 x 10(exp -20) moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R2 are obtained.
Simulation of rapidly varying flow using an efficient TVD-MacCormack scheme
NASA Astrophysics Data System (ADS)
Liang, Dongfang; Lin, Binliang; Falconer, Roger A.
2007-02-01
An efficient numerical scheme is outlined for solving the SWEs (shallow water equations) in environmental flow; this scheme includes the addition of a five-point symmetric total variation diminishing (TVD) term to the corrector step of the standard MacCormack scheme. The paper shows that the discretization of the conservative and non-conservative forms of the SWEs leads to the same finite difference scheme when the source term is discretized in a certain way. The non-conservative form is used in the solution outlined herein, since this formulation is simpler and more efficient. The time step is determined adaptively, based on the maximum instantaneous Courant number across the domain. The bed friction is included either explicitly or implicitly in the computational algorithm according to the local water depth. The wetting and drying process is simulated in a manner which complements the use of operator-splitting and two-stage numerical schemes. The numerical model was then applied to a hypothetical dam-break scenario, an experimental dam-break case and an extreme flooding event over the Toce River valley physical model. The predicted results are free of spurious oscillations for both sub- and super-critical flows, and the predictions compare favourably with the experimental measurements.
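The adaptive time-step rule mentioned above — pick dt from the maximum instantaneous Courant number over the domain — can be written in a few lines for the shallow water equations. A minimal sketch, with the dry-cell threshold and CFL value chosen for illustration only:

```python
import numpy as np

def adaptive_dt(h, u, dx, g=9.81, cfl=0.8):
    """Time step from the maximum instantaneous Courant number:

        dt = cfl * dx / max(|u| + sqrt(g*h))

    taken over wet cells only, for 1D shallow water with depth h and velocity u."""
    wet = h > 1e-6                                 # ignore (nearly) dry cells
    speed = np.abs(u[wet]) + np.sqrt(g * h[wet])   # characteristic wave speed
    return cfl * dx / speed.max()
```

Re-evaluating dt every step lets the model take long steps in quiescent regions and automatically shorten them as a dam-break front accelerates.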
NASA Astrophysics Data System (ADS)
Önskog, Thomas; Zhang, Jun
2015-12-01
In this paper, we present a stochastic particle algorithm for the simulation of flows of wall-confined gases with diffuse reflection boundary conditions. Based on the theoretical observation that the change in location of the particles consists of a deterministic part and a Wiener process if the time scale is much larger than the relaxation time, a new estimate for the first hitting time at the boundary is obtained. This estimate facilitates the construction of an algorithm with large time steps for wall-confined flows. Numerical simulations verify that the proposed algorithm reproduces the correct boundary behaviour.
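The abstract's point is that a large-step particle scheme must account for wall hits that occur inside a step, not just at its endpoints. The paper derives its own hitting-time estimate (including the relaxation-time correction); as a simpler illustration of the endpoint-only pitfall, the sketch below samples the standard Brownian-bridge wall-hit probability exp(-x_old*x_new/(D*dt)) within each Euler-Maruyama step, which is an assumption of pure Brownian motion between endpoints.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_with_wall(x, drift, D, dt):
    """Euler-Maruyama step for dx = drift*dt + sqrt(2*D)*dW near a wall at x = 0.

    Even if both endpoints are on the allowed side (x > 0), the continuous
    path may have touched the wall; treating the in-step path as a Brownian
    bridge gives the hit probability exp(-x_old*x_new/(D*dt))."""
    x_new = x + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    if x_new <= 0:
        return x_new, True                     # endpoint already past the wall
    p_hit = np.exp(-x * x_new / (D * dt))      # bridge hitting probability
    return x_new, bool(rng.random() < p_hit)
```

A particle far from the wall almost never registers a hit, while one that skims the wall does so with the correct probability, which is what allows large time steps without losing the boundary interactions.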
Nonlinear wave propagation using three different finite difference schemes (category 2 application)
NASA Technical Reports Server (NTRS)
Pope, D. Stuart; Hardin, J. C.
1995-01-01
Three common finite difference schemes are used to examine the computation of one-dimensional nonlinear wave propagation. The schemes are studied for their responses to numerical parameters such as time step selection, boundary condition implementation, and discretization of governing equations. The performance of the schemes is compared and various numerical phenomena peculiar to each is discussed.
NASA Astrophysics Data System (ADS)
Zhou, Ruhong; Harder, Edward; Xu, Huafeng; Berne, B. J.
2001-08-01
The particle-particle particle-mesh (P3M) method for calculating long-range electrostatic forces in molecular simulations is modified and combined with the reversible reference system propagator algorithm (RESPA) for treating the multiple time scale problems in the molecular dynamics of complex systems with multiple time scales and long-range forces. The resulting particle-particle particle-mesh Ewald RESPA (P3ME/RESPA) method provides a fast and accurate representation of the long-range electrostatic interactions for biomolecular systems such as protein solutions. The method presented here uses a different breakup of the electrostatic forces than was used by other authors when they combined the Particle Mesh Ewald method with RESPA. The usual breakup is inefficient because it treats the reciprocal space forces in an outer loop even though they contain a part that changes rapidly in time. This does not allow use of a large time step for the outer loop. Here, we capture the short-range contributions in the reciprocal space forces and include them in the inner loop, thereby allowing for larger outer loop time steps and thus for a much more efficient RESPA implementation. The new approach has been applied to both regular Ewald and P3ME. The timings of Ewald/RESPA and P3ME/RESPA are compared in detail with the previous approach for protein water solutions as a function of number of atoms in the system, and significant speedups are reported.
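The core RESPA idea — kick with slow forces at the long outer step and integrate the fast forces with a short inner step — fits in a few lines. This is a generic reversible two-level RESPA sketch for illustration, not the P3ME force breakup of the paper; the force split and parameters are assumptions.

```python
def respa_step(x, v, m, f_slow, f_fast, dt, n_inner):
    """One reversible RESPA step: slow forces kick at the outer step dt,
    fast forces are integrated with velocity Verlet at dt/n_inner."""
    v = v + 0.5 * dt * f_slow(x) / m          # half kick with the slow force
    h = dt / n_inner
    for _ in range(n_inner):                  # inner velocity-Verlet loop
        v = v + 0.5 * h * f_fast(x) / m
        x = x + h * v
        v = v + 0.5 * h * f_fast(x) / m
    v = v + 0.5 * dt * f_slow(x) / m          # closing half kick, slow force
    return x, v
```

The paper's efficiency gain comes from moving the rapidly varying short-range part of the reciprocal-space force into f_fast, so that the expensive, slowly varying remainder need only be evaluated once per outer step.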
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
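One widely used member of the third-order Runge-Kutta family studied in such analyses is the strong-stability-preserving (Shu-Osher) scheme; the sketch below is that standard method, offered as a concrete example rather than one of the report's five derived variants.

```python
def ssp_rk3_step(f, t, y, dt):
    """Third-order strong-stability-preserving Runge-Kutta (Shu-Osher) step
    for y' = f(t, y), written as convex combinations of forward-Euler steps."""
    k1 = y + dt * f(t, y)                                   # Euler predictor
    k2 = 0.75 * y + 0.25 * (k1 + dt * f(t + dt, k1))        # second stage
    return y / 3.0 + (2.0 / 3.0) * (k2 + dt * f(t + 0.5 * dt, k2))
```

Applied to y' = y, its stability polynomial 1 + z + z^2/2 + z^3/6 matches the exponential through third order, so integrating from y(0) = 1 to t = 1 with small steps reproduces e to high accuracy.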
Ruiz-Garbajosa, Patricia; Bonten, Marc J M; Robinson, D Ashley; Top, Janetta; Nallapareddy, Sreedhar R; Torres, Carmen; Coque, Teresa M; Cantón, Rafael; Baquero, Fernando; Murray, Barbara E; del Campo, Rosa; Willems, Rob J L
2006-06-01
A multilocus sequence typing (MLST) scheme based on seven housekeeping genes was used to investigate the epidemiology and population structure of Enterococcus faecalis. MLST of 110 isolates from different sources and geographic locations revealed 55 different sequence types that grouped into four major clonal complexes (CC2, CC9, CC10, and CC21) by use of eBURST. Two of these clonal complexes, CC2 and CC9, are particularly fit in the hospital environment, as CC2 includes the previously described BVE clonal complex identified by an alternative MLST scheme and CC9 includes exclusively isolates from hospitalized patients. Identical alleles were found in genetically diverse isolates with no linkage disequilibrium, while the different MLST loci gave incongruent phylogenetic trees. This demonstrates that recombination is an important mechanism driving genetic variation in E. faecalis and suggests an epidemic population structure for E. faecalis. Our novel MLST scheme provides an excellent tool for investigating local and short-term epidemiology as well as global epidemiology, population structure, and genetic evolution of E. faecalis.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC), or even simple FORTRAN codes that are being developed at Glenn. The two time step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two step method is used, as opposed to the one step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3).
Divergence-Free Adaptive Mesh Refinement for Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2001-12-01
Several physical systems, such as nonrelativistic and relativistic magnetohydrodynamics (MHD), radiation MHD, electromagnetics, and incompressible hydrodynamics, satisfy Stokes'-law-type equations for the divergence-free evolution of vector fields. In this paper we present a full-fledged scheme for the second-order accurate, divergence-free evolution of vector fields on an adaptive mesh refinement (AMR) hierarchy. We focus here on adaptive mesh MHD; however, the scheme has applicability to the other systems of equations mentioned above. The scheme is based on making a significant advance in the divergence-free reconstruction of vector fields. In that sense, it complements the earlier work of D. S. Balsara and D. S. Spicer (1999, J. Comput. Phys. 149, 270), where we discussed the divergence-free time-update of vector fields which satisfy Stokes'-law-type evolution equations. Our advance in divergence-free reconstruction of vector fields is such that it reduces to the total variation diminishing (TVD) property for one-dimensional evolution and yet goes beyond it in multiple dimensions. For that reason, it is extremely suitable for the construction of higher order Godunov schemes for MHD. Both the two-dimensional and three-dimensional reconstruction strategies are developed. A slight extension of the divergence-free reconstruction procedure yields a divergence-free prolongation strategy for prolonging magnetic fields on AMR hierarchies. Divergence-free restriction is also discussed. Because our work is based on an integral formulation, divergence-free restriction and prolongation can be carried out on AMR meshes with any integral refinement ratio, though we specialize the expressions for the most popular situation where the refinement ratio is two. Furthermore, we pay attention to the fact that in order to efficiently evolve the MHD equations on AMR hierarchies, the refined meshes must evolve in time with time steps that are a fraction of their parent mesh's time step.
Datta, Dipayan Gauss, Jürgen
2015-07-07
We report analytical calculations of isotropic hyperfine-coupling constants in radicals using a spin-adapted open-shell coupled-cluster theory, namely, the unitary group based combinatoric open-shell coupled-cluster (COSCC) approach within the singles and doubles approximation. A scheme for the evaluation of the one-particle spin-density matrix required in these calculations is outlined within the spin-free formulation of the COSCC approach. In this scheme, the one-particle spin-density matrix for an open-shell state with spin S and M_S = +S is expressed in terms of the one- and two-particle spin-free (charge) density matrices obtained from the Lagrangian formulation that is used for calculating the analytic first derivatives of the energy. Benchmark calculations are presented for NO, NCO, CH2CN, and two conjugated π-radicals, viz., allyl and 1-pyrrolyl in order to demonstrate the performance of the proposed scheme.
Fukuda, Ryoichi Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of the SAC-CI method for including the PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.
On the Dynamics of TVD Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Kutler, Paul (Technical Monitor)
1994-01-01
The dynamics of a class of TVD schemes for model hyperbolic and parabolic equations is studied numerically using a highly parallel supercomputer (CM-5). The objective is to exploit the high degree of parallelism of the CM-5 to reveal the reliable ranges of the time step and entropy parameter, and the degree of compression of the flux limiters, needed to avoid slow convergence and the production of nonphysical numerical solutions. We study the nonlinear stability of TVD schemes numerically because it is not amenable to analytical treatment.
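One representative member of the class of TVD schemes being studied can be sketched for linear advection with a minmod limiter. This is a standard textbook scheme, assumed here purely for illustration; it is not necessarily the exact scheme exercised on the CM-5:

```python
import numpy as np

def minmod(a, b):
    # zero at extrema and sign changes, the smaller slope elsewhere
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One minmod-limited (MUSCL-type) step for u_t + a u_x = 0, a > 0,
    on a periodic grid, with Courant number c = a*dt/dx in (0, 1].
    The limited slopes keep the total variation from increasing."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope per cell
    face = u + 0.5 * (1 - c) * du                       # second-order face value
    return u - c * (face - np.roll(face, 1))            # upwind flux difference
```

Monitoring the total variation of the solution step by step is exactly the kind of nonlinear stability diagnostic the abstract refers to.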
ERIC Educational Resources Information Center
Martin, Nancy
Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…
A composite scheme for gas dynamics in Lagrangian coordinates
Shashkov, M.; Wendroff, B.
1999-04-10
One cycle of a composite finite difference scheme is defined as several time steps of an oscillatory scheme such as Lax-Wendroff followed by one step of a diffusive scheme such as Lax-Friedrichs. The authors apply this idea to gas dynamics in Lagrangian coordinates. They show numerical results in two dimensions for Noh's infinite strength shock problem and the Sedov blast wave problem, and for several one-dimensional problems including a Riemann problem with a contact discontinuity. For Noh's problem the composite scheme produces a better result than that obtained with a more conventional Lagrangian code.
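The composite cycle (several Lax-Wendroff steps capped by one Lax-Friedrichs step) can be sketched for 1D linear advection; this Eulerian toy is an illustrative stand-in for the Lagrangian gas-dynamics setting of the paper:

```python
import numpy as np

def lax_wendroff(u, c):
    # one Lax-Wendroff step for u_t + a u_x = 0; c = a*dt/dx (Courant number)
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)

def lax_friedrichs(u, c):
    # diffusive partner scheme that damps the oscillations Lax-Wendroff creates
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * c * (up - um)

def composite_cycle(u, c, k=3):
    """One composite cycle: k oscillatory (Lax-Wendroff) steps followed by
    one diffusive (Lax-Friedrichs) step, on a periodic grid."""
    for _ in range(k):
        u = lax_wendroff(u, c)
    return lax_friedrichs(u, c)
```

Both sub-schemes are conservative on a periodic grid, so the composite cycle preserves the total mass exactly while the occasional diffusive step keeps the post-shock oscillations in check.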
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multiresolution representation of heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy, and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across
NASA Astrophysics Data System (ADS)
Lambers, James V.
2016-06-01
The stiffness of systems of ODEs that arise from spatial discretization of PDEs causes difficulties for both explicit and implicit time-stepping methods. Krylov Subspace Spectral (KSS) methods present a balance between the efficiency of explicit methods and the stability of implicit methods by computing each Fourier coefficient from an individualized approximation of the solution operator of the PDE. While KSS methods are explicit methods that exhibit a high order of accuracy and stability similar to that of implicit methods, their efficiency needs to be improved. Here, a detailed asymptotic study is performed in order to rapidly estimate all nodes, thus drastically reducing computational expense without sacrificing accuracy. Extension to PDEs on a disk, through expansions built on Legendre polynomials, is also discussed. Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff nonlinear systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this talk, it is proposed to modify EPI methods by using KSS methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. It is also demonstrated that the convergence of Krylov projection can be significantly accelerated, without noticeable loss of accuracy, through filtering techniques, thus improving performance and scalability even further.
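The expensive kernel in EPI methods, a matrix function applied to a vector, is classically computed by Krylov projection, which the abstract proposes to replace with KSS. A minimal Lanczos-based sketch of that classical kernel for a symmetric matrix (illustrative only, without the restarting or filtering refinements mentioned above):

```python
import numpy as np

def lanczos_expmv(A, v, m, t=1.0):
    """Approximate exp(t*A) @ v for symmetric A via an m-step Lanczos
    (Krylov projection): exp(t*A) v ~= ||v|| * V @ expm(t*T) @ e1, where
    V spans the Krylov subspace and T is the tridiagonal projection of A."""
    n = v.size
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    w_prev, b = np.zeros(n), 0.0
    for j in range(m):
        w = A @ V[:, j] - b * w_prev          # three-term Lanczos recurrence
        a = V[:, j] @ w
        T[j, j] = a
        w = w - a * V[:, j]
        if j + 1 < m:
            b = np.linalg.norm(w)
            T[j, j + 1] = T[j + 1, j] = b
            V[:, j + 1] = w / b
            w_prev = V[:, j]
    wT, QT = np.linalg.eigh(T)                 # small dense exponential of T
    expT_e1 = QT @ (np.exp(t * wT) * QT[0, :])
    return beta * (V @ expT_e1)
```

The number of steps m needed to reach a given accuracy grows with the spectral radius of t*A, which is exactly the grid-dependent cost the abstract's KSS modification aims to bound.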
Lefrancois, Daniel; Wormit, Michael; Dreuw, Andreas
2015-09-28
For the investigation of molecular systems with electronic ground states exhibiting multi-reference character, a spin-flip (SF) version of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator up to third order perturbation theory (SF-ADC(3)) is derived via the intermediate state representation and implemented into our existing ADC computer program adcman. The accuracy of these new SF-ADC(n) approaches is tested on typical situations, in which the ground state acquires multi-reference character, like bond breaking of H{sub 2} and HF, the torsional motion of ethylene, and the excited states of rectangular and square-planar cyclobutadiene. Overall, the results of SF-ADC(n) reveal an accurate description of these systems in comparison with standard multi-reference methods. Thus, the spin-flip versions of ADC are easy-to-use methods for the calculation of “few-reference” systems, which possess a stable single-reference triplet ground state.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and
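The Picard linearization mentioned above can be sketched for a generic nonlinear diffusion equation, a simple stand-in for Richards' equation: at each implicit step the diffusivity is lagged at the previous iterate, so every iteration solves a linear tridiagonal system. The grid, boundary treatment, and diffusivity below are illustrative assumptions:

```python
import numpy as np

def picard_step(u_n, dt, dx, D, tol=1e-10, max_iter=50):
    """One backward-Euler step of u_t = (D(u) u_x)_x on a uniform 1D grid
    with zero-flux ends, using Picard iteration: the nonlinear diffusivity
    D is evaluated at the current iterate, giving a linear system per pass."""
    n = u_n.size
    u = u_n.copy()
    for _ in range(max_iter):
        Dh = D(0.5 * (u[:-1] + u[1:]))        # face diffusivities from iterate
        r = dt / dx**2
        A = np.zeros((n, n))
        for i in range(n):
            wl = Dh[i - 1] if i > 0 else 0.0  # zero-flux boundaries
            wr = Dh[i] if i < n - 1 else 0.0
            A[i, i] = 1.0 + r * (wl + wr)
            if i > 0:
                A[i, i - 1] = -r * wl
            if i < n - 1:
                A[i, i + 1] = -r * wr
        u_new = np.linalg.solve(A, u_n)       # a tridiagonal solver would do
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u
```

The zero-flux discretization conserves the total water content exactly, which is a useful check on the iteration regardless of how many Picard passes are needed.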
Froehle, Bradley Persson, Per-Olof
2014-09-01
We present a high-order accurate scheme for coupled fluid–structure interaction problems. The fluid is discretized using a discontinuous Galerkin method on unstructured tetrahedral meshes, and the structure uses a high-order volumetric continuous Galerkin finite element method. Standard radial basis functions are used for the mesh deformation. The time integration is performed using a partitioned approach based on implicit–explicit Runge–Kutta methods. The resulting scheme fully decouples the implicit solution procedures for the fluid and the solid parts, which we perform using two separate efficient parallel solvers. We demonstrate up to fifth order accuracy in time on a non-trivial test problem, on which we also show that additional subiterations are not required. We solve a benchmark problem of a cantilever beam in a shedding flow, and show good agreement with other results in the literature. Finally, we solve for the flow around a thin membrane at a high angle of attack in both 2D and 3D, and compare with the results obtained with a rigid plate.
NASA Astrophysics Data System (ADS)
Pecha, Petr; Pechova, Emilie
2014-06-01
This article focuses on derivation of an effective algorithm for the fast estimation of cloudshine doses/dose rates induced by a large mixture of radionuclides discharged into the atmosphere. A certain special modification of the classical Gaussian plume approach is proposed for approximation of the near-field dispersion problem. Specifically, the accidental radioactivity release is subdivided into consecutive one-hour Gaussian segments, each driven by a short-term meteorological forecast for the respective hours. Determination of the photon fluence rate from ambient cloud irradiation is coupled to a special decomposition of the Gaussian plume shape into equivalent virtual elliptic disks. It facilitates solution of the formerly used time-consuming 3-D integration and provides advantages with regard to acceleration of the computational process on a local scale. An optimal choice of integration limit is adopted on the basis of the mean free path of γ-photons in the air. An efficient approach is introduced for treatment of a wide range of energetic spectrum of the emitted photons when the usual multi-nuclide approach is replaced by a new multi-group scheme. The algorithm is capable of generating the radiological responses in a large net of spatial nodes. This makes the proposed procedure a proper tool for online data assimilation analysis in the near-field areas. A specific technique for numerical integration is verified on the basis of comparison with a partial analytical solution. Convergence of the finite cloud approximation to the tabulated semi-infinite cloud values for dose conversion factors was validated.
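The classical Gaussian plume segment that the authors modify can be written down directly. The sketch below is the standard ground-reflection formula, not the paper's elliptic-disk decomposition; the names and dispersion parameterizations are illustrative:

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (e.g. Bq/m^3) at (x, y, z)
    for a continuous point release of strength q (Bq/s), wind speed u (m/s)
    along x, and effective release height h (m). sigma_y and sigma_z are
    callables giving dispersion parameters (m) vs. downwind distance x."""
    sy, sz = sigma_y(x), sigma_z(x)
    lateral = np.exp(-y**2 / (2.0 * sy**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sz**2))
                + np.exp(-(z + h)**2 / (2.0 * sz**2)))  # mirror source = ground reflection
    return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical
```

The cloudshine dose then requires integrating the photon fluence from this concentration field over the plume volume, which is the 3-D integration the paper replaces with its elliptic-disk decomposition.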
NASA Astrophysics Data System (ADS)
Paoli, L.
2010-11-01
We consider a discrete mechanical system with a non-trivial mass matrix, subjected to perfect unilateral constraints described by the geometrical inequalities f_α(q) ≥ 0, α ∈ {1, ..., ν} (ν ≥ 1). We assume that the transmission of the velocities at impact is governed by Newton's Law with a coefficient of restitution e = 0 (so that the impact is inelastic). We propose a time-discretization of the second order differential inclusion describing the dynamics, which generalizes the scheme proposed in Paoli (J Differ Equ 211:247-281, 2005) and, for any admissible data, we prove the convergence of approximate motions to a solution of the initial-value problem.
Adaptive Pairing Reversible Watermarking.
Dragoi, Ioan-Catalin; Coltuc, Dinu
2016-05-01
This letter revisits the pairwise reversible watermarking scheme of Ou et al., 2013. An adaptive pixel pairing that considers only pixels with similar prediction errors is introduced. This adaptive approach provides an increased number of pixel pairs where both pixels are embedded and decreases the number of shifted pixels. The adaptive pairwise reversible watermarking outperforms the state-of-the-art low embedding bit-rate schemes proposed so far.
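The prediction-error-expansion machinery that such pairing schemes build on can be sketched on a 1D signal. This is a toy checkerboard-paired version, not the adaptive pairing of the letter; the threshold, the predictor, and the absence of overflow handling are illustrative simplifications:

```python
import numpy as np

def pee_embed(x, bits, T=2):
    """Toy reversible watermarking by prediction-error expansion: odd samples
    are predicted from their (untouched) left even neighbor; errors in
    [-T, T-1] are expanded to carry one bit each, the rest are shifted by T
    so that extraction stays invertible."""
    y = x.astype(np.int64).copy()
    b = iter(bits)
    for i in range(1, len(x), 2):
        e = int(x[i]) - int(x[i - 1])
        if -T <= e <= T - 1:
            y[i] = x[i - 1] + 2 * e + next(b)   # expand: e -> 2e + bit
        elif e >= T:
            y[i] = x[i] + T                     # shift positive errors up
        else:
            y[i] = x[i] - T                     # shift negative errors down
    return y

def pee_extract(y, T=2):
    """Recover the original signal exactly and read back the embedded bits."""
    x = y.copy()
    bits = []
    for i in range(1, len(y), 2):
        e = int(y[i]) - int(y[i - 1])
        if -2 * T <= e <= 2 * T - 1:
            bits.append(e % 2)
            x[i] = y[i - 1] + e // 2
        elif e >= 2 * T:
            x[i] = y[i] - T
        else:
            x[i] = y[i] + T
    return x, bits
```

The adaptive pairing of the letter improves on this kind of baseline by choosing which pixels to pair (those with similar prediction errors), increasing the embedded payload while reducing the number of merely shifted pixels.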
Matching multistage schemes to viscous flow
NASA Astrophysics Data System (ADS)
Kleb, William Leonard
A method to accelerate convergence to steady state by explicit time-marching schemes for the compressible Navier-Stokes equations is presented. The combination of cell-Reynolds-number-based multistage time stepping and local preconditioning makes solving steady-state viscous flow problems competitive with the convergence rates typically associated with implicit methods, without the associated memory penalty. Initially, various methods are investigated to extend the range of multistage schemes to diffusion-dominated cases. It is determined that the Chebyshev polynomials are well suited to serve as amplification factors for these schemes; however, creating a method that can bridge the continuum from convection-dominated to diffusion-dominated regimes proves troublesome, until the Manteuffel family of polynomials is uncovered. This transformation provides a smooth transition between the two extremes, and armed with this information, sets of multistage coefficients are created for a given spatial discretization as a function of cell Reynolds number according to various design criteria. As part of this process, a precise definition for the numerical time step is hammered out, something which, up to this time, has been set via algebraic arguments only. Next come numerical tests of these sets of variable multistage coefficients. To isolate the effects of the variable multistage coefficients, the test case chosen is very simple: circular advection-diffusion. The numerical results support the analysis by demonstrating an order of magnitude improvement in convergence rate for single-grid relaxation and a factor of three for multigrid relaxation. Building upon the success of the scalar case, preconditioning is applied to make the Navier-Stokes system of equations behave more nearly as a single scalar equation. Then, by applying the variable multistage coefficient scheme to a typical boundary-layer flow problem, the results affirm the benefits of local preconditioning.
An Energy Decaying Scheme for Nonlinear Dynamics of Shells
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and it is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
A second-order characteristic line scheme for solving a juvenile-adult model of amphibians.
Deng, Keng; Wang, Yi
2015-01-01
In this paper, we develop a second-order characteristic line scheme for a nonlinear hierarchical juvenile-adult population model of amphibians. The idea of the scheme is not to follow the characteristics from the initial data, but for each time step to find the origins of the grid nodes at the previous time level. Numerical examples are presented to demonstrate the accuracy of the scheme and its capability to handle solutions with singularity.
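The backward-tracing idea (for each grid node, find its characteristic origin at the previous time level rather than follow characteristics forward from the initial data) is the core of semi-Lagrangian stepping. A first-order-interpolation sketch for linear advection is given below; it is illustrative only, since the paper's scheme is second order and applies to a nonlinear structured-population model:

```python
import numpy as np

def semi_lagrangian_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid:
    each grid node x_j is traced back along its characteristic to the
    departure point x_j - a*dt at the previous time level, and u there
    is recovered by linear interpolation between neighboring nodes."""
    n = u.size
    origin = (np.arange(n) - a * dt / dx) % n   # departure points, index units
    j = np.floor(origin).astype(int)
    theta = origin - j                           # fractional position in cell
    return (1 - theta) * u[j % n] + theta * u[(j + 1) % n]
```

Because the update interpolates rather than differences, it remains stable for Courant numbers well above one, which is a key practical attraction of characteristic-line schemes.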
Building a better leapfrog [an algorithm for ensuring time symmetry in any integration scheme]
NASA Technical Reports Server (NTRS)
Hut, Piet; Makino, Jun; Mcmillan, Steve
1995-01-01
In stellar dynamical computer simulations, as well as other types of simulations using particles, time step size is often held constant in order to guarantee a high degree of energy conservation. In many applications, allowing the time step size to change in time can offer a great saving in computational cost, but variable-size time steps usually imply a substantial degradation in energy conservation. We present a "meta-algorithm" for choosing time steps in such a way as to guarantee time symmetry in any integration scheme, thus allowing vastly improved energy conservation for orbital calculations with variable time steps. We apply the algorithm to the familiar leapfrog scheme, and generalize to higher order integration schemes, showing how the stability properties of the fixed-step leapfrog scheme can be extended to higher order, variable-step integrators such as the Hermite method. We illustrate the remarkable properties of these time-symmetric integrators for the case of a highly eccentric elliptical Kepler orbit and discuss applications to more complex problems.
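The meta-algorithm can be sketched as follows: iterate the step size so that it depends symmetrically on the states at both ends of the step. This is a minimal kick-drift-kick version with a fixed number of symmetrization iterations; the step-size criterion h_of is an illustrative placeholder:

```python
def kdk(x, v, acc, dt):
    """One kick-drift-kick leapfrog step for x'' = acc(x)."""
    v_half = v + 0.5 * dt * acc(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * acc(x_new)
    return x_new, v_new

def symmetric_step(x, v, acc, h_of, n_iter=3):
    """Time-symmetric variable-step leapfrog: the step size is iterated
    toward dt = (h(start) + h(end)) / 2, so that stepping forward and then
    backward retraces the same states, keeping the energy error bounded."""
    dt = h_of(x, v)
    for _ in range(n_iter):
        x1, v1 = kdk(x, v, acc, dt)              # trial step with current dt
        dt = 0.5 * (h_of(x, v) + h_of(x1, v1))   # symmetrize the step size
    x_new, v_new = kdk(x, v, acc, dt)
    return x_new, v_new, dt
```

With a naive variable step (dt chosen from the start state only) the energy drifts secularly; the symmetrized step restores the bounded, oscillatory energy error familiar from fixed-step leapfrog.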
Compact integration factor methods for complex domains and adaptive mesh refinement.
Liu, Xinfeng; Nie, Qing
2010-08-10
Implicit integration factor (IIF) methods, a class of efficient semi-implicit temporal schemes, were introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through the examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
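A minimal sketch of the (non-compact) second-order IIF step that cIIF accelerates, assuming the standard IIF2 update u_{n+1} = exp(A h)(u_n + (h/2) f(u_n)) + (h/2) f(u_{n+1}) and a symmetric diffusion matrix. The dense matrix exponential below is for illustration only; the point of cIIF is precisely to avoid forming exponentials of large matrices:

```python
import numpy as np

def matrix_exp_sym(A, h):
    # exp(A*h) for symmetric A via eigendecomposition (fine for a sketch)
    w, Q = np.linalg.eigh(A)
    return (Q * np.exp(w * h)) @ Q.T

def iif2_step(u, f, expAh, h, n_iter=20, tol=1e-12):
    """One IIF2 step for u' = A u + f(u): the stiff linear part is handled
    exactly through exp(A*h), while the local (nonstiff) reaction term is
    implicit and is resolved here by simple fixed-point iteration."""
    rhs = expAh @ (u + 0.5 * h * f(u))
    v = u.copy()
    for _ in range(n_iter):
        v_new = rhs + 0.5 * h * f(v)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v_new
```

Because the diffusion operator is treated exactly, the step size h is not constrained by the dx^2-type explicit stability limit, which is what makes large time steps on AMR grids possible.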
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme are assessed by comparing the computational results with other numerical schemes and experimental data.
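The dual-time-stepping idea used to advance the solution can be sketched for a scalar ODE: each physical step solves the (standard, non-optimized) BDF2 residual equation by marching in pseudo-time until the unsteady residual vanishes. The pseudo-time integrator and its parameters here are illustrative:

```python
def bdf2_dual_time_step(u_n, u_nm1, f, dt, dtau=0.1, n_inner=500, tol=1e-12):
    """One physical BDF2 step for u' = f(u), advanced by dual time stepping:
    the unsteady residual R(u) = f(u) - (3u - 4u_n + u_nm1)/(2 dt) is driven
    to zero by explicit pseudo-time iteration u <- u + dtau * R(u)."""
    u = u_n  # start the inner iteration from the previous physical level
    for _ in range(n_inner):
        R = f(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
        u = u + dtau * R
        if abs(R) < tol:
            break
    return u
```

In a flow solver the inner pseudo-time march is typically accelerated with local time stepping and multigrid; the stability of the outer temporal scheme (standard BDF2 vs. BDF2OPT) is the issue the abstract addresses.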
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of the global climate model (GCM) time step (which also controls how frequently the global and embedded cloud-resolving scales are coupled) is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m^2) and longwave cloud forcing (~5 W/m^2) occur as the scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.
Comparison of Several Dissipation Algorithms for Central Difference Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Radespiel, R.; Turkel, E.
1997-01-01
Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier-Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.
A stable scheme for a nonlinear, multiphase tumor growth model with an elastic membrane.
Chen, Ying; Wise, Steven M; Shenoy, Vivek B; Lowengrub, John S
2014-07-01
In this paper, we extend the 3D multispecies diffuse-interface model of tumor growth, which was derived in Wise et al. (Three-dimensional multispecies nonlinear tumor growth-I: model and numerical method, J. Theor. Biol. 253 (2008) 524-543), and incorporate the effect of a stiff membrane to model tumor growth in a confined microenvironment. We then develop accurate and efficient numerical methods to solve the model. When the membrane is endowed with a surface energy, the model is variational, and the numerical scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is demonstrably shown to be energy stable. Namely, in the absence of cell proliferation and death, the discrete energy is a nonincreasing function of time for any time and space steps. When a simplified model of membrane elastic energy is used, the resulting model is derived analogously to the surface energy case. However, the elastic energy model is actually nonvariational because certain coupling terms are neglected. Nevertheless, a very stable numerical scheme is developed following the strategy used in the surface energy case. 2D and 3D simulations are performed that demonstrate the accuracy of the algorithm and illustrate the shape instabilities and nonlinear effects of membrane elastic forces that may resist or enhance growth of the tumor. Compared with the standard Crank-Nicolson method, the time step can be up to 25 times larger using the new approach.
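The energy-stability property claimed above (discrete energy nonincreasing for any time step) can be illustrated on a much simpler surrogate: a linearly stabilized semi-implicit scheme for the 1D Allen-Cahn equation. This is a sketch of the general idea only, not the paper's multispecies multigrid method; the model, spectral Laplacian, and stabilization constant `S` are all assumptions for the demonstration:

```python
import numpy as np

def allen_cahn_step(u, dt, eps=0.1, S=2.0):
    """One step of a linearly stabilized semi-implicit scheme for
    u_t = eps^2 u_xx - (u^3 - u) on a periodic unit interval.

    The stabilizing term S*(u^{n+1} - u^n) lets the nonlinearity be
    treated explicitly while keeping the discrete energy nonincreasing
    for sufficiently large S."""
    n = u.size
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=1.0 / n)       # angular wavenumbers
    rhs = np.fft.rfft(u + dt * (S * u + u - u**3))        # explicit part
    return np.fft.irfft(rhs / (1.0 + dt * S + dt * eps**2 * k**2), n)

def ac_energy(u, eps=0.1):
    """Discrete Ginzburg-Landau energy used to monitor stability."""
    n = u.size
    ux = (np.roll(u, -1) - u) * n                         # forward difference, dx = 1/n
    return (0.5 * eps**2 * ux**2 + 0.25 * (u**2 - 1.0)**2).sum() / n
```

Driving the scheme from random initial data with a moderate time step, the monitored energy decays, which is the behavior an energy-stable scheme guarantees.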
Designing Adaptive Low Dissipative High Order Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.; Parks, John W. (Technical Monitor)
2002-01-01
Proper control of the numerical dissipation/filter to accurately resolve all relevant multiscales of complex flow problems while still maintaining nonlinear stability and efficiency for long-time numerical integrations poses a great challenge to the design of numerical methods. The required type and amount of numerical dissipation/filter are not only physical problem dependent, but also vary from one flow region to another. This is particularly true for unsteady high-speed shock/shear/boundary-layer/turbulence/acoustics interactions and/or combustion problems since the dynamics of the nonlinear effect of these flows are not well-understood. Even with extensive grid refinement, it is of paramount importance to have proper control on the type and amount of numerical dissipation/filter in regions where it is needed.
The GEMPAK Barnes objective analysis scheme
NASA Technical Reports Server (NTRS)
Koch, S. E.; Desjardins, M.; Kocin, P. J.
1981-01-01
GEMPAK, an interactive computer software system developed for the purpose of assimilating, analyzing, and displaying various conventional and satellite meteorological data types, is discussed. The objective map analysis scheme possesses certain characteristics that allowed it to be adapted to meet the analysis needs of GEMPAK. Those characteristics and the specific adaptation of the scheme to GEMPAK are described. A step-by-step guide for using the GEMPAK Barnes scheme on an interactive computer (in real time) to analyze various types of meteorological datasets is also presented.
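The core of a Barnes objective analysis is a Gaussian-weighted average of scattered observations, refined by successive correction passes with a sharper length scale. A minimal sketch follows (the two-pass structure and the `kappa`/`gamma` parameterization are the textbook formulation; GEMPAK's actual data handling is not reproduced):

```python
import numpy as np

def barnes_analysis(obs_xy, obs_val, grid_xy, kappa, gamma=0.3, passes=2):
    """Successive-correction Barnes analysis of scattered observations
    onto grid points.

    Pass 1: Gaussian-weighted mean with weight exp(-r^2 / kappa).
    Later passes: add back weighted obs-minus-analysis residuals using
    the sharper length scale gamma * kappa."""
    def gwmean(targets, pts, vals, k):
        d2 = ((targets[:, None, :] - pts[None, :, :])**2).sum(axis=-1)
        w = np.exp(-d2 / k)
        return (w @ vals) / w.sum(axis=1)

    g = gwmean(grid_xy, obs_xy, obs_val, kappa)   # analysis on the grid
    a = gwmean(obs_xy, obs_xy, obs_val, kappa)    # analysis at obs sites
    for _ in range(passes - 1):
        resid = obs_val - a
        g = g + gwmean(grid_xy, obs_xy, resid, gamma * kappa)
        a = a + gwmean(obs_xy, obs_xy, resid, gamma * kappa)
    return g
```

Because the weights are normalized, a spatially constant observation field is reproduced exactly, a useful sanity check for any objective analysis.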
Method For Model-Reference Adaptive Control
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1990-01-01
Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: signal-synthesis method and parameter-adaption method. Incorporated into unified theory, which yields more general adaptation scheme.
Stability analysis of intermediate boundary conditions in approximate factorization schemes
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Hafez, M. M.; Gottlieb, D.
1986-01-01
The paper discusses the role of the intermediate boundary condition in the AF2 scheme used by Holst for simulation of the transonic full potential equation. It is shown that the treatment suggested by Holst led to a restriction on the time step and ways to overcome this restriction are suggested. The discussion is based on the theory developed by Gustafsson, Kreiss, and Sundstrom and also on the von Neumann method.
Adaptable DC offset correction
NASA Technical Reports Server (NTRS)
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
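One common DC offset removal scheme that such a system might select is a one-pole tracker. The sketch below is only an illustrative stand-in for a single removal scheme; the patented evaluation and scheme-selection logic is not reproduced, and `alpha` is an assumed smoothing constant:

```python
def remove_dc(samples, alpha=0.05):
    """One-pole adaptive DC blocker: track the offset with an
    exponential moving average and subtract it from each sample."""
    est, out = 0.0, []
    for s in samples:
        est += alpha * (s - est)    # running estimate of the DC component
        out.append(s - est)
    return out
```

Fed a constant offset, the estimator converges geometrically and the output is driven toward zero.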
On Tenth Order Central Spatial Schemes
Sjogreen, B; Yee, H C
2007-05-14
This paper explores the performance of the tenth-order central spatial scheme and derives the accompanying energy-norm stable summation-by-parts (SBP) boundary operators. The objective is to employ the resulting tenth-order spatial differencing with the stable SBP boundary operators as a base scheme in the framework of adaptive numerical dissipation control in high order multistep filter schemes of Yee et al. (1999), Yee and Sjögreen (2002, 2005, 2006, 2007), and Sjögreen and Yee (2004). These schemes were designed for multiscale turbulence flows including strong shock waves and combustion.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
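The convergence-acceleration device mentioned above, a Jameson-type multistage Runge-Kutta update combined with local time stepping, can be sketched in a few lines (a hedged illustration; the stage coefficients are the classic four-stage choice, and the scalar relaxation residual in the usage example is an assumption, not the Euler/Navier-Stokes residual of the paper):

```python
import numpy as np

def multistage_local_step(u, residual, dt_local,
                          alphas=(1.0/4.0, 1.0/3.0, 1.0/2.0, 1.0)):
    """One Jameson-type multistage update with a per-cell local time step.

    For steady-state problems time accuracy is irrelevant, so each cell
    may march at its own stability-limited dt, which is what accelerates
    convergence."""
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt_local * residual(u)   # stages restart from u0
    return u
```

Relaxing a simple linear residual with spatially varying local steps converges rapidly to the steady state: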
NASA Technical Reports Server (NTRS)
2005-01-01
The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g. distributed vectored, hybrid and electric drive propulsion concepts).
Parallel level-set methods on adaptive tree-based grids
NASA Astrophysics Data System (ADS)
Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic
2016-10-01
We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
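The CFL-free property of the semi-Lagrangian update described above can be shown on a uniform periodic 1D grid (a minimal sketch with a constant velocity and linear interpolation; the paper's method works on adaptive Quadtrees/Octrees with its own parallel interpolation scheme):

```python
import numpy as np

def semi_lagrangian_step(phi, x, vel, dt):
    """One semi-Lagrangian step for phi_t + vel * phi_x = 0.

    Each node's departure point is traced back along the velocity and
    phi is interpolated there, so the step is stable for any dt (no CFL
    restriction)."""
    L = x[-1] + (x[1] - x[0])           # period of the uniform grid
    xd = (x - vel * dt) % L             # departure points
    return np.interp(xd, x, phi, period=L)
```

With a time step equal to one full advection period (a CFL number of n), the profile is translated back onto itself exactly.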
Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrids are considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
Nonlinear secret image sharing scheme.
Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been based on Shamir's technique, which relies on linear-combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they are exposed to security threats such as the Tompa-Woll attack. Renvall and Ding proposed a secret sharing technique based on nonlinear-combination polynomial arithmetic to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. To achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with an XOR Boolean operation, define a new variable m, and change the range of the prime p in the sharing procedure. Efficiency and security are evaluated via the embedding capacity and PSNR: the average PSNR is 44.78 dB and the embedding capacity is 1.74t⌈log_2 m⌉ bits per pixel (bpp), respectively.
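The LSB-with-XOR embedding ingredient mentioned above can be sketched as follows (only the embedding/extraction round trip is shown, with an assumed key stream; the paper's modified-LSB details and the (t, n)-threshold sharing polynomial are omitted):

```python
def embed_lsb_xor(pixels, bits, key_bits):
    """Hide message bits in pixel LSBs after XOR-ing with a key stream."""
    out = list(pixels)
    for i, (b, k) in enumerate(zip(bits, key_bits)):
        out[i] = (out[i] & ~1) | (b ^ k)    # replace LSB with masked bit
    return out

def extract_lsb_xor(pixels, n, key_bits):
    """Recover n embedded bits by undoing the XOR mask."""
    return [(p & 1) ^ k for p, k in zip(pixels[:n], key_bits)]
```

Since only the least significant bit of each carrier pixel changes, the distortion per pixel is at most 1, which is why LSB methods retain high PSNR.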
Generalized formulation of a class of explicit and implicit TVD schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.
1985-01-01
A one parameter family of second order explicit and implicit total variation diminishing (TVD) schemes is reformulated so that a simpler and wider group of limiters is included. The resulting scheme can be viewed as a symmetrical algorithm with a variety of numerical dissipation terms that are designed for weak solutions of hyperbolic problems. This is a generalization of Roe and Davis's recent works to a wider class of symmetric schemes other than Lax-Wendroff. The main properties of the present class of schemes are that they can be implicit, and when steady state calculations are sought, the numerical solution is independent of the time step.
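A concrete member of the limiter family discussed above is the minmod limiter. The sketch below applies it to an explicit second-order scheme for linear advection (a generic limited-MUSCL illustration, not Yee's implicit symmetric formulation), and checks the defining TVD property:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree,
    zero at extrema (this keeps the reconstruction TVD)."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advection_step(u, c, lam):
    """One explicit TVD step for u_t + c u_x = 0 with c > 0 and
    lam = dt/dx: first-order upwind plus a limited second-order
    correction, on a periodic grid."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uface = u + 0.5 * (1.0 - c * lam) * slope   # limited value at face i+1/2
    flux = c * uface
    return u - lam * (flux - np.roll(flux, 1))
```

Advecting a square wave, the total variation never grows (no new extrema or spurious oscillations) and the scheme stays conservative.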
Progress with multigrid schemes for hypersonic flow problems
NASA Technical Reports Server (NTRS)
Radespiel, R.; Swanson, R. C.
1991-01-01
Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two level versions of the various multigrid algorithms are applied to the two dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high aspect ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25.
NASA Astrophysics Data System (ADS)
Jeng, Yih Nen; Payne, Uon Jan
1995-05-01
An adaptive TVD limiter, based on a limiter approximating the upper boundary of the TVD range and that of the third-order upwind TVD scheme, is developed in this work. The limiter switches to the compressive limiter near a discontinuity, to the third-order TVD scheme's limiter in the smooth region, and to a weighted averaged scheme in the transition region between smooth and high gradient solutions. Numerical experiments show that the proposed scheme works very well for one-dimensional scalar equation problems but becomes less effective in one- and two-dimensional Euler equation problems. Further study is required for the two-dimensional scalar equation problems.
Patel, N.R.; Sturek, W.B.; Hiromoto, R.
1989-01-01
Parallel Navier-Stokes codes are developed to solve both two-dimensional and three-dimensional flow fields in and around ramjet and nose tip configurations. A multi-zone overlapped grid technique is used to extend an explicit finite-difference method to more complicated geometries. Parallel implementations are developed for execution on both distributed and common-memory multiprocessor architectures. For steady-state solutions, the use of the local time-step method has the inherent advantage of reducing the communications overhead commonly incurred by parallel implementations. Computational results of the codes are given for a series of test problems. The parallel partitioning of computational zones is also discussed.
The basic function scheme of polynomial type
WU, Wang-yi; Lin, Guang
2009-12-01
A new numerical method, the Basic Function Method, is proposed. This method can directly discretize differential operators on unstructured grids. By expanding in basic functions to approximate the exact solution, central and upwind derivative schemes are constructed. Using second-order polynomials as basic functions, and applying flux splitting together with a combination of central and upwind schemes to suppress non-physical oscillations near shock waves, a second-order basic function scheme of polynomial type for computing inviscid compressible flow is constructed in this paper. Numerical results for many typical examples of two-dimensional inviscid compressible transonic and supersonic steady flows show that the scheme achieves high accuracy and high shock resolution. In particular, when combined with an adaptive remeshing technique, these schemes yield satisfactory results.
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM
Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
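The Moving Least Squares interpolation at the heart of the algorithm can be sketched in 1D and at lower order (Phurbas fits third-order polynomials in 3D; the Gaussian weight and bandwidth `h` here are illustrative assumptions):

```python
import numpy as np

def mls_fit(x0, pts, vals, h, degree=2):
    """Moving-least-squares estimate of a field value and derivative at
    x0 from scattered 1D samples: a Gaussian-weighted polynomial fit in
    (x - x0), so coefficient 0 is the value and coefficient 1 the slope
    at x0."""
    w = np.sqrt(np.exp(-((pts - x0) / h) ** 2))      # sqrt of Gaussian weights
    V = np.vander(pts - x0, degree + 1, increasing=True)
    c, *_ = np.linalg.lstsq(V * w[:, None], vals * w, rcond=None)
    return c[0], c[1]
```

Because a degree-2 fit reproduces quadratics exactly regardless of the weights, sampling a quadratic recovers both the value and the derivative to machine precision.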
Wicaksono, D.; Zerkak, O.; Nikitin, K.; Ferroukhi, H.; Chawla, R.
2013-07-01
This paper reports refinement studies on the temporal coupling scheme and time-stepping management of TRACE/S3K, a dynamically coupled code version of the thermal-hydraulics system code TRACE and the 3D core simulator Simulate-3K. The studies were carried out for two test cases, namely a PWR rod ejection accident and the Peach Bottom 2 Turbine Trip Test 2. The solution of the coupled calculation, especially the power peak, proves to be very sensitive to the time-step size with the currently employed conventional operator-splitting. Furthermore, a very small time-step size is necessary to achieve decent accuracy. This degrades the trade-off between accuracy and performance. A simple and computationally cheap implementation of time-projection of power has been shown to be able to improve the convergence of the coupled calculation. This scheme is able to achieve a prescribed accuracy with a larger time-step size. (authors)
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
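The Richardson-extrapolation error estimate that drives the refinement indicator above can be illustrated in its simplest setting, step doubling for a scalar ODE (a generic sketch, not the paper's MacCormack-based spatial estimator; the step-size controller constants are conventional choices):

```python
def richardson_euler_step(f, t, y, h, tol):
    """One accepted explicit-Euler step with Richardson-extrapolation
    error control: one full step is compared against two half steps,
    their difference estimates the local error, and the step size is
    adapted to keep that estimate below tol."""
    while True:
        y1 = y + h * f(t, y)                        # one step of size h
        ym = y + 0.5 * h * f(t, y)                  # two steps of size h/2
        y2 = ym + 0.5 * h * f(t + 0.5 * h, ym)
        err = abs(y2 - y1)                          # ~ local error of y1
        if err <= tol:
            h_next = h * min(2.0, 0.9 * (tol / max(err, 1e-16)) ** 0.5)
            return t + h, 2.0 * y2 - y1, h_next     # extrapolated value
        h *= 0.5                                    # reject and retry
```

Integrating y' = y from y(0) = 1 to t = 1 with a per-step tolerance of 1e-6 recovers e well within the accumulated tolerance.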
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models
NASA Astrophysics Data System (ADS)
Ramli, Huda Mohd.; Esler, J. Gavin
2016-07-01
A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
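The random displacement model recommended above for long time steps is a stochastic differential equation, dX = K'(X) dt + sqrt(2 K(X)) dW, that can be integrated with a simple Euler-Maruyama step (a sketch; the diffusivity profile `K` and its derivative are user-supplied assumptions, here taken homogeneous in the test):

```python
import numpy as np

def rdm_step(x, K, dKdx, dt, rng):
    """Euler-Maruyama update of the random displacement model, the
    large-time diffusion limit of an LPDM: drift by the diffusivity
    gradient, diffuse with variance 2 K dt."""
    dW = np.sqrt(dt) * rng.standard_normal(x.shape)
    return x + dKdx(x) * dt + np.sqrt(2.0 * K(x)) * dW
```

For constant K the ensemble variance after time T should approach 2 K T, which gives a direct statistical check of the integrator.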
An implicit midpoint difference scheme for the fractional Ginzburg-Landau equation
NASA Astrophysics Data System (ADS)
Wang, Pengde; Huang, Chengming
2016-05-01
This paper proposes and analyzes an efficient difference scheme for the nonlinear complex Ginzburg-Landau equation involving the fractional Laplacian. The scheme is based on the implicit midpoint rule for the temporal discretization and a weighted and shifted Grünwald difference operator for the spatial fractional Laplacian. By virtue of a careful analysis of the difference operator, some useful inequalities with respect to suitable fractional Sobolev norms are established. The numerical solution is then shown to be bounded, and convergent in the l_h^2 norm with the optimal order O(τ^2 + h^2), where τ is the time step and h the mesh size. The a priori bound as well as the convergence order holds unconditionally, in the sense that no restriction on the time step τ in terms of the mesh size h needs to be assumed. Numerical tests are performed to validate the theoretical results and the effectiveness of the scheme.
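The temporal rule underlying the scheme, the implicit midpoint rule, can be sketched for a generic ODE system; the fixed-point solver below is an illustrative assumption (the paper couples the rule with its fractional-Laplacian spatial discretization and its own nonlinear solver):

```python
import numpy as np

def implicit_midpoint_step(f, y, dt, iters=50):
    """Implicit midpoint step y_{n+1} = y_n + dt * f((y_n + y_{n+1})/2),
    solved here by fixed-point iteration (contractive for small dt)."""
    ynew = y + dt * f(y)                        # explicit predictor
    for _ in range(iters):
        ynew = y + dt * f(0.5 * (y + ynew))     # midpoint corrector
    return ynew
```

For a skew-symmetric linear system (a rotation) the implicit midpoint rule conserves the 2-norm exactly, the kind of unconditional a priori bound the paper establishes for its scheme.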
An Efficient Variable-Length Data-Compression Scheme
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Kiely, Aaron B.
1996-01-01
Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
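The fixed-Huffman half of the Huffman/ARH choice described above can be sketched with a standard heap-based code construction (the run-length variant and the adaptation logic that selects between the two codes are omitted):

```python
import heapq
import itertools
from collections import Counter

def huffman_code(data):
    """Build a prefix-free Huffman code for a symbol sequence by
    repeatedly merging the two least-frequent subtrees; each merge
    prepends one bit to every codeword in the merged subtrees."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate one-symbol stream
        return {next(iter(freq)): "0"}
    tie = itertools.count()                 # deterministic tie-breaking
    heap = [(n, next(tie), (sym,)) for sym, n in freq.items()]
    heapq.heapify(heap)
    codes = {sym: "" for sym in freq}
    while len(heap) > 1:
        n1, _, syms1 = heapq.heappop(heap)
        n2, _, syms2 = heapq.heappop(heap)
        for s in syms1:
            codes[s] = "0" + codes[s]       # left branch of merged node
        for s in syms2:
            codes[s] = "1" + codes[s]       # right branch
        heapq.heappush(heap, (n1 + n2, next(tie), syms1 + syms2))
    return codes
```

Any valid Huffman code for a given frequency table yields the same optimal total encoded length, which makes that length a tie-break-independent check.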
Use of finite volume schemes for transition simulation
NASA Technical Reports Server (NTRS)
Fenno, Charles C., Jr.; Hassan, H. A.; Streett, Craig L.
1991-01-01
The use of finite-volume methods in the study of spatially and temporally evolving transitional flows over a flat plate is investigated. Schemes are developed with both central and upwind differencing. The compressible Navier-Stokes equations are solved with a Runge-Kutta time stepping scheme. Disturbances are determined using linear theory and superimposed at the inflow boundary. Time accurate integration is then used to allow temporal and spatial disturbance evolution. Characteristic-based boundary conditions are employed. The requirements of using finite-volume algorithms are studied in detail. Special emphasis is placed on difference schemes, grid resolution, and disturbance amplitudes. Moreover, comparisons are made with linear theory for small amplitude disturbances. Both subsonic and supersonic flows are considered, and it is shown that the locations of branch 1 and branch 2 of the neutral stability curve are well predicted, given sufficient resolution.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite-volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C⁰ continuity of the grid across block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of
Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics
NASA Astrophysics Data System (ADS)
Kraczek, B.; Miller, S. T.; Haber, R. B.; Johnson, D. D.
2010-03-01
We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum balances to within machine-precision accuracy over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the Velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals
Homman, Ahmed-Amine; Maillet, Jean-Bernard; Roussel, Julien; Stoltz, Gabriel
2016-01-14
This work presents new parallelizable numerical schemes for the integration of dissipative particle dynamics with energy conservation. So far, no numerical scheme introduced in the literature is able both to correctly preserve the energy over long times and to give small errors on average properties for moderately small time steps, while being straightforwardly parallelizable. We present in this article two new methods, both straightforwardly parallelizable, that correctly preserve the total energy of the system. We illustrate the accuracy and performance of these new schemes on both equilibrium and nonequilibrium parallel simulations. PMID:26772559
Jensen, Benjamin D; Wise, Kristopher E; Odegard, Gregory M
2015-08-01
As the sophistication of reactive force fields for molecular modeling continues to increase, their use and applicability has also expanded, sometimes beyond the scope of their original development. Reax Force Field (ReaxFF), for example, was originally developed to model chemical reactions, but is a promising candidate for modeling fracture because of its ability to treat covalent bond cleavage. Performing reliable simulations of a complex process like fracture, however, requires an understanding of the effects that various modeling parameters have on the behavior of the system. This work assesses the effects of time step size, thermostat algorithm and coupling coefficient, and strain rate on the fracture behavior of three carbon-based materials: graphene, diamond, and a carbon nanotube. It is determined that the simulated stress-strain behavior is relatively independent of the thermostat algorithm, so long as coupling coefficients are kept above a certain threshold. Likewise, the stress-strain response of the materials was also independent of the strain rate, if it is kept below a maximum strain rate. Finally, the mechanical properties of the materials predicted by the Chenoweth C/H/O parameterization for ReaxFF are compared with literature values. Some deficiencies in the Chenoweth C/H/O parameterization for predicting mechanical properties of carbon materials are observed.
Yu, Hong-Zhou; Sen, Jin; Di, Xue-Ying
2013-06-01
By using the equilibrium moisture content-time lag methods of Nelson and Simard and the meteorological element regression method, this paper studied the dynamics of the moisture content of ground surface fine dead fuels under a Larix gmelinii stand on a sunny slope in Daxing'anling with a time interval of one hour, established the corresponding prediction models, and analyzed the prediction errors under different understory densities. The results showed that the prediction methods of fuel moisture content based on a one-hour time step were applicable to the typical Larix gmelinii stand in Daxing'anling. The mean absolute error and the mean relative error of the Simard method were 1.1% and 8.5%, respectively, lower than those of the Nelson method and the meteorological element regression method, and close to those of similar studies. On the same slopes and slope positions, the fuel moisture content varied with different understory densities; thus, it would be necessary to select the appropriate equilibrium moisture content model for a specific regional stand and position, or to establish the fuel moisture content model based on a specific stand, when the dynamics of fuel moisture content are investigated with a time interval of one hour.
NASA Astrophysics Data System (ADS)
Lee, Sanghyun; Salgado, Abner J.
2016-09-01
We present a stability analysis for two different rotational pressure correction schemes with open and traction boundary conditions. First, we provide a stability analysis for a rotational version of the grad-div stabilized scheme of [A. Bonito, J.-L. Guermond, and S. Lee. Modified pressure-correction projection methods: Open boundary and variable time stepping. In Numerical Mathematics and Advanced Applications - ENUMATH 2013, volume 103 of Lecture Notes in Computational Science and Engineering, pages 623-631. Springer, 2015]. This scheme turns out to be unconditionally stable, provided the stabilization parameter is suitably chosen. We also establish a conditional stability result for the boundary correction scheme presented in [E. Bansch. A finite element pressure correction scheme for the Navier-Stokes equations with traction boundary condition. Comput. Methods Appl. Mech. Engrg., 279:198-211, 2014]. These results are shown by employing the equivalence between stabilized gauge Uzawa methods and rotational pressure correction schemes with traction boundary conditions.
NASA Astrophysics Data System (ADS)
Han, Daozhi; Wang, Xiaoming
2015-06-01
We propose a novel second order in time numerical scheme for Cahn-Hilliard-Navier-Stokes phase field model with matched density. The scheme is based on second order convex-splitting for the Cahn-Hilliard equation and pressure-projection for the Navier-Stokes equation. We show that the scheme is mass-conservative, satisfies a modified energy law and is therefore unconditionally stable. Moreover, we prove that the scheme is unconditionally uniquely solvable at each time step by exploring the monotonicity associated with the scheme. Thanks to the simple coupling of the scheme, we design an efficient Picard iteration procedure to further decouple the computation of Cahn-Hilliard equation and Navier-Stokes equation. We implement the scheme by the mixed finite element method. Ample numerical experiments are performed to validate the accuracy and efficiency of the numerical scheme.
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, which sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method adjusts the time step size automatically and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. PMID:27639239
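The step-selection idea can be sketched as follows: pick Δt as the positive root of a quadratic built from the first and second derivatives of the membrane potential, so that the second-order Taylor change in V stays within a tolerance, with a tsr-style cap on step growth. The formula, tolerance, and parameter names here are illustrative, not the authors' exact algorithm:

```python
import math

def quadratic_dt(dV, d2V, tol, dt_prev, growth=1.2, dt_min=1e-4, dt_max=1.0):
    """Choose dt so that |dV*dt + 0.5*d2V*dt^2| ~= tol (illustrative rule)."""
    a, b = 0.5 * abs(d2V), abs(dV)
    if a < 1e-14:                        # nearly linear: first-order rule
        dt = tol / b if b > 1e-14 else dt_max
    else:                                # positive root of a*dt^2 + b*dt - tol = 0
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    dt = min(dt, growth * dt_prev)       # tsr-like cap: limit step-size growth
    return max(dt_min, min(dt, dt_max))
```

During the fast upstroke both derivatives are large, so the root is small and the steps become fine; in the plateau and diastolic phases the derivatives shrink and the steps grow, subject to the growth cap.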
Central difference TVD and TVB schemes for time dependent and steady state problems
NASA Technical Reports Server (NTRS)
Jorgenson, P.; Turkel, E.
1992-01-01
We use central differences to solve the time-dependent Euler equations. The schemes are all advanced using a Runge-Kutta formula in time. Near shocks, a second difference is added as an artificial viscosity. This reduces the scheme to a first-order upwind scheme at shocks. The switch that is used guarantees that the scheme is locally total variation diminishing (TVD). For steady-state problems it is usually advantageous to relax this condition. Then small oscillations do not activate the switches and the convergence to a steady state is improved. To sharpen the shocks, different coefficients are needed for different equations, and so a matrix-valued dissipation is introduced and compared with the scalar viscosity. The connection between this artificial viscosity and flux limiters is shown. Any flux limiter can be used as the basis of a shock detector for an artificial viscosity. We compare the use of the van Leer, van Albada, minmod, superbee, and 'average' flux limiters for this central difference scheme. For time-dependent problems, we need to use a time step small enough that the CFL number is less than one, even though the scheme is linearly stable for larger time steps. Using a total variation bounded (TVB) Runge-Kutta scheme yields minor improvements in the accuracy.
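For reference, the limiters compared above have simple closed forms as functions of the slope ratio r of consecutive solution differences; a sketch of three of them:

```python
def minmod(r):
    """Minmod limiter: phi(r) = max(0, min(1, r))."""
    return max(0.0, min(1.0, r))

def superbee(r):
    """Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_leer(r):
    """van Leer limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + abs(r)) / (1.0 + abs(r))
```

All satisfy phi(1) = 1, which preserves second-order accuracy in smooth regions, and phi(r) = 0 for r <= 0, which switches to first-order dissipation at extrema; used as a shock detector, 1 - phi(r) indicates where the artificial viscosity should act.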
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification, without introducing hybrid semi-structured regions, is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Ranking Schemes in Hybrid Boolean Systems: A New Approach.
ERIC Educational Resources Information Center
Savoy, Jacques
1997-01-01
Suggests a new ranking scheme especially adapted for hypertext environments in order to produce more effective retrieval results and still use Boolean search strategies. Topics include Boolean ranking schemes; single-term indexing and term weighting; fuzzy set theory extension; and citation indexing. (64 references) (Author/LRW)
Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods
NASA Astrophysics Data System (ADS)
Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard
2013-04-01
that we use for this study, the velocity field is discretised using second order triangular elements, which gives second order accuracy of interpolation from grid-nodes to markers. A fourth order Runge-Kutta solver is used to compute marker-trajectories. We reevaluate the velocity field for each of the intermediate steps of the ODE-solver, rendering our advection scheme to be fourth-order accurate in time. We compare two different approaches for performing the thermal diffusion step. In the first, more conventional approach, the energy equation is solved on a static grid. For this grid, we use first-order triangular elements and a higher resolution than for the velocity-grid, to compensate for the lower order elements. The temperature field is transferred between grid-nodes and markers, and a subgrid diffusion correction step (Gerya and Yuen, 2003) is included to account for the different spatial resolutions of the markers and the grid. In the second approach, the energy equation is solved directly on markers. To do this, we compute a constrained Delaunay triangulation, with markers as nodes, at every time step. We wish to resolve the large range of spatial scales of the solution at lowest possible computational cost. In several existing codes this is achieved with dynamically adaptive meshes, which use high resolution in regions with high solution gradients, and vice versa. The numerical scheme used in this study can be extended to include a similar feature, by regenerating the thermal and mechanical grids in the course of computation, adapting them to the temperature and chemistry fields carried by the markers. We present the results of thermochemical convection simulations obtained using the schemes outlined above, as well as the results of the numerical benchmarks commonly used in the geodynamics community. The quality of the solutions, as well as the computational cost of our schemes, are discussed.
NASA Astrophysics Data System (ADS)
Qiu, Zhongfeng; Doglioli, Andrea M.; He, Yijun; Carlotti, Francois
2011-03-01
This paper presents two tests for a Lagrangian model of zooplankton dispersion: numerical schemes and time steps. Firstly, we compared three numerical schemes using idealized circulations. Results show that the precisions of the advanced Adams-Bashforth-Moulton (ABM) method and the Runge-Kutta (RK) method were of the same order, and both were much higher than that of the Euler method. Furthermore, the advanced ABM method is more efficient than the RK method in computational memory requirements and time consumption. We therefore chose the advanced ABM method as the Lagrangian particle-tracking algorithm. Secondly, we performed a sensitivity test for time steps, using outputs of the hydrodynamic model Symphonie. Results show that the time step choice depends on the fluid response time, which is related to the spatial resolution of the velocity fields. The method introduced by Oliveira et al. in 2002 is suitable for choosing time steps of Lagrangian particle-tracking models, at least when only advection is considered.
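The scheme comparison can be illustrated with a two-step Adams-Bashforth predictor / Adams-Moulton corrector against the Euler method on an idealized solid-body rotation, where the exact trajectory stays on a circle. This is a sketch; the paper's "advanced ABM" variant and its flow fields may differ:

```python
import numpy as np

def euler_step(x, u, t, dt):
    return x + dt * u(x, t)

def abm2_step(x, f_prev, u, t, dt):
    """AB2 predictor + AM2 (trapezoidal) corrector for dx/dt = u(x, t)."""
    f_n = u(x, t)
    x_pred = x + dt * (1.5 * f_n - 0.5 * f_prev)       # Adams-Bashforth 2
    x_new = x + 0.5 * dt * (f_n + u(x_pred, t + dt))   # Adams-Moulton 2
    return x_new, f_n

# Idealized solid-body rotation: particles should stay on the unit circle.
u = lambda x, t: np.array([-x[1], x[0]])
dt = 0.05
n = int(round(2 * np.pi / dt))          # one full revolution

x_e = np.array([1.0, 0.0])
for k in range(n):
    x_e = euler_step(x_e, u, k * dt, dt)

x_a = np.array([1.0, 0.0])
f_prev = u(x_a, 0.0)                     # start-up: first step reduces to Heun
for k in range(n):
    x_a, f_prev = abm2_step(x_a, f_prev, u, k * dt, dt)

drift_euler = abs(np.linalg.norm(x_e) - 1.0)   # Euler spirals outward
drift_abm = abs(np.linalg.norm(x_a) - 1.0)     # ABM stays far closer to r = 1
```

Note the efficiency point from the abstract: the predictor-corrector pair needs only two velocity evaluations per step (and one stored past value), versus four evaluations for classical RK4.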
Identification Schemes from Key Encapsulation Mechanisms
NASA Astrophysics Data System (ADS)
Anada, Hiroaki; Arita, Seiko
We propose a generic conversion from a key encapsulation mechanism (KEM) to an identification (ID) scheme. The conversion derives the security for ID schemes against concurrent man-in-the-middle (cMiM) attacks from the security for KEMs against adaptive chosen ciphertext attacks on one-wayness (one-way-CCA2). Then, regarding the derivation as a design principle of ID schemes, we develop a series of concrete one-way-CCA2 secure KEMs. We start with El Gamal KEM and prove it secure against non-adaptive chosen ciphertext attacks on one-wayness (one-way-CCA1) in the standard model. Then, we apply a tag framework with the algebraic trick of Boneh and Boyen to make it one-way-CCA2 secure based on the Gap-CDH assumption. Next, we apply the CHK transformation or a target collision resistant hash function to exit the tag framework. And finally, as it is better to rely on the CDH assumption rather than the Gap-CDH assumption, we apply the Twin DH technique of Cash, Kiltz and Shoup. The application is not “black box” and we do it by making the Twin DH technique compatible with the algebraic trick. The ID schemes obtained from our KEMs show the highest performance in both computational amount and message length compared with previously known ID schemes secure against concurrent man-in-the-middle attacks.
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, with typically fewer than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms with their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. Numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the same order of accuracy as the proposed RD schemes, with rapid convergence over each physical time step, typically fewer than ten Newton iterations.
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
Analysis of triangular C-grid finite volume scheme for shallow water flows
NASA Astrophysics Data System (ADS)
Shirkhani, Hamidreza; Mohammadian, Abdolmajid; Seidou, Ousmane; Qiblawey, Hazim
2015-08-01
In this paper, a dispersion relation analysis is employed to investigate the finite volume triangular C-grid formulation for the two-dimensional shallow-water equations. In addition, two proposed combinations of time-stepping methods with the C-grid spatial discretization are investigated. In the first part of this study, the C-grid spatial discretization scheme is assessed, and in the second part, fully discrete schemes are analyzed. Analysis of the semi-discretized scheme (i.e. only spatial discretization) shows that there is no damping associated with the spatial C-grid scheme, and its phase speed behavior is also acceptable for long and intermediate waves. The analytical dispersion analysis, after considering the effect of time discretization, shows that the Leap-Frog time-stepping technique can improve the phase speed behavior of the numerical method; however, it could not damp the shorter decelerated waves. The Adams-Bashforth technique leads to slower propagation of short and intermediate waves, and it damps those waves with a slower propagating speed. The numerical solutions of various test problems conform to and are in good agreement with the analytical dispersion analysis. They also indicate that the Adams-Bashforth scheme exhibits faster convergence and more accurate results as the spatial and temporal step sizes decrease. However, the Leap-Frog scheme is more stable at higher CFL numbers.
AMR vs. High-Order Schemes: Wavelets as a Guide
Jameson, L.
2000-10-04
The final goal behind any numerical method is to give the smallest wall-clock time for a given final-time error or, conversely, the smallest run-time error for a given wall-clock time. Here a comparison is given between adaptive mesh refinement (AMR) schemes and non-adaptive schemes of higher order. It is shown that, in three-dimensional calculations, for AMR schemes to be competitive the finest scale must be restricted to an extremely, and unrealistically, small percentage of the computational domain.
Recent progress on essentially non-oscillatory shock capturing schemes
NASA Technical Reports Server (NTRS)
Osher, Stanley; Shu, Chi-Wang
1989-01-01
An account is given of the construction of efficient implementations of 'essentially nonoscillatory' (ENO) schemes that approximate systems of hyperbolic conservation laws. ENO schemes use a local adaptive stencil to automatically obtain information from regions of smoothness when the solution develops discontinuities. Approximations employing ENOs can thereby obtain uniformly high accuracy to the very onset of discontinuities, while retaining a sharp and essentially nonoscillatory shock transition. For ease of implementation, ENO schemes applying the adaptive stencil concept to the numerical fluxes and employing a TVD Runge-Kutta-type time discretization are constructed.
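The adaptive-stencil selection at the heart of ENO can be sketched in one dimension: starting from a single point, the stencil is grown one cell at a time toward whichever side has the smaller (undivided) difference, so it automatically avoids crossing discontinuities. The point-value formulation below is a simplified sketch, not the flux-based construction of the paper:

```python
import numpy as np

def undivided_diff(v, j, m):
    """m-th order undivided forward difference of v over indices j..j+m."""
    d = np.asarray(v[j:j + m + 1], dtype=float)
    for _ in range(m):
        d = d[1:] - d[:-1]
    return d[0]

def eno_stencil(v, i, k):
    """Leftmost index of the k-point ENO stencil containing point i:
    at each stage, extend toward the side with the smaller difference."""
    left = i
    for m in range(1, k):
        if left == 0:                    # boundary: can only extend right
            continue
        if left + m >= len(v):           # boundary: can only extend left
            left -= 1
            continue
        if abs(undivided_diff(v, left - 1, m)) < abs(undivided_diff(v, left, m)):
            left -= 1                    # data is smoother on the left
    return left

# A step between indices 4 and 5: stencils near the jump refuse to cross it.
v = [0.0] * 5 + [1.0] * 5
left3 = eno_stencil(v, 3, 3)   # stencil {2, 3, 4}, entirely left of the jump
left5 = eno_stencil(v, 5, 3)   # stencil {5, 6, 7}, entirely right of the jump
```

This smoothest-stencil selection is what lets ENO retain uniformly high accuracy up to the discontinuity while keeping the shock transition sharp and essentially oscillation-free.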
Unconditionally stable time marching scheme for Reynolds stress models
NASA Astrophysics Data System (ADS)
Mor-Yossef, Y.
2014-11-01
Progress toward a stable and efficient numerical treatment for the compressible Favre-Reynolds-averaged Navier-Stokes equations with a Reynolds-stress model (RSM) is presented. The mean-flow and the Reynolds stress model equations are discretized using finite differences on a curvilinear coordinates mesh. The convective flux is approximated by a third-order upwind biased MUSCL scheme. The diffusive flux is approximated using second-order central differencing, based on a full-viscous stencil. The novel time-marching approach relies on decoupled, implicit time integration, that is, the five mean-flow equations are solved separately from the seven Reynolds-stress closure equations. The key idea is the use of the unconditionally positive-convergent implicit scheme (UPC), originally developed for two-equation turbulence models. The extension of the UPC scheme for RSM guarantees the positivity of the normal Reynolds-stress components and the turbulence (specific) dissipation rate for any time step. Thanks to the UPC matrix-free structure and the decoupled approach, the resulting computational scheme is very efficient. Special care is dedicated to maintain the implicit operator compact, involving only nearest neighbor grid points, while fully supporting the larger discretized residual stencil. Results obtained from two- and three-dimensional numerical simulations demonstrate the significant progress achieved in this work toward optimally convergent solution of Reynolds stress models. Furthermore, the scheme is shown to be unconditionally stable and positive.
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
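The key point, that a table of results plus a set of active calls tames seemingly infinite recursion, can be mimicked in Python. This is a loose analogy only (the paper's Scheme implementation uses continuation-passing style; the decorator, the `bottom` value, and the graph example here are our own):

```python
# Toy analogy of tabled execution: memoization that also tracks the set of
# currently active calls, so a re-entrant call with the same argument
# yields a caller-supplied "bottom" value instead of looping forever.

def tabled(bottom):
    def wrap(f):
        table, active = {}, set()
        def g(x):
            if x in table:
                return table[x]
            if x in active:            # same call already in progress
                return bottom
            active.add(x)
            try:
                table[x] = f(g, x)     # f recurses through the table via g
            finally:
                active.discard(x)
            return table[x]
        return g
    return wrap

# Reachability in a cyclic graph: naive recursion would not terminate.
graph = {"a": ["b"], "b": ["c", "a"], "c": []}

@tabled(bottom=False)
def reaches_c(self, node):
    return node == "c" or any(self(n) for n in graph[node])

result = reaches_c("a")  # terminates despite the a -> b -> a cycle
```

The re-entrant call on the cycle returns the bottom value `False`, while the other branch still finds the answer, mirroring how tabling gives programs with circular calls a well-defined result.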
ERIC Educational Resources Information Center
Noakes, Peter
1976-01-01
Describes the operation of the National Electronics Council (NEC) Link Scheme for schools in Great Britain. The service is intended to provide schools with technical assistance, information concerning surplus equipment, and guest speakers drawn from the professional electronics community. (CP)
Dynamic remedial action scheme using online transient stability analysis
NASA Astrophysics Data System (ADS)
Shrestha, Arun
Economic pressure and environmental factors have forced modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost functions to determine appropriate remedial actions. For transient stability calculation, the SIngle Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adaptive to any power system
NASA Technical Reports Server (NTRS)
Banks, D. W.; Hafez, M. M.
1996-01-01
Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points so as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on this adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
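Step (3) is where the redistribution happens. A minimal one-dimensional sketch is to equidistribute a monitor function built from the solution gradient; the monitor, the `floor` parameter, and the equidistribution strategy below are illustrative choices, not the authors' method:

```python
import math

# Hedged sketch: redistribute 1D grid points so a gradient-based monitor
# function is equidistributed, clustering points where the solution is steep.

def adapt_grid(x, u, floor=0.1):
    n = len(x)
    # Piecewise-constant monitor on each interval: |u'| plus a floor so
    # smooth regions still receive some points.
    w = [floor + abs((u[i+1] - u[i]) / (x[i+1] - x[i])) for i in range(n - 1)]
    # Cumulative "error mass" along the grid.
    cum = [0.0]
    for i in range(n - 1):
        cum.append(cum[-1] + w[i] * (x[i+1] - x[i]))
    total = cum[-1]
    # Place new points at equal increments of the cumulative monitor.
    new_x, j = [x[0]], 0
    for k in range(1, n - 1):
        target = total * k / (n - 1)
        while cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        new_x.append(x[j] + frac * (x[j+1] - x[j]))
    new_x.append(x[-1])
    return new_x

# Points cluster near the steep front of a tanh profile at x = 0.5.
x = [i / 20 for i in range(21)]
u = [math.tanh(20 * (xi - 0.5)) for xi in x]
xa = adapt_grid(x, u)
```

Feeding the recomputed solution back through `adapt_grid` corresponds to repeating steps 3 and 4 above.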
Bae, Soo Ya; Hong, Song -You; Lim, Kyo-Sun Sunny
2016-01-01
A method to explicitly calculate the effective radius of hydrometeors in the Weather Research and Forecasting (WRF) double-moment 6-class (WDM6) microphysics scheme is designed to tackle the physical inconsistency in cloud properties between the microphysics and radiation processes. At each model time step, the calculated effective radii of hydrometeors from the WDM6 scheme are linked to the Rapid Radiative Transfer Model for GCMs (RRTMG) scheme to consider the cloud effects in radiative flux calculation. This coupling effect of cloud properties between the WDM6 and RRTMG algorithms is examined for a heavy rainfall event in Korea during 25–27 July 2011, and it is compared to the results from the control simulation in which the effective radius is prescribed as a constant value. It is found that the derived radii of hydrometeors in the WDM6 scheme are generally larger than the prescribed values in the RRTMG scheme. Consequently, shortwave fluxes reaching the ground (SWDOWN) are increased over less cloudy regions, showing a better agreement with a satellite image. The overall distribution of the 24-hour accumulated rainfall is not affected but its amount is changed. In conclusion, a spurious rainfall peak over the Yellow Sea is alleviated, whereas the local maximum in the central part of the peninsula is increased.
Spatio-temporal adaptation algorithm for two-dimensional reacting flows
NASA Astrophysics Data System (ADS)
Pervaiz, Mehtab M.; Baron, Judson R.
1988-01-01
A spatio-temporal adaptive algorithm for solving the unsteady Euler equations with chemical source terms is presented. Quadrilateral cells are used in two spatial dimensions which allow for embedded meshes tracking moving flow features with spatially varying time-steps which are multiples of global minimum time-steps. Blast wave interactions corresponding to a perfect gas (frozen) and a Lighthill dissociating gas (nonequilibrium) are considered for circular arc cascade and 90 degree bend duct geometries.
NASA Astrophysics Data System (ADS)
Jauberteau, F.; Temam, R. M.; Tribbia, J.
2014-08-01
In this paper, we study several multiscale/fractional step schemes for the numerical solution of the rotating shallow water equations with complex topography. We consider the case of periodic boundary conditions (f-plane model). Spatial discretization is obtained using a Fourier spectral Galerkin method. For the schemes presented in this paper we consider two approaches. The first approach (multiscale schemes) is based on topography scale separation, and the numerical time integration is a function of the scales. The second approach is based on a splitting of the operators, and the time integration method is a function of the operator considered (fractional step schemes). The numerical results obtained are compared with the explicit reference scheme (leap-frog scheme). With these multiscale/fractional step schemes the objective is to propose new schemes giving numerical results similar to those obtained using only one uniform fine grid N×N and a time step Δt, but with a CPU time near the CPU time needed when using only one coarse grid N1×N1, N1
Mesh-based enhancement schemes in diffuse optical tomography.
Gu, Xuejun; Xu, Yong; Jiang, Huabei
2003-05-01
Two mesh-based methods, dual meshing and adaptive meshing, are developed to improve the finite element-based reconstruction of both absorption and scattering images of heterogeneous turbid media. The idea of the dual meshing scheme is to use a fine mesh for the solution of photon propagation and a coarse mesh for the inversion of optical property distributions. The adaptive meshing method is accomplished by automatic mesh refinement in the region of heterogeneity during reconstruction. These schemes are validated using tissue-like phantom measurements. Our results demonstrate the capabilities of dual meshing and adaptive meshing for both qualitative and quantitative improvement of optical image reconstruction.
An Analysis of Two Schemes to Numerically Solve the Stochastic Collection Growth Equation.
NASA Astrophysics Data System (ADS)
de Almeida, Fausto Carlos; Dennett, Roger D.
1980-12-01
Two schemes for the numerical solution of the stochastic collection growth equation for cloud drops are compared. Their numerical approaches are different: one (the Berry/Reinhardt method) emphasizes accuracy; the other (the Bleck method) emphasizes speed. Our analysis shows that for applications where the number of solutions (time steps) does not exceed 10^4, the accuracy-oriented scheme is faster. For larger, repetitive applications, such as a comprehensive cloud model, an objective analysis can be made on the merits of exchanging accuracy for computational time.
Adaptive Dynamic Bayesian Networks
Ng, B M
2007-10-26
A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.
Laser adaptive holographic hydrophone
NASA Astrophysics Data System (ADS)
Romashko, R. V.; Kulchin, Yu N.; Bezruk, M. N.; Ermolaev, S. A.
2016-03-01
A new type of laser hydrophone based on dynamic holograms formed in a photorefractive crystal is proposed and studied. It is shown that the use of dynamic holograms makes it unnecessary to use complex optical schemes and systems for electronic stabilisation of the interferometer operating point. This essentially simplifies the scheme of the laser hydrophone while preserving its high sensitivity, which offers the possibility to use it under strong variation of the environment parameters. The laser adaptive holographic hydrophone implemented at present possesses a sensitivity at a level of 3.3 mV Pa^-1 in the frequency range from 1 to 30 kHz.
Adaptive Force Control in Compliant Motion
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.
Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, offer more modest financial involvements both in construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.
Location-adaptive transmission for indoor visible light communication
NASA Astrophysics Data System (ADS)
Wang, Chun-yue; Wang, Lang; Chi, Xue-fen
2016-01-01
A location-adaptive transmission scheme for indoor visible light communication (VLC) systems is proposed in this paper. In this scheme, a symbol error rate (SER) of less than 10^-3 should be guaranteed. The scheme is realized by variable multilevel pulse-position modulation (MPPM), where the transmitters adaptively adjust the number of time slots n in the MPPM symbol according to the position of the receiver. The purpose of our scheme is to achieve the best data rate in different indoor locations. The results show that the location-adaptive transmission scheme based on variable MPPM is superior in the indoor VLC system.
NASA Astrophysics Data System (ADS)
Willkofer, Florian; Wood, Raul R.; Schmid, Josef; von Trentini, Fabian; Ludwig, Ralf
2016-04-01
The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. It builds on the conjoint analysis of a large ensemble of the CRCM5, driven by 50 members of the CanESM2, and the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change on the dynamics of extreme events. A critical point in the entire project is the preparation of a meteorological reference dataset with the required temporal (1-6h) and spatial (500m) resolution to be able to better evaluate hydrological extreme events in mesoscale river basins. For Bavaria a first reference data set (daily, 1km) used for bias-correction of RCM data was created by combining raster based data (E-OBS [1], HYRAS [2], MARS [3]) and interpolated station data using the meteorological interpolation schemes of the hydrological model WaSiM [4]. Apart from the coarse temporal and spatial resolution, this mosaic of different data sources is considered rather inconsistent and hence, not applicable for modeling of hydrological extreme events. Thus, the objective is to create a dataset with hourly data of temperature, precipitation, radiation, relative humidity and wind speed, which is then used for bias-correction of the RCM data being used as driver for hydrological modeling in the river basins. Therefore, daily data is disaggregated to hourly time steps using the 'Method of fragments' approach [5], based on available training stations. The disaggregation chooses fragments of daily values from observed hourly datasets, based on similarities in magnitude and behavior of previous and subsequent events. The choice of a certain reference station (hourly data, provision of fragments) for disaggregating daily station data (application
A semi-Lagrangian finite difference WENO scheme for scalar nonlinear conservation laws
NASA Astrophysics Data System (ADS)
Huang, Chieh-Sen; Arbogast, Todd; Hung, Chen-Hui
2016-10-01
For a nonlinear scalar conservation law in one-space dimension, we develop a locally conservative semi-Lagrangian finite difference scheme based on weighted essentially non-oscillatory reconstructions (SL-WENO). This scheme has the advantages of both WENO and semi-Lagrangian schemes. It is a locally mass conservative finite difference scheme, it is formally high-order accurate in space, it has small time truncation error, and it is essentially non-oscillatory. The scheme is nearly free of a CFL time step stability restriction for linear problems, and it has a relaxed CFL condition for nonlinear problems. The scheme can be considered as an extension of the SL-WENO scheme of Qiu and Shu (2011) [2] developed for linear problems. The new scheme is based on a standard sliding average formulation with the flux function defined using WENO reconstructions of (semi-Lagrangian) characteristic tracings of grid points. To handle nonlinear problems, we use an approximate, locally frozen trace velocity and a flux correction step. A special two-stage WENO reconstruction procedure is developed that is biased to the upstream direction. A Strang splitting algorithm is used for higher-dimensional problems. Numerical results are provided to illustrate the performance of the scheme and verify its formal accuracy. Included are applications to the Vlasov-Poisson and guiding-center models of plasma flow.
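The freedom from the usual CFL restriction comes from tracing characteristics rather than propagating information cell by cell. A bare-bones semi-Lagrangian step for linear advection shows the mechanism; this toy uses linear interpolation at the departure point rather than the paper's conservative WENO reconstruction:

```python
# Minimal semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid:
# trace each grid point back along its characteristic and interpolate.
# Illustrative only; the SL-WENO scheme is conservative and high order.

def semi_lagrangian_step(u, a, dt, dx):
    n = len(u)
    shift = a * dt / dx          # may exceed 1: no CFL restriction
    new = []
    for i in range(n):
        s = i - shift            # departure point in index coordinates
        j = int(s // 1)          # lower neighbour (floor)
        frac = s - j
        new.append((1 - frac) * u[j % n] + frac * u[(j + 1) % n])
    return new

# Advect a unit pulse by 2.5 cells in a single step (CFL number 2.5).
u = [1.0 if i == 4 else 0.0 for i in range(10)]
v = semi_lagrangian_step(u, a=1.0, dt=2.5, dx=1.0)
```

Even with a CFL number of 2.5 the step is stable and mass-conserving here; the paper's contribution is retaining this property with high-order, non-oscillatory reconstructions and nonlinear flux functions.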
Yang, Tah-teh; Agrawal, A.K.; Kapat, J.S.
1993-06-01
Under contracted work with the Morgantown Energy Technology Center, Clemson University, the prime contractor, and General Electric (GE) and CRSS, the subcontractors, made a comprehensive study in the first phase of research to investigate the technology barriers of integrating a coal gasification process with a hot gas cleanup scheme and the state-of-the-art industrial gas turbine, the GE MS-7001F. This effort focused on (1) establishing analytical tools necessary for modeling combustion phenomena and emissions in gas turbine combustors operating on multiple-species coal gas, (2) estimating the overall performance of the GE MS-7001F combined cycle plant, (3) evaluating material issues in the hot gas path, (4) examining the flow and temperature fields when air extraction takes place at both the compressor exit and at the manhole adjacent to the combustor, and (5) examining the combustion/cooling limitations of such a gas turbine by using 3-D numerical simulation of a MS-7001F combustor operated with gasified coal. In the second phase of this contract, a 35% cold flow model was built similar to GE's MS-7001F gas turbine for mapping the flow region between the compressor exit and the expander inlet. The model included sufficient details, such as the combustor's transition pieces, the fuel nozzles, and the supporting struts. Four cases were studied: the first with a baseline flow field of a GE 7001F without air extraction; the second with a GE 7001F with air extraction; and the third and fourth with a GE 7001F using a Griffith diffuser to replace the straight-wall diffuser and operating without and with air extraction, respectively.
Discrete unified gas kinetic scheme for all Knudsen number flows: low-speed isothermal case.
Guo, Zhaoli; Xu, Kun; Wang, Ruijie
2013-09-01
Based on the Boltzmann-BGK (Bhatnagar-Gross-Krook) equation, in this paper a discrete unified gas kinetic scheme (DUGKS) is developed for low-speed isothermal flows. The DUGKS is a finite-volume scheme with the discretization of particle velocity space. After the introduction of two auxiliary distribution functions with the inclusion of collision effect, the DUGKS becomes a fully explicit scheme for the update of distribution function. Furthermore, the scheme is an asymptotic preserving method, where the time step is only determined by the Courant-Friedrichs-Lewy condition in the continuum limit. Numerical results demonstrate that accurate solutions in both continuum and rarefied flow regimes can be obtained from the current DUGKS. The comparison between the DUGKS and the well-defined lattice Boltzmann equation method (D2Q9) is presented as well.
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, defected-lattice code, topological subsystem code and 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in a large code size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
Implicit scheme for Maxwell equations solution in case of flat 3D domains
NASA Astrophysics Data System (ADS)
Boronina, Marina; Vshivkov, Vitaly
2016-02-01
We present a new finite-difference scheme for the solution of Maxwell's equations in three-dimensional domains with different scales in different directions. The stability condition of the standard leap-frog scheme requires decreasing the time step as the minimal spatial step decreases, which depends on the minimal domain size. We overcome the conditional stability by modifying the standard scheme, adding implicitness in the direction of the smallest size. The new scheme satisfies the Gauss law for the electric and magnetic fields in the finite-difference sense. The approximation order, the maintenance of the wave amplitude and propagation speed, and the invariance of wave propagation with respect to the angle to the coordinate axes are analyzed.
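The motivation can be quantified with the standard leap-frog (Yee-type) stability bound, in which one small mesh spacing dominates the admissible time step. The formula below is the usual CFL estimate for the explicit scheme, shown here to illustrate the problem the implicit treatment avoids; it is not taken from the paper:

```python
import math

# Standard 3D leap-frog stability bound: dt <= 1 / (c * sqrt(sum 1/h_i^2)).
# In a flat domain the smallest spacing dz dominates and drives dt down,
# which is what adding implicitness in that direction avoids.

def leapfrog_dt(c, dx, dy, dz):
    """Largest stable explicit time step for the 3D leap-frog scheme."""
    return 1.0 / (c * math.sqrt(1.0/dx**2 + 1.0/dy**2 + 1.0/dz**2))

dt_cube = leapfrog_dt(1.0, 1.0, 1.0, 1.0)   # isotropic cell
dt_flat = leapfrog_dt(1.0, 1.0, 1.0, 0.01)  # flat cell: dz = dx/100
```

For the flat cell the explicit time step shrinks by roughly the aspect ratio, even though the physics of interest resolves fine on the larger dx, dy spacings.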
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
A numerical scheme for ionizing shock waves
Aslan, Necdet (E-mail: naslan@yeditepe.edu.tr); Mond, Michael
2005-12-10
A two-dimensional (2D) visual computer code to solve steady state (SS) or transient shock problems including partially ionizing plasma is presented. Since the flows considered are hypersonic and the resulting temperatures are high, the plasma is partially ionized. Hence the plasma constituents are electrons, ions and neutral atoms. It is assumed that all the above species are in thermal equilibrium, namely, that they all have the same temperature. The ionization degree is calculated from the Saha equation as a function of electron density and pressure by means of a nonlinear Newton-type root-finding algorithm. The code utilizes a wave model and a numerical fluctuation distribution (FD) scheme that runs on structured or unstructured triangular meshes. This scheme is based on evaluating the mesh-averaged fluctuations arising from a number of waves and distributing them to the nodes of these meshes in an upwind manner. The physical properties (directions, strengths, etc.) of these wave patterns are obtained by a new wave model, ION-A, developed from the eigensystem of the flux Jacobian matrices. Since the equation of state (EOS) which is used to close the conservation laws includes electronic effects, it is a nonlinear function and must be inverted by iteration to determine the ionization degree as a function of density and temperature. For the time advancement, the scheme utilizes a multi-stage Runge-Kutta (RK) algorithm with time steps carefully evaluated from the maximum possible propagation speed in the solution domain. The code runs interactively with the user and allows the user to create different meshes, to apply different initial and boundary conditions, and to see changes of desired physical quantities in the form of color and vector graphics. The details of the visual properties of the code have been published before (see [N. Aslan, A visual fluctuation splitting scheme for magneto-hydrodynamics with a new sonic fix and Euler limit, J. Comput. Phys. 197 (2004) 1
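The Newton-type root finding mentioned for the Saha equation can be sketched on a simplified scalar form. Writing the Saha relation as alpha^2/(1 - alpha) = K, where K lumps the temperature and density dependence, a Newton iteration recovers the ionization degree alpha; this is an illustrative stand-in, not the code's actual EOS inversion:

```python
# Newton-Raphson iteration for the ionization degree alpha from a
# Saha-type relation alpha^2 / (1 - alpha) = K (K encodes T, n dependence).
# Illustrative sketch only; the paper's EOS inversion is more involved.

def ionization_degree(K, tol=1e-12):
    alpha = 0.5                                  # initial guess inside (0, 1)
    for _ in range(100):
        f = alpha * alpha - K * (1.0 - alpha)    # f(alpha) = 0 at the root
        df = 2.0 * alpha + K
        step = f / df
        alpha -= step
        if abs(step) < tol:
            break
    return alpha

a = ionization_degree(1.0)  # root of a^2 + a - 1 = 0
```

Small K (cold, dense) gives a weakly ionized plasma and large K (hot, dilute) gives alpha near 1, matching the physical limits.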
ERIC Educational Resources Information Center
Wheeler, Mary L.
1994-01-01
Discusses the study of identification codes and check-digit schemes as a way to show students a practical application of mathematics and introduce them to coding theory. Examples include postal service money orders, parcel tracking numbers, ISBN codes, bank identification numbers, and UPC codes. (MKR)
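The UPC example is easy to make concrete: the 11 data digits are weighted 3, 1, 3, 1, ... from the left, and the twelfth digit is chosen so the weighted sum becomes a multiple of 10. A short sketch:

```python
# UPC-A check digit: weight the 11 data digits 3,1,3,1,... and pick the
# check digit that brings the weighted sum to a multiple of 10.

def upc_check_digit(digits11):
    s = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(digits11))
    return (10 - s % 10) % 10

def upc_valid(digits12):
    return upc_check_digit(digits12[:11]) == digits12[11]

# The 11 data digits 0-36000-29145 carry check digit 2.
code = [0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]
```

This scheme catches every single-digit error, which is the kind of exercise the article suggests for students.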
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation gives a thorough overview of current solution schemes and their shortcomings, develops the constrained time stepping algorithms, and illustrates the results of several numerical experiments which benchmark the new procedure.
Hybrid subband image coding scheme using DWT, DPCM, and ADPCM
NASA Astrophysics Data System (ADS)
Oh, Kyung-Seak; Kim, Sung-Jin; Joo, Chang-Bok
1998-07-01
Subband image coding techniques have received considerable attention as powerful source coding methods. These techniques provide good compression results, and can also be extended for progressive transmission and multiresolution analysis. In this paper, we propose a hybrid subband image coding scheme using the DWT (discrete wavelet transform), DPCM (differential pulse code modulation), and ADPCM (adaptive DPCM). This scheme produces simple but significant image compression and transmission coding.
An expert system based intelligent control scheme for space bioreactors
NASA Technical Reports Server (NTRS)
San, Ka-Yiu
1988-01-01
An expert system based intelligent control scheme is being developed for the effective control and full automation of bioreactor systems in space. The scheme developed will have the capability to capture information from various resources including heuristic information from process researchers and operators. The knowledge base of the expert system should contain enough expertise to perform on-line system identification and thus be able to adapt the controllers accordingly with minimal human supervision.
NASA Astrophysics Data System (ADS)
Tetsu, Hiroyuki; Nakamoto, Taishi
2016-03-01
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton-Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas & Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
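The trade-off between full Newton-Raphson iteration and a single linearization can be seen on a scalar stiff relaxation equation, a toy stand-in for the coupled RHD system; the model equation, step size, and function names below are ours, not the paper's test problems:

```python
# One backward-Euler step of the stiff relaxation du/dt = -u^4, solved
# (a) by full Newton-Raphson iteration (NR) and (b) by a single
# linearization of the u^4 term (LIN-style). Purely illustrative.

def step_newton(u0, dt, iters=50):
    # Solve u - u0 + dt*u^4 = 0 for u by Newton-Raphson.
    u = u0
    for _ in range(iters):
        f = u - u0 + dt * u**4
        df = 1.0 + 4.0 * dt * u**3
        u -= f / df
    return u

def step_lin(u0, dt):
    # Linearize u^4 around u0: u^4 ~ u0^4 + 4*u0^3*(u - u0).
    # One linear solve per step, no iteration.
    return (u0 + 3.0 * dt * u0**4) / (1.0 + 4.0 * dt * u0**3)

u_nr = step_newton(1.0, dt=10.0)   # satisfies the implicit equation exactly
u_li = step_lin(1.0, dt=10.0)      # cheaper, but a large-dt approximation
```

For small dt the two answers coincide; for the large step here they diverge, which mirrors the paper's finding that the admissible time step size depends on the scheme chosen.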
Elliott, C.J.; Fisher, H.; Pepin, J.; Gillmann, R.
1996-07-01
Traffic classification techniques were evaluated using data from a 1993 investigation of traffic flow patterns on I-20 in Georgia. First, we improved the data base by sifting through it, checking questionable events against the original video and removing and/or repairing them. We used this data base to quantitatively critique the performance of a classification method known as Scheme F. As a context for improving the approach, we show in this paper that Scheme F can be represented as a McCulloch-Pitts neural network, or as an equivalent decomposition of the plane. We found that Scheme F, among other things, severely misrepresents the number of vehicles in Class 3 by labeling them as Class 2. After discussing the basic classification problem in terms of what is measured and what the desired prediction goal is, we set forth desirable characteristics of the classification scheme and describe a recurrent neural network system that partitions the high-dimensional space into bins for each axle separation. The collection of bin numbers, one for each of the axle separations, specifies a region in the axle space called a hyper-bin. All the vehicles counted that have the same set of bin numbers are in the same hyper-bin. The probability of the occurrence of a particular class in that hyper-bin is the relative frequency with which that class occurs in that set of bin numbers. This type of algorithm produces classification results that are much more balanced and uniform with respect to Classes 2 and 3 and Class 10. In particular, the cancellation of classification errors that occurs is for many applications the ideal classification scenario. The neural network results are presented in the form of a primary classification network and a reclassification network, along with the performance matrices for each.
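The hyper-bin counting idea can be sketched in a few lines: quantize each axle separation into a bin, key on the tuple of bin indices, and classify by the most frequent class recorded for that hyper-bin. The bin width, class labels, and toy data below are assumptions for illustration, not the study's calibration.

```python
from collections import Counter, defaultdict

BIN_WIDTH = 2.0  # feet per axle-separation bin (assumed)

def hyper_bin(separations):
    """Tuple of bin indices, one per axle separation: the hyper-bin key."""
    return tuple(int(s // BIN_WIDTH) for s in separations)

class HyperBinClassifier:
    def __init__(self):
        self.counts = defaultdict(Counter)   # hyper-bin -> class frequencies

    def fit(self, vehicles, classes):
        for seps, cls in zip(vehicles, classes):
            self.counts[hyper_bin(seps)][cls] += 1

    def predict(self, seps):
        c = self.counts.get(hyper_bin(seps))
        return c.most_common(1)[0][0] if c else None  # most frequent class

clf = HyperBinClassifier()
clf.fit([[9.5], [9.8], [10.2], [21.0, 4.3]],          # toy axle separations (ft)
        ["Class 2", "Class 2", "Class 3", "Class 9"])
```

Prediction for an unseen vehicle simply looks up the relative class frequencies in its hyper-bin, exactly as described in the abstract.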
High Order Finite Volume Nonlinear Schemes for the Boltzmann Transport Equation
Bihari, B L; Brown, P N
2005-03-29
The authors apply the nonlinear WENO (Weighted Essentially Nonoscillatory) scheme to the spatial discretization of the Boltzmann Transport Equation modeling linear particle transport. The method is a finite volume scheme which ensures not only conservation, but also provides for a more natural handling of boundary conditions, material properties and source terms, as well as an easier parallel implementation and post-processing. It is nonlinear in the sense that the stencil depends on the solution at each time step or iteration level. By biasing the gradient calculation towards the stencil with smaller derivatives, the scheme eliminates the Gibbs phenomenon with oscillations of size O(1) and reduces them to O(h^r), where h is the mesh size and r is the order of accuracy. The current implementation is three-dimensional, generalized for unequally spaced meshes, fully parallelized, and up to fifth-order accurate (WENO5) in space. For unsteady problems, the resulting nonlinear spatial discretization yields a set of ODEs in time, which in turn is solved via high-order implicit time-stepping with error control. For the steady-state case, the authors solve the resulting nonlinear system, typically by Newton-Krylov iterations. Several numerical examples are presented to demonstrate the accuracy, non-oscillatory nature and efficiency of these high-order methods, in comparison with other fixed-stencil schemes.
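A minimal sketch of the nonlinear stencil-biasing ingredient, shown here in its third-order 1D form (WENO3) rather than the authors' WENO5 transport discretization: smoothness indicators shift weight away from any candidate stencil that crosses a discontinuity.

```python
def weno3_face(vm, v0, vp, eps=1e-6):
    """Third-order WENO value at the right face of cell i, given the cell
    averages (v_{i-1}, v_i, v_{i+1}), for a left-to-right (upwind) flow."""
    p0 = -0.5 * vm + 1.5 * v0            # candidate from stencil {i-1, i}
    p1 = 0.5 * v0 + 0.5 * vp             # candidate from stencil {i, i+1}
    b0 = (v0 - vm) ** 2                  # smoothness indicators
    b1 = (vp - v0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2   # nonlinear weights; ideal weights
    a1 = (2.0 / 3.0) / (eps + b1) ** 2   # are 1/3 and 2/3 in smooth regions
    return (a0 * p0 + a1 * p1) / (a0 + a1)
```

For smooth (here linear) data both candidates agree and the reconstruction is exact; across a jump, the stencil containing the jump receives a vanishing weight, which is precisely how the O(1) Gibbs oscillations are suppressed.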
Comparison of SMAC, PISO, and iterative time-advancing schemes for unsteady flows
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook; Benson, Thomas J.
1991-01-01
Calculations of unsteady flows using a simplified marker and cell (SMAC) scheme, a pressure-implicit splitting of operators (PISO) scheme, and an iterative time-advancing scheme (ITA) are presented. A partial differential equation for incremental pressure is used in each time-advancing scheme. Example flows considered are a polar cavity flow starting from rest and self-sustained oscillating flows over a circular and a square cylinder. For a large time step size, the SMAC and ITA schemes are more strongly convergent and yield more accurate results than PISO. The SMAC scheme is the most efficient computationally. For a small time step size, the three time-advancing schemes yield equally accurate Strouhal numbers. The capability of each time-advancing scheme to accurately resolve unsteady flows is attributed to the use of a new pressure-correction algorithm that strongly enforces the conservation of mass. The numerical results show that the low frequency of the vortex shedding is caused by the growth time of each vortex shed into the wake region.
Development of advanced control schemes for telerobot manipulators
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1991-01-01
To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented, which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and the quaternion representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternion rates is also verified using computer simulation. The motion control of the slave arm is then examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme, are proposed for controlling the motion of the slave arm end-effector. Development of the Cartesian adaptive control scheme is presented, and some preliminary results of the remaining control schemes are presented and discussed.
Simulation of transients in natural gas pipelines using hybrid TVD schemes
NASA Astrophysics Data System (ADS)
Zhou, Junyang; Adewumi, Michael A.
2000-02-01
The mathematical model describing transients in natural gas pipelines constitutes a non-homogeneous system of non-linear hyperbolic conservation laws. The time-splitting approach is adopted to solve this non-homogeneous hyperbolic model. At each time step, the non-homogeneous hyperbolic model is split into a homogeneous hyperbolic model and an ODE operator. An explicit 5-point, second-order-accurate total variation diminishing (TVD) scheme is formulated to solve the homogeneous system of non-linear hyperbolic conservation laws. Special attention is given to the treatment of boundary conditions at the inlet and the outlet of the pipeline. Hybrid methods involving the Godunov scheme (TVD/Godunov scheme), the Roe scheme (TVD/Roe scheme), or the Lax-Wendroff scheme (TVD/LW scheme) are used to implement an appropriate boundary-handling strategy. A severe condition involving instantaneous closure of a downstream valve is used to test the efficacy of the new schemes. The results produced by the TVD/Roe and TVD/Godunov schemes are excellent and comparable with each other, while the TVD/LW scheme performs reasonably well. The TVD/Roe scheme is applied to simulate the transport of a fast transient in a short pipe and the propagation of a slow transient in a long transmission pipeline. For the first example, the scheme produces excellent results, which capture and maintain the integrity of the wave fronts even after a long time. For the second example, comparisons of computational results are made using different discretizing parameters.
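The splitting described above can be sketched on a scalar model problem, u_t + a u_x = -lam*u: each step advances the homogeneous hyperbolic part (first-order upwind here, standing in for the 5-point TVD scheme) and then the source-term ODE. All details below are illustrative assumptions, not the pipeline model.

```python
import math

def split_step(u, a, lam, dt, dx):
    """One split step for u_t + a u_x = -lam*u on a periodic grid (a > 0)."""
    c = a * dt / dx
    # 1) homogeneous hyperbolic operator: first-order upwind advection
    u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    # 2) ODE operator: exact integration of du/dt = -lam*u over dt
    decay = math.exp(-lam * dt)
    return [ui * decay for ui in u]

# Spatially uniform data isolates the source operator: after N steps the
# solution should follow the exact decay exp(-lam * N * dt).
u = [1.0] * 8
for _ in range(4):
    u = split_step(u, a=1.0, lam=0.3, dt=0.5, dx=1.0)
```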
Multi-dimensional ENO schemes for general geometries
NASA Technical Reports Server (NTRS)
Harten, Ami; Chakravarthy, Sukumar R.
1991-01-01
A class of ENO schemes is presented for the numerical solution of multidimensional hyperbolic systems of conservation laws on structured and unstructured grids. This is a class of shock-capturing schemes which are designed to compute cell-averages to high order accuracy. The ENO scheme is composed of a piecewise-polynomial reconstruction of the solution from its given cell-averages, approximate evolution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is based on an adaptive selection of the stencil for each cell so as to avoid spurious oscillations near discontinuities while achieving high order of accuracy away from them.
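The adaptive stencil selection can be sketched in 1D using Newton divided differences as the smoothness measure: the stencil grows toward the side with the smaller divided difference, so it stays clear of a jump. This is a generic textbook sketch, not the authors' multidimensional reconstruction.

```python
def divided_difference(v, lo, hi):
    """Newton divided difference of cell values v[lo..hi] (unit spacing)."""
    if lo == hi:
        return v[lo]
    return (divided_difference(v, lo + 1, hi)
            - divided_difference(v, lo, hi - 1)) / (hi - lo)

def eno_stencil(v, i, width):
    """Indices of the ENO stencil of `width` cells grown around cell i."""
    left, right = i, i
    for _ in range(width - 1):
        dd_left = divided_difference(v, left - 1, right)    # widen leftward
        dd_right = divided_difference(v, left, right + 1)   # widen rightward
        if abs(dd_left) <= abs(dd_right):
            left -= 1
        else:
            right += 1
    return list(range(left, right + 1))

# Around a step, the selected stencils stay on one side of the jump:
v = [0.0] * 4 + [1.0] * 4
stencil_lo = eno_stencil(v, 3, 3)   # cell just left of the jump
stencil_hi = eno_stencil(v, 4, 3)   # cell just right of the jump
```

Because neither selected stencil crosses the discontinuity, the resulting polynomial reconstruction is free of the spurious oscillations mentioned above.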
Implicit schemes and parallel computing in unstructured grid CFD
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
The development of implicit schemes for obtaining steady-state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. Next, the development of explicit and implicit schemes to compute unsteady flows on unstructured grids is discussed. The issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion are then outlined. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.
Popov, Pavel P.; Pope, Stephen B.
2014-01-15
This work addresses the issue of particle mass consistency in Large Eddy Simulation/Probability Density Function (LES/PDF) methods for turbulent reactive flows. Numerical schemes for the implicit and explicit enforcement of particle mass consistency (PMC) are introduced, and their performance is examined in a representative LES/PDF application, namely the Sandia–Sydney Bluff-Body flame HM1. A new combination of interpolation schemes for the velocity and scalar fields is found to satisfy PMC better than multilinear and fourth-order Lagrangian interpolation. A second-order accurate time-stepping scheme for stochastic differential equations (SDEs) is found to improve PMC relative to Euler time stepping; this is the first time a second-order scheme has been found to be beneficial, compared to a first-order scheme, in an LES/PDF application. An explicit corrective velocity scheme for PMC enforcement is introduced, and its parameters are optimized to enforce a specified PMC criterion with minimal corrective velocity magnitudes.
NASA Astrophysics Data System (ADS)
Kwon, Deuk-Chul; Song, Mi-Young; Yoon, Jung-Sik
2014-10-01
It is well known that the dielectric relaxation scheme (DRS) can efficiently overcome the limitation on the simulation time step for fluid transport simulations of high-density plasma discharges: by imitating the realistic, physical shielding process of an electric field perturbation, the DRS overcomes the dielectric limitation on the time step. However, the electric field is obtained by assuming the drift-diffusion approximation. Although the drift-diffusion expressions are good approximations for both the electrons and ions at high pressure, the inertial term cannot be neglected in the ion momentum equation at low pressure. Therefore, in this work, we developed an extended DRS by introducing an effective electric field. To compare the extended DRS with the previous method, two-dimensional fluid simulations of inductively coupled plasma discharges were performed. This work was supported by the Industrial Strategic Technology Development Program (10041637, Development of Dry Etch System for 10 nm class SADP Process) funded by the Ministry of Knowledge Economy (MKE, Korea).
A comparison of SPH schemes for the compressible Euler equations
NASA Astrophysics Data System (ADS)
Puri, Kunal; Ramachandran, Prabhu
2014-01-01
We review the current state-of-the-art Smoothed Particle Hydrodynamics (SPH) schemes for the compressible Euler equations. We identify three prototypical schemes and apply them to a suite of test problems in one and two dimensions. The schemes are, in order, standard SPH with the adaptive density kernel estimation (ADKE) technique introduced by Sigalotti et al. (2008) [44], the variational SPH formulation of Price (2012) [33] (referred to herein as the MPM scheme), and the Godunov-type SPH (GSPH) scheme of Inutsuka (2002) [12]. The tests investigate the accuracy of the inviscid discretizations, shock-capturing ability, and particle settling behavior. The schemes are found to produce nearly identical results for the 1D shock tube problems, with the MPM and GSPH schemes being the most robust. The ADKE scheme requires parameter values which must be tuned to the problem at hand. We propose the addition of an artificial heating term to the GSPH scheme to eliminate unphysical spikes in the thermal energy at the contact discontinuity. The resulting modification is simple and can be readily incorporated in existing codes. In two dimensions, the differences between the schemes are more evident, with the quality of results determined by the particle distribution. In particular, the ADKE scheme shows signs of particle clumping and irregular motion for the 2D strong shock and Sedov point explosion tests. The noise in particle data is linked with the particle distribution, which remains regular for the Hamiltonian formulations (MPM and GSPH) and becomes irregular for the ADKE scheme. In the interest of reproducibility, we make available our implementation of the algorithms and test problems discussed in this work.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
Yasas, F M
1977-01-01
In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report the problems they faced in fulfilling their roles. From these grass-roots experiences, they made an analysis of the job, determining what knowledge, attitudes and skills it required. Analyses of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. Participants also learned how to channel the problems they encountered through government structures for policy-making and decisions. The tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community through village profiles; to produce indigenous teaching materials; and to practice role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers were written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries. PMID:12265562
Implicit-explicit Godunov schemes for unsteady gas dynamics
Collins, J.P.
1992-12-31
Hybrid implicit-explicit schemes are developed for Eulerian hydrodynamics in one and two space dimensions. The hybridization is a continuous switch and operates on each characteristic field separately. The explicit scheme is a version of the second-order Godunov scheme; the implicit method is only first-order accurate in time but leads to second-order accurate steady states. This methodology is developed for linear advection, nonlinear scalar problems, hyperbolic constant-coefficient systems, and gas dynamics. Truncation error and stability analyses are done for the linear cases. This implicit-explicit strategy is intended for problems with spatially or temporally localized stiffness in wave speeds. By stiffness we mean that the high-speed modes contain very little energy, yet they determine the explicit time step through the CFL condition. For hydrodynamics, the main examples are nearly incompressible flow, flows with embedded boundary layers, and magnetohydrodynamics; the latter two examples are not treated here. Several numerical results are presented to demonstrate this method. These include stable numerical shocks at very high CFL numbers, one-dimensional flow in a duct, and low-Mach-number shear layers.
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
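The positivity point can be demonstrated on a 1D model problem: an unlimited second-order scheme (Lax-Wendroff here, standing in for any unlimited polynomial-based scheme) undershoots a non-negative step, while a minmod-limited MUSCL scheme stays non-negative. This is a generic illustration, not UTOPIA or the universal flux-limiter.

```python
def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def step_lw(u, c):
    """Unlimited Lax-Wendroff step for u_t + u_x = 0, periodic grid."""
    n = len(u)
    return [u[i] - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
            + 0.5 * c * c * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
            for i in range(n)]

def step_muscl(u, c):
    """Minmod-limited MUSCL step (upwind flux, speed a = 1), periodic grid."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    f = [u[i] + 0.5 * s[i] for i in range(n)]      # limited face values
    return [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]

u0 = [1.0 if 8 <= i < 16 else 0.0 for i in range(32)]  # non-negative step
u_lw, u_tvd, lw_min = u0[:], u0[:], 0.0
for _ in range(20):
    u_lw = step_lw(u_lw, 0.5)
    lw_min = min(lw_min, min(u_lw))
    u_tvd = step_muscl(u_tvd, 0.5)
```

Both schemes are conservative, but only the limited one keeps the solution bounded below by zero, which is the one-dimensional version of the positivity property discussed above.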
Foglietta, J.H.
1999-07-01
A new LNG cycle has been developed for base load liquefaction facilities. This new design offers a different technical and economic solution, comparable in efficiency with the classical technologies. The new LNG scheme could offer attractive business opportunities to oil and gas companies that are trying to find paths to monetize gas sources more effectively, particularly for remote or offshore locations where smaller-scale LNG facilities might be applicable. This design also offers an alternative route to classic LNG projects, as well as alternative fuel sources. Conceived to offer simplicity and access to industry-standard equipment, this design is a hybrid, combining a standard refrigeration system with turboexpander technology.
Fluidity: A New Adaptive, Unstructured Mesh Geodynamics Model
NASA Astrophysics Data System (ADS)
Davies, D. R.; Wilson, C. R.; Kramer, S. C.; Piggott, M. D.; Le Voci, G.; Collins, G. S.
2010-05-01
Fluidity is a sophisticated fluid dynamics package, which has been developed by the Applied Modelling and Computation Group (AMCG) at Imperial College London. It has many environmental applications, from nuclear reactor safety to simulations of ocean circulation. Fluidity has state-of-the-art features that place it at the forefront of computational fluid dynamics. The code: Dynamically optimizes the mesh, providing increased resolution in areas of dynamic importance, thus allowing for accurate simulations across a range of length scales, within a single model. Uses an unstructured mesh, which enables the representation of complex geometries. It also enhances mesh optimization using anisotropic elements, which are particularly useful for resolving one-dimensional flow features and material interfaces. Uses implicit solvers thus allowing for large time-steps with minimal loss of accuracy. PETSc provides some of these, though multigrid preconditioning methods have been developed in-house. Is optimized to run on parallel processors and has the ability to perform parallel mesh adaptivity - the subdomains used in parallel computing automatically adjust themselves to balance the computational load on each processor, as the mesh evolves. Has a novel interface-preserving advection scheme for maintaining sharp interfaces between multiple materials / components. Has an automated test-bed for verification of model developments. Such attributes provide an extremely powerful base on which to build a new geodynamical model. Incorporating into Fluidity the necessary physics and numerical technology for geodynamical flows is an ongoing task, though progress, to date, includes: Development and implementation of parallel, scalable solvers for Stokes flow, which can handle sharp, orders of magnitude variations in viscosity and, significantly, an anisotropic viscosity tensor. Modification of the multi-material interface-preserving scheme to allow for tracking of chemical
Numerical issues for coupling biological models with isopycnal mixing schemes
NASA Astrophysics Data System (ADS)
Gnanadesikan, Anand
1999-01-01
In regions of sloping isopycnals, isopycnal mixing acting in conjunction with biological cycling can produce patterns in the nutrient field which have negative values of tracer in light water and unrealistically large values of tracer in dense water. Under certain circumstances, these patterns can start to grow unstably. This paper discusses why such behavior occurs. Using a simple four-box model, it demonstrates that the instability appears when the isopycnal slopes exceed the grid aspect ratio (Δz/Δx). In contrast to other well-known instabilities of the CFL type, this instability does not depend on the time step or time-stepping scheme. Instead it arises from a fundamental incompatibility between two requirements for isopycnal mixing schemes, namely that they should produce no net flux of passive tracer across an isopycnal and everywhere reduce tracer extrema. In order to guarantee no net flux of tracer across an isopycnal, some upgradient fluxes across certain parts of an isopycnal are required to balance downgradient fluxes across other parts of the isopycnal. However, these upgradient fluxes can cause local maxima in the nutrient field to become self-reinforcing. Although this is less of a problem in larger domains, there is still a strong tendency for isopycnal mixing to overconcentrate tracer in the dense water. The introduction of eddy-induced advection is shown to be capable of counteracting the upgradient fluxes of nutrient which cause problems, stabilizing the solution. The issue is not simply a numerical curiosity. When used in a GCM, different parameterizations of eddy mixing result in noticeably different distributions of nutrient and large differences in biological production. While much of this is attributable to differences in convection and circulation, the numerical errors described here may also play an important role in runs with isopycnal mixing alone.
NASA Astrophysics Data System (ADS)
Etemadsaeed, Leila; Moczo, Peter; Kristek, Jozef; Ansari, Anooshiravan; Kristekova, Miriam
2016-10-01
We investigate the problem of finite-difference approximations of the velocity-stress formulation of the equation of motion and constitutive law on the staggered grid (SG) and collocated grid (CG). For approximating the first spatial and temporal derivatives, we use three approaches: Taylor expansion (TE), dispersion-relation preserving (DRP), and combined TE-DRP. The TE and DRP approaches represent two fundamental extremes. We derive useful formulae for DRP and TE-DRP approximations. We compare accuracy of the numerical wavenumbers and numerical frequencies of the basic TE, DRP and TE-DRP approximations. Based on the developed approximations, we construct and numerically investigate 14 basic TE, DRP and TE-DRP finite-difference schemes on SG and CG. We find that (1) the TE second-order in time, TE fourth-order in space, 2-point in time, 4-point in space SG scheme (that is the standard (2,4) VS SG scheme, say TE-2-4-2-4-SG) is the best scheme (of the 14 investigated) for large fractions of the maximum possible time step, or, in other words, in a homogeneous medium; (2) the TE second-order in time, combined TE-DRP second-order in space, 2-point in time, 4-point in space SG scheme (say TE-DRP-2-2-2-4-SG) is the best scheme for small fractions of the maximum possible time step, or, in other words, in models with large velocity contrasts if uniform spatial grid spacing and time step are used. The practical conclusion is that in computer codes based on standard TE-2-4-2-4-SG, it is enough to redefine the values of the approximation coefficients by those of TE-DRP-2-2-2-4-SG for increasing accuracy of modelling in models with large velocity contrast between rock and sediments.
Recent developments in shock-capturing schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1991-01-01
The development of the shock capturing methodology is reviewed, paying special attention to the increasing nonlinearity in its design and its relation to interpolation. It is well-known that higher-order approximations to a discontinuous function generate spurious oscillations near the discontinuity (Gibbs phenomenon). Unlike standard finite-difference methods which use a fixed stencil, modern shock capturing schemes use an adaptive stencil which is selected according to the local smoothness of the solution. Near discontinuities this technique automatically switches to one-sided approximations, thus avoiding the use of discontinuous data which brings about spurious oscillations.
Implicit approximate-factorization schemes for the low-frequency transonic equation
NASA Technical Reports Server (NTRS)
Ballhaus, W. F.; Steger, J. L.
1975-01-01
Two- and three-level implicit finite-difference algorithms for the low-frequency transonic small-disturbance equation are constructed using approximate factorization techniques. The schemes are unconditionally stable for the model linear problem. For nonlinear mixed flows, the schemes maintain stability by the use of conservatively switched difference operators, for which stability is maintained only if shock propagation is restricted to less than one spatial grid point per time step. The shock-capturing properties of the schemes were studied for various shock motions that might be encountered in problems of engineering interest. Computed results for a model airfoil problem that produces a flow field similar to that about a helicopter rotor in forward flight show the development of a shock wave and its subsequent propagation upstream off the front of the airfoil.
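The approximate-factorization idea can be sketched in its simplest setting, the 2D heat equation: the implicit operator (I - dt*(Dxx + Dyy)) is factored into (I - dt*Dxx)(I - dt*Dyy), reducing each step to tridiagonal sweeps along rows and then columns. This is a generic textbook sketch, not the transonic algorithm of this paper.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are sub/main/super diagonals."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One factored implicit step for the 2D heat equation, r = dt/dx^2,
    homogeneous Dirichlet boundaries just outside the square grid."""
    n = len(u)
    solve = lambda d: thomas([-r] * n, [1.0 + 2.0 * r] * n, [-r] * n, d)
    u = [solve([u[i][j] for i in range(n)]) for j in range(n)]  # x-sweep
    u = [list(row) for row in zip(*u)]                          # transpose back
    return [solve(row) for row in u]                            # y-sweep

# A time step far beyond the explicit stability limit (r = 0.25) stays stable:
u1 = adi_step([[1.0] * 8 for _ in range(8)], r=10.0)
u2 = adi_step(u1, r=10.0)
```

The factored scheme remains bounded and decays monotonically even at r = 10, illustrating the unconditional stability that approximate factorization is designed to retain for the model linear problem.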
A practical numerical scheme for the ternary Cahn-Hilliard system with a logarithmic free energy
NASA Astrophysics Data System (ADS)
Jeong, Darae; Kim, Junseok
2016-01-01
We consider a practically stable finite difference method for the ternary Cahn-Hilliard system with a logarithmic free energy modeling the phase separation of a three-component mixture. The numerical scheme is based on a linear unconditionally gradient stable scheme by Eyre and is solved by an efficient and accurate multigrid method. The logarithmic function has a singularity at zero. To remove the singularity, we regularize the function near zero by using a quadratic polynomial approximation. We perform a convergence test, a linear stability analysis, and a robustness test of the ternary Cahn-Hilliard equation. We observe that our numerical solutions are convergent, consistent with the exact solutions of linear stability analysis, and stable with practically large enough time steps. Using the proposed numerical scheme, we also study the temporal evolution of morphology patterns during phase separation in one-, two-, and three-dimensional spaces.
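The regularization described above can be sketched directly: below a small cutoff delta, ln(c) is replaced by its quadratic Taylor polynomial about delta, which keeps the value and first derivative continuous and the free-energy term finite at c = 0. The cutoff value is an illustrative assumption.

```python
import math

DELTA = 1e-2  # regularization cutoff (illustrative)

def log_reg(c, delta=DELTA):
    """ln(c) for c >= delta; quadratic Taylor extension about delta below."""
    if c >= delta:
        return math.log(c)
    t = c - delta
    return math.log(delta) + t / delta - t * t / (2.0 * delta * delta)

def free_energy(c):
    """Regularized logarithmic free-energy term c * ln(c)."""
    return c * log_reg(c)
```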
A central-upwind scheme with artificial viscosity for shallow-water flows in channels
NASA Astrophysics Data System (ADS)
Hernandez-Duenas, Gerardo; Beljadid, Abdelaziz
2016-10-01
We develop a new high-resolution, non-oscillatory semi-discrete central-upwind scheme with artificial viscosity for shallow-water flows in channels with arbitrary geometry and variable topography. The artificial viscosity, proposed as an alternative to nonlinear limiters, allows us to use high-resolution reconstructions at a low computational cost. The scheme recognizes steady states at rest, in which a delicate balance between the source terms and flux gradients occurs. This balance in irregular geometries is more complex than that taking place in channels with vertical walls, so a suitable technique is applied to properly take into account the effects induced by the geometry. Incorporating the contributions of the artificial viscosity and an appropriate time step restriction, the scheme preserves the positivity of the water depth. A description of the proposed scheme and its main properties, as well as proofs of its well-balanced and positivity-preserving character, are provided. Our numerical experiments confirm the stability, well-balancing, positivity preservation and high resolution of the proposed method. Comparisons of numerical solutions obtained with the proposed scheme and experimental data show good agreement. This scheme can be applied to shallow-water flows in channels with complex geometry and variable bed topography.
SEAWAT 2000: modelling unstable flow and sensitivity to discretization levels and numerical schemes
NASA Astrophysics Data System (ADS)
Al-Maktoumi, A.; Lockington, D. A.; Volker, R. E.
2007-09-01
A systematic analysis shows how results from the finite difference code SEAWAT are sensitive to the choice of grid dimension, time step, and numerical scheme for unstable flow problems. Guidelines to assist in selecting appropriate combinations of these factors are suggested. While the SEAWAT code has been tested for a wide range of problems, the sensitivity of results to spatial and temporal discretization levels and numerical schemes has not been studied in detail for unstable flow problems. Here, the Elder-Voss-Souza benchmark problem has been used to systematically explore the sensitivity of SEAWAT output to spatio-temporal resolution and numerical solver choice. Grid spacings of 0.38% and 0.60% of the total domain length and depth, respectively, are found to be fine enough to deliver results with acceptable accuracy for most of the numerical schemes when the Courant number (Cr) is 0.1. All numerical solvers produced similar results for extremely fine meshes; however, some schemes converged faster than others. For instance, the 3rd-order total-variation-diminishing (TVD3) scheme converged at a much coarser mesh than the standard finite-difference method (SFDM) with upstream weighting (UW). The sensitivity of the results to the Cr number depends on the numerical scheme, as expected.
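The discretization guidance above can be turned into a small helper: given target grid fractions of the domain and a Courant number, recover the grid spacings and the time step from Cr = v*dt/min(dx, dz). The domain size and characteristic velocity below are illustrative, not the benchmark's calibrated values.

```python
def discretization(domain_length, domain_depth, v_char, cr_target,
                   frac_x=0.0038, frac_z=0.0060):
    """Grid spacings as fractions of the domain, and the time step that
    keeps the Courant number Cr = v*dt/min(dx, dz) at the target value."""
    dx = frac_x * domain_length     # e.g. 0.38% of the domain length
    dz = frac_z * domain_depth      # e.g. 0.60% of the domain depth
    dt = cr_target * min(dx, dz) / v_char
    return dx, dz, dt

# Illustrative numbers only (an Elder-type 600 m x 150 m box, slow flow):
dx, dz, dt = discretization(600.0, 150.0, v_char=1e-5, cr_target=0.1)
```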
Decentralized digital adaptive control of robot motion
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A decentralized model reference adaptive scheme is developed for digital control of robot manipulators. The adaptation laws are derived using hyperstability theory, which guarantees asymptotic trajectory tracking despite gross robot parameter variations. The control scheme has a decentralized structure in the sense that each local controller receives only its joint angle measurement to produce its joint torque. The independent joint controllers have simple structures and can be programmed using a very simple and computationally fast algorithm. As a result, the scheme is suitable for real-time motion control.
Discrete unified gas kinetic scheme for all Knudsen number flows. II. Thermal compressible case.
Guo, Zhaoli; Wang, Ruijie; Xu, Kun
2015-03-01
This paper is a continuation of our work on the development of multiscale numerical scheme from low-speed isothermal flow to compressible flows at high Mach numbers. In our earlier work [Z. L. Guo et al., Phys. Rev. E 88, 033305 (2013)], a discrete unified gas kinetic scheme (DUGKS) was developed for low-speed flows in which the Mach number is small so that the flow is nearly incompressible. In the current work, we extend the scheme to compressible flows with the inclusion of thermal effect and shock discontinuity based on the gas kinetic Shakhov model. This method is an explicit finite-volume scheme with the coupling of particle transport and collision in the flux evaluation at a cell interface. As a result, the time step of the method is not limited by the particle collision time. With the variation of the ratio between the time step and particle collision time, the scheme is an asymptotic preserving (AP) method, where both the Chapman-Enskog expansion for the Navier-Stokes solution in the continuum regime and the free transport mechanism in the rarefied limit can be precisely recovered with a second-order accuracy in both space and time. The DUGKS is an idealized multiscale method for all Knudsen number flow simulations. A number of numerical tests, including the shock structure problem, the Sod tube problem in a whole range of degree of rarefaction, and the two-dimensional Riemann problem in both continuum and rarefied regimes, are performed to validate the scheme. Comparisons with the results of direct simulation Monte Carlo (DSMC) and other benchmark data demonstrate that the DUGKS is a reliable and efficient method for multiscale flow problems. PMID:25871252
An improved SPH scheme for cosmological simulations
NASA Astrophysics Data System (ADS)
Beck, A. M.; Murante, G.; Arth, A.; Remus, R.-S.; Teklu, A. F.; Donnert, J. M. F.; Planelles, S.; Beck, M. C.; Förster, P.; Imgrund, M.; Dolag, K.; Borgani, S.
2016-01-01
We present an implementation of smoothed particle hydrodynamics (SPH) with improved accuracy for simulations of galaxies and the large-scale structure. In particular, we implement and test a large set of SPH improvements in the developer version of GADGET-3. We use the Wendland kernel functions, a particle wake-up time-step limiting mechanism and a time-dependent scheme for artificial viscosity including high-order gradient computation and a shear-flow limiter. Additionally, we include a novel prescription for time-dependent artificial conduction, which corrects for gravitationally induced pressure gradients and improves the SPH performance in capturing the development of gas-dynamical instabilities. We extensively test our new implementation in a wide range of standard hydrodynamical tests including weak and strong shocks as well as shear flows, turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas clouds. We employ all modifications jointly; however, when necessary we study the performance of individual code modules. We approximate hydrodynamical states more accurately and with significantly less noise than standard GADGET-SPH. Furthermore, the new implementation promotes the mixing of entropy between different fluid phases, also within cosmological simulations. Finally, we study the performance of the hydrodynamical solver in the context of radiative galaxy formation and non-radiative galaxy cluster formation. We find galactic discs to be colder and more extended, and galaxy clusters to show entropy cores instead of steadily declining entropy profiles. In summary, we demonstrate that our improved SPH implementation overcomes most of the undesirable limitations of standard GADGET-SPH, thus becoming the core of an efficient code for large cosmological simulations.
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
A novel on-board switch scheme based on OFDM
NASA Astrophysics Data System (ADS)
Dang, Jun-Hong; Zhou, Po; Cao, Zhi-Gang
2009-12-01
OFDM is a new focus of research in satellite communication. This paper proposes a novel OFDM-based on-board switching technology that offers high spectral efficiency and adaptability and supports the integration of terrestrial wireless and satellite communication systems. It then introduces a realization scheme for this technology and identifies the main problems to be solved, along with their possible solutions.
An adaptive routing scheme in scale-free networks
NASA Astrophysics Data System (ADS)
Ben Haddou, Nora; Ez-Zahraouy, Hamid; Benyoussef, Abdelilah
2015-05-01
We propose an optimized form of a traffic-aware routing protocol that combines structural and local dynamic properties of the network to determine the path followed between the source and destination of a packet. Instead of using the shortest path, we incorporate the "efficient path" into the protocol and propose a new parameter α that controls the contribution of the queue to the routing process. Compared to the original model, the capacity of the network can be improved by more than a factor of two when the optimal conditions of our model are used. Moreover, adjusting the proposed parameter allows the travel time to be minimized.
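A minimal sketch of such queue-aware path selection, assuming an additive node cost of the form degree + α·queue (the exact cost function and queue weighting in the paper may differ), can be written as a weighted Dijkstra search:

```python
import heapq

def route(adj, queues, src, dst, alpha=1.0):
    """Find a path minimizing the accumulated node cost
    degree(v) + alpha * queue(v). Illustrative only; not the
    paper's exact 'efficient path' formulation."""
    def cost(n):
        return len(adj[n]) + alpha * queues.get(n, 0)

    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale priority-queue entry
        for v in adj[u]:
            nd = d + cost(v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

With alpha = 0 the search reduces to a purely structural criterion; increasing alpha steers packets away from congested nodes.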
NASA Technical Reports Server (NTRS)
Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James
2014-01-01
A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, a consideration of the numerical error in the time-cumulative value of ET(sub a) is discussed besides the existing consideration of that error over individual time steps as done in the previous study. This cumulative ET(sub a) is more relevant to the final crop yield.
Comparison of thresholding schemes for visible light communication using mobile-phone image sensor.
Liu, Yang; Chow, Chi-Wai; Liang, Kevin; Chen, Hung-Yu; Hsu, Chin-Wei; Chen, Chung-Yen; Chen, Shih-Hao
2016-02-01
Based on the rolling-shutter effect of the complementary metal-oxide-semiconductor (CMOS) image sensor, bright and dark fringes can be observed in each received frame. By demodulating the bright and dark fringes, the visible light communication (VLC) data logic can be retrieved. However, demodulating the fringes is challenging because of the high data fluctuation and large extinction-ratio (ER) variation within each frame, so a proper thresholding scheme is needed. In this work, we propose and experimentally compare three thresholding schemes: third-order polynomial curve fitting, an iterative scheme and a quick adaptive scheme. The evaluation of these three thresholding schemes is performed.
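The polynomial curve-fitting idea can be sketched as follows: fit a low-order polynomial to the column brightness profile and use it as a position-dependent decision threshold, compensating for the slow brightness variation across the frame. This is a simplified stand-in for the scheme evaluated in the paper:

```python
import numpy as np

def polyfit_threshold(profile, order=3):
    """Demodulate a rolling-shutter fringe profile by thresholding it
    against a fitted polynomial baseline. Simplified illustration."""
    x = np.arange(len(profile))
    # Least-squares polynomial fit approximates ramp + fringe mean
    baseline = np.polyval(np.polyfit(x, profile, order), x)
    return (profile > baseline).astype(int)
```

On a synthetic profile of square-wave fringes riding on a linear brightness ramp, the fitted baseline tracks the ramp plus the fringe mean, so comparing each sample against it recovers the bit pattern.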
Direct adaptive control of manipulators in Cartesian space
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
NASA Astrophysics Data System (ADS)
Gillibrand, P. A.; Herzfeld, M.
2016-05-01
We present a flux-form semi-Lagrangian (FFSL) advection scheme designed for offline scalar transport simulation with coastal ocean models using curvilinear horizontal coordinates. The scheme conserves mass, overcoming problems of mass conservation typically experienced with offline transport models, and permits long time steps (relative to the Courant number) to be used by the offline model. These attributes make the method attractive for offline simulation of tracers in biogeochemical or sediment transport models using archived flow fields from hydrodynamic models. We describe the FFSL scheme, and test it on two idealised domains and one real domain, the Great Barrier Reef in Australia. For comparison, we also include simulations using a traditional semi-Lagrangian advection scheme for the offline simulations. We compare tracer distributions predicted by the offline FFSL transport scheme with those predicted by the original hydrodynamic model, assess the conservation of mass in all cases and contrast the computational efficiency of the schemes. We find that the FFSL scheme produced very good agreement with the distributions of tracer predicted by the hydrodynamic model, and conserved mass with an error of a fraction of one percent. In terms of computational speed, the FFSL scheme was comparable with the semi-Lagrangian method and an order of magnitude faster than the full hydrodynamic model, even when the latter ran in parallel on multiple cores. The FFSL scheme presented here therefore offers a viable mass-conserving and computationally-efficient alternative to traditional semi-Lagrangian schemes for offline scalar transport simulation in coastal models.
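The mass-conservation property of flux-form schemes comes from updating each cell with interface fluxes that cancel in pairs when summed over the domain. A first-order upwind sketch on a periodic 1D grid (far simpler than the FFSL scheme of the paper, which is higher order and stable at large Courant numbers) illustrates the idea:

```python
def flux_form_step(q, u, dt, dx):
    """One conservative update q_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})
    with first-order upwind fluxes, assuming constant u > 0 and
    periodic boundaries. Illustrative stand-in for the FFSL update."""
    n = len(q)
    flux = [u * q[i] for i in range(n)]  # F_{i+1/2} = u * q_i (upwind)
    # flux[i-1] wraps to flux[-1] at i = 0, giving periodic boundaries
    return [q[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Because each interface flux is added to one cell and subtracted from its neighbor, the total mass sum(q)·dx is unchanged to round-off, which is the property the offline FFSL scheme exploits.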
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Adaptive control of robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
The author presents a novel approach to adaptive control of manipulators to achieve trajectory tracking by the joint angles. The central concept in this approach is the utilization of the manipulator inverse as a feedforward controller. The desired trajectory is applied as an input to the feedforward controller which behaves as the inverse of the manipulator at any operating point; the controller output is used as the driving torque for the manipulator. The controller gains are then updated by an adaptation algorithm derived from MRAC (model reference adaptive control) theory to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal are also used to enhance closed-loop stability and to achieve faster adaptation. The proposed control scheme is computationally fast and does not require a priori knowledge of the complex dynamic model or the parameter values of the manipulator or the payload.
Hunt, R.L.
1983-12-27
An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the damper frame.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
Adaptive Implicit Non-Equilibrium Radiation Diffusion
Philip, Bobby; Wang, Zhen; Berrill, Mark A; Rodriguez Rodriguez, Manuel; Pernice, Michael
2013-01-01
We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term time integration of stiff multiphysics systems, local-control-theory-based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
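A generic local-error step-size controller of this kind can be sketched with step doubling: take one full step and two half steps, use their difference as a local error estimate, and grow or shrink dt to hold the estimate near a tolerance. The forward-Euler integrator below is only a stand-in for the implicit multiphysics integrator of the paper:

```python
import math

def integrate_adaptive(f, y0, t0, t1, tol=1e-4, dt=0.1):
    """Integrate y' = f(t, y) with forward Euler and step-doubling
    local error control. Illustrative sketch, not the paper's scheme."""
    t, y, n_accepted = t0, y0, 0
    while t < t1:
        dt = min(dt, t1 - t)
        y_full = y + dt * f(t, y)                        # one full step
        y_half = y + 0.5 * dt * f(t, y)                  # two half steps
        y_two = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
        err = abs(y_two - y_full)                        # local error estimate
        if err <= tol:                                   # accept the step
            t, y, n_accepted = t + dt, y_two, n_accepted + 1
        # Euler's local error scales as dt^2, hence the square root;
        # cap growth to avoid runaway step sizes after tiny errors.
        dt *= min(5.0, 0.9 * math.sqrt(tol / max(err, 1e-14)))
    return y, n_accepted
```

Rejected steps simply shrink dt and retry, so the global number of steps adapts to the stiffness of the problem rather than being fixed in advance.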
Meal-feeding scheme: twenty years of research in Brazil.
Bazotte, R B; Batista, M R; Curi, R
2000-09-01
Naomi Shinomiya Hell was the first researcher to investigate the physiological adaptations to a meal-feeding scheme (MFS) in Brazil. Over a period of 20 years, from 1979 to 1999, Naomi's group determined the physiological and metabolic adaptations induced by this feeding scheme in rats. The group showed that such adaptations persist even when the MFS is combined with moderate exercise training and with the performance of a session of intense physical effort. The metabolic changes induced by the feeding training were discriminated from those caused by the effective fasting period. Naomi made an important contribution to the understanding of the MFS, but much still has to be done. One crucial question remains to be satisfactorily answered: what is the ideal control for the MFS? PMID:10973128
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each time step are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton's methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the generalized minimal residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
Solving Chemical Master Equations by an Adaptive Wavelet Method
Jahnke, Tobias; Galan, Steffen
2008-09-01
Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.
Chemical equilibrium and non-equilibrium inviscid flow computations using a centered scheme
NASA Astrophysics Data System (ADS)
Vos, J. B.; Bergman, C. M.
had to be changed. This has been done using the effective γ approach, where γ is the effective ratio of specific heats. For explicit schemes, this γ needs to be calculated only once per 100 time steps. Non-equilibrium chemistry has been incorporated by solving three partial differential equations for the partial densities of the species N, O and NO using the Runge-Kutta scheme described above, together with two algebraic equations for the species N2 and O2. The time step used in the Runge-Kutta time stepping is in this case the minimum of the chemical time step and the fluid-dynamic time step. Calculations showed that only in the initial phase of the time integration process was the chemical time step the smaller of the two. Figure 1 shows the calculated temperatures for the flow around a sphere. Owing to the shock-fitting procedure, the external bow shock is sharp and oscillation free. The highest temperatures are found when the gas is treated as frozen, since in this case no dissociation takes place. The lowest temperatures along the stagnation line are obtained when it is assumed that the flow is in equilibrium. For the non-equilibrium calculation, the flow is frozen across the shock wave and is in chemical equilibrium at the stagnation point. This explains the strong temperature gradient between shock and body.
Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François
2014-01-01
Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.
NASA Technical Reports Server (NTRS)
2004-01-01
These two graphics are planning tools used by Mars Exploration Rover engineers to plot and scheme the perfect location to place the rock abrasion tool on the rock collection dubbed 'El Capitan' near Opportunity's landing site. 'El Capitan' is located within a larger outcrop nicknamed 'Opportunity Ledge.'
The rover visualization team from NASA Ames Research Center, Moffett Field, Calif., initiated the graphics by putting two panoramic camera images of the 'El Capitan' area into their three-dimensional model. The rock abrasion tool team from Honeybee Robotics then used the visualization tool to help target and orient their instrument on the safest and most scientifically interesting locations. The blue circle represents one of two current targets of interest, chosen because of its size, lack of dust, and most of all its distinct and intriguing geologic features. To see the second target location, see the image titled 'Plotting and Scheming.'
The rock abrasion tool is sensitive to the shape and texture of a rock, and must safely sit within the 'footprint' indicated by the blue circles. The rock area must be large enough to fit the contact sensor and grounding mechanism within the area of the outer blue circle, and the rock must be smooth enough to get an even grind within the abrasion area of the inner blue circle. If the rock abrasion tool were not grounded by its support mechanism or if the surface were uneven, it could 'run away' from its target. The rock abrasion tool is located on the rover's instrument deployment device, or arm.
Over the next few martian days, or sols, the rover team will use these and newer, similar graphics created with more recent, higher-resolution panoramic camera images and super-spectral data from the miniature thermal emission spectrometer. These data will be used to pick the best
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
ERIC Educational Resources Information Center
Harrell, William
1999-01-01
Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)
Anstis, Stuart
2013-01-01
It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.
N-Body Code with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Yahagi, Hideki; Yoshii, Yuzuru
2001-09-01
We have developed a simulation code with the techniques that enhance both spatial and time resolution of the particle-mesh (PM) method, for which the spatial resolution is restricted by the spacing of structured mesh. The adaptive-mesh refinement (AMR) technique subdivides the cells that satisfy the refinement criterion recursively. The hierarchical meshes are maintained by the special data structure and are modified in accordance with the change of particle distribution. In general, as the resolution of the simulation increases, its time step must be shortened and more computational time is required to complete the simulation. Since the AMR enhances the spatial resolution locally, we reduce the time step locally also, instead of shortening it globally. For this purpose, we used a technique of hierarchical time steps (HTS), which changes the time step, from particle to particle, depending on the size of the cell in which particles reside. Some test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many of halo objects have density profiles that are well fitted to the universal profile proposed in 1996 by Navarro, Frenk, & White over the entire range of their radius.
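The hierarchical time steps (HTS) idea can be sketched by assigning each particle a power-of-two subdivision of the global step according to the size of the cell it resides in; the exact refinement criterion used by the authors may differ:

```python
import math

def hierarchical_time_steps(cell_sizes, dt_global):
    """For each particle's cell size, halve the global time step once
    per factor-of-two refinement relative to the coarsest cell.
    Illustrative sketch of power-of-two hierarchical time stepping."""
    coarsest = max(cell_sizes)
    steps = []
    for dx in cell_sizes:
        level = max(0, round(math.log2(coarsest / dx)))
        steps.append(dt_global / 2 ** level)
    return steps
```

Particles on deeper refinement levels are then advanced 2^level times per global step, so all particles resynchronize at global step boundaries.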
Test Information Targeting Strategies for Adaptive Multistage Testing Designs.
ERIC Educational Resources Information Center
Luecht, Richard M.; Burgin, William
Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…
NASA Astrophysics Data System (ADS)
Moczo, P.; Kristek, J.; Galis, M.; Pazak, P.
2009-12-01
Numerical prediction of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (Vp/Vs) as large as 5 and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10 in unconsolidated sediments (e.g. in Ciudad de México). In the process of developing 3D optimally-accurate finite-difference schemes we encountered a serious problem with accuracy in media with a large Vp/Vs ratio. This led us to investigate the fundamental reasons for the inaccuracy. In order to identify the basic inherent aspects of the numerical schemes responsible for their behavior with varying Vp/Vs ratio, we restricted ourselves to the most basic 2nd-order 2D numerical schemes on a uniform grid in a homogeneous medium. Although basic in the specified sense, these schemes comprise the decisive features for the accuracy of a wide class of numerical schemes. We investigated 6 numerical schemes: finite-difference, displacement, conventional grid (FD_D_CG); finite-element, Lobatto integration (FE_L); finite-element, Gauss integration (FE_G); finite-difference, displacement-stress, partly-staggered grid (FD_DS_PSG); finite-difference, displacement-stress, staggered grid (FD_DS_SG); and finite-difference, velocity-stress, staggered grid (FD_VS_SG). We defined and calculated local errors of the schemes in amplitude and polarization. Because different schemes use different time steps, they need different numbers of time levels to calculate the solution for a desired time window. Therefore, we normalized errors for a unit time. The normalization allowed for a direct comparison of errors of different schemes. Extensive numerical calculations for wide ranges of values of the Vp/Vs ratio, the spatial sampling ratio, the stability ratio, and the entire range of directions of propagation with respect to the spatial grid led to interesting and surprising findings. The accuracy of FD_D_CG, FE_L and FE_G strongly depends on the Vp/Vs ratio. The schemes are not
NASA Technical Reports Server (NTRS)
Allen, Dale J.; Douglass, Anne R.; Rood, Richard B.; Guthrie, Paul D.
1991-01-01
The application of van Leer's scheme, a monotonic, upstream-biased differencing scheme, to three-dimensional constituent transport calculations is shown. The major disadvantage of the scheme is shown to be a self-limiting diffusion. A major advantage of the scheme is shown to be its ability to maintain constituent correlations. The scheme is adapted for a spherical coordinate system with a hybrid sigma-pressure coordinate in the vertical. Special consideration is given to cross-polar flow. The vertical wind calculation is shown to be extremely sensitive to the method of calculating the divergence. This sensitivity implies that a vertical wind formulation consistent with the transport scheme is essential for accurate transport calculations. The computational savings of the time-splitting method used to solve this equation are shown. Finally, the capabilities of this scheme are illustrated by an ozone transport and chemistry model simulation.
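A 1-D periodic sketch of a van Leer-limited upwind advection step may help make the monotonic, upstream-biased character concrete; this is a textbook MUSCL-type variant on our part, not the paper's 3-D constituent transport implementation:

```python
# Monotonic (TVD) advection of q with Courant number c = u*dt/dx in (0, 1],
# using the van Leer flux limiter on a periodic grid.

def van_leer_limiter(r):
    """phi(r) = (r + |r|) / (1 + |r|): smooth, TVD, second order where possible."""
    return (r + abs(r)) / (1.0 + abs(r))

def advect_step(q, c):
    n = len(q)
    flux = [0.0] * n                      # flux[i] is the state at interface i+1/2
    for i in range(n):
        dq_up = q[i] - q[(i - 1) % n]     # upwind slope
        dq_dn = q[(i + 1) % n] - q[i]     # downwind slope
        r = dq_up / dq_dn if dq_dn != 0.0 else 0.0
        # first-order upwind value plus a limited Lax-Wendroff correction
        flux[i] = q[i] + 0.5 * van_leer_limiter(r) * (1.0 - c) * dq_dn
    return [q[i] - c * (flux[i] - flux[(i - 1) % n]) for i in range(n)]
```

Because the limiter vanishes at extrema, the scheme falls back to diffusive first-order upwinding there, which illustrates the self-limiting diffusion the abstract identifies as the scheme's main disadvantage.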
NASA Astrophysics Data System (ADS)
Zhao, Jia; Yang, Xiaofeng; Shen, Jie; Wang, Qi
2016-01-01
We develop a linear, first-order, decoupled, energy-stable scheme for a binary hydrodynamic phase field model of mixtures of nematic liquid crystals and viscous fluids that satisfies an energy dissipation law. We show that the semi-discrete scheme in time satisfies an analogous, semi-discrete energy-dissipation law for any time-step and is therefore unconditionally stable. We then discretize the spatial operators in the scheme by a finite-difference method and implement the fully discrete scheme in a simplified version using CUDA on GPUs in 3 dimensions in space and time. Two numerical examples for rupture of nematic liquid crystal filaments immersed in a viscous fluid matrix are given, illustrating the effectiveness of this new scheme in resolving complex interfacial phenomena in free surface flows of nematic liquid crystals.
Estimator reduction and convergence of adaptive BEM.
Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk
2012-06-01
A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in today's scientific computing. In contrast to adaptive finite element methods, however, the convergence of adaptive boundary element schemes is widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence, in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is based only on Dörfler marking and inverse estimates, not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations.
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
Importance biasing scheme implemented in the PRIZMA code
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can follow the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, problems of radiation shielding, detection, etc.). The scheme allows the trajectory-building algorithm to be adapted to the peculiarities of the problem.
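The payoff of biasing trajectories toward rare, important events can be illustrated with a toy slab-transmission estimate; the forward-only flight model, survival biasing, and roulette threshold below are our illustrative assumptions, not PRIZMA's actual algorithm:

```python
import random

THICKNESS = 5.0   # slab thickness in mean free paths (illustrative)
P_SURVIVE = 0.5   # survival probability per collision (illustrative)

def transmit_analog(n, seed=1):
    """Analog Monte Carlo: absorption is sampled directly, so deep
    penetration is a rare event and scores are all-or-nothing."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += rng.expovariate(1.0)      # forward-only free flight (toy model)
            if x >= THICKNESS:
                hits += 1
                break
            if rng.random() >= P_SURVIVE:  # absorbed at the collision
                break
    return hits / n

def transmit_biased(n, seed=1, roulette_below=0.05):
    """Survival biasing with Russian roulette: absorption is carried as a
    statistical weight, and low-weight histories are terminated fairly,
    keeping the estimator unbiased while reducing its variance."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += rng.expovariate(1.0)
            if x >= THICKNESS:
                score += w
                break
            w *= P_SURVIVE                 # absorb in expectation, not in outcome
            if w < roulette_below:
                if rng.random() < 0.5:
                    w *= 2.0               # survivor's weight is doubled
                else:
                    break                  # history rouletted away
    return score / n
```

Both estimators target the same transmission probability; the biased one reaches a given statistical precision with far fewer histories, which is the point of importance biasing in deep-penetration shielding problems.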
Classification schemes for arteriovenous malformations.
Davies, Jason M; Kim, Helen; Young, William L; Lawton, Michael T
2012-01-01
The wide variety of arteriovenous malformation (AVM) anatomy, size, location, and clinical presentation makes patient selection for surgery a difficult process. Neurosurgeons have identified key factors that determine the risks of surgery and then devised classification schemes that integrate these factors, predict surgical results, and help select patients for surgery. These classification schemes have value because they transform complex decisions into simpler algorithms. In this review, the important grading schemes that have contributed to management of patients with brain AVMs are described, and our current approach to patient selection is outlined.
Adaptive Force Control For Compliant Motion Of A Robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1995-01-01
Two adaptive control schemes offer robust solutions to the problem of stably controlling the forces of contact between a robotic manipulator and objects in its environment. They are called "adaptive admittance control" and "adaptive compliance control." Both schemes involve the use of force and torque sensors that indicate contact forces. The schemes performed well when tested in computational simulations in which they were used to control a seven-degree-of-freedom robot arm executing contact tasks. The choice between admittance and compliance control is dictated by the requirements of the application at hand.
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Balsara, Dinshaw S.; Dumbser, Michael
2014-06-01
In this paper we use the genuinely multidimensional HLL Riemann solvers recently developed by Balsara et al. in [13] to construct a new class of computationally efficient high order Lagrangian ADER-WENO one-step ALE finite volume schemes on unstructured triangular meshes. A nonlinear WENO reconstruction operator allows the algorithm to achieve high order of accuracy in space, while high order of accuracy in time is obtained by the use of an ADER time-stepping technique based on a local space-time Galerkin predictor. The multidimensional HLL and HLLC Riemann solvers operate at each vertex of the grid, considering the entire Voronoi neighborhood of each node and allow for larger time steps than conventional one-dimensional Riemann solvers. The results produced by the multidimensional Riemann solver are then used twice in our one-step ALE algorithm: first, as a node solver that assigns a unique velocity vector to each vertex, in order to preserve the continuity of the computational mesh; second, as a building block for genuinely multidimensional numerical flux evaluation that allows the scheme to run with larger time steps compared to conventional finite volume schemes that use classical one-dimensional Riemann solvers in normal direction. The space-time flux integral computation is carried out at the boundaries of each triangular space-time control volume using the Simpson quadrature rule in space and Gauss-Legendre quadrature in time. A rezoning step may be necessary in order to overcome element overlapping or crossing-over. Since our one-step ALE finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, the remapping stage is not needed, making our algorithm a so-called direct ALE method.
NASA Technical Reports Server (NTRS)
Abarbanel, S.; Gottlieb, D.
1976-01-01
The paper considers the leap-frog finite-difference method (Kreiss and Oliger, 1973) for systems of partial differential equations of the form du/dt = dF/dx + dG/dy + dH/dz, where d denotes partial derivative, u is a q-component vector and a function of x, y, z, and t, and the vectors F, G, and H are functions of u only. The original leap-frog algorithm is shown to admit a modification that improves on the stability conditions for two and three dimensions by factors of 2 and 2.8, respectively, thereby permitting larger time steps. The scheme for three dimensions is considered optimal in the sense that it combines simple averaging and large time steps.
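For reference, the basic (unmodified) 1-D leap-frog step for u_t = a u_x on a periodic grid looks as follows; the forward-Euler bootstrap of the second time level and the sine-wave test are our illustrative choices:

```python
import math

def leapfrog_advection(u0, c, steps):
    """Leap-frog for u_t = a*u_x with Courant number c = a*dt/dx, |c| <= 1:
    u^{n+1}_j = u^{n-1}_j + c * (u^n_{j+1} - u^n_{j-1}) on a periodic grid."""
    n = len(u0)
    u_prev = list(u0)
    # bootstrap: leap-frog needs two time levels, so take one Euler step first
    u_curr = [u0[j] + 0.5 * c * (u0[(j + 1) % n] - u0[(j - 1) % n])
              for j in range(n)]
    for _ in range(steps - 1):
        u_next = [u_prev[j] + c * (u_curr[(j + 1) % n] - u_curr[(j - 1) % n])
                  for j in range(n)]
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

The modification studied in the paper relaxes the multidimensional stability bound through simple averaging; this sketch keeps the classical one-dimensional restriction |c| <= 1.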
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
1999-01-01
This paper presents a modification of the spring analogy scheme which uses axial linear spring stiffness with selective spring stiffening/relaxation. An alternate approach to solving the geometric conservation law is taken which eliminates the need to store metric Jacobians at previous time steps. Efficiency and verification are illustrated with several unsteady 2-D airfoil Euler computations. The method is next applied to the computation of the turbulent flow about a 2-D airfoil and wing with two- and three-dimensional moving spoiler surfaces, and the results are compared with Benchmark Active Controls Technology (BACT) experimental data. The aeroelastic response at low dynamic pressure of an airfoil to a single large-scale oscillation of a spoiler surface is computed. This study confirms that it is possible to achieve accurate solutions with a very large time step for aeroelastic problems using the fluid solver and aeroelastic integrator discussed in this paper.
Relaxation schemes for Chebyshev spectral multigrid methods
NASA Technical Reports Server (NTRS)
Kang, Yimin; Fulton, Scott R.
1993-01-01
Two relaxation schemes for Chebyshev spectral multigrid methods are presented for elliptic equations with Dirichlet boundary conditions. The first scheme is a pointwise-preconditioned Richardson relaxation scheme and the second is a line relaxation scheme. The line relaxation scheme provides an efficient and relatively simple approach for solving two-dimensional spectral equations. Numerical examples and comparisons with other methods are given.
A higher-order implicit IDO scheme and its CFD application to local mesh refinement method
NASA Astrophysics Data System (ADS)
Imai, Yohsuke; Aoki, Takayuki
2006-08-01
The Interpolated Differential Operator (IDO) scheme has been developed for the numerical solution of the fluid motion equations, and produces highly accurate results by introducing the spatial derivative of the physical value as an additional dependent variable. For incompressible flows, semi-implicit time integration is strongly affected by the Courant and diffusion number limitations. A high-order fully-implicit IDO scheme is presented, in which a two-stage implicit Runge-Kutta time integration maintains better than third-order accuracy. The application of the method to the direct numerical simulation of turbulence demonstrates that the proposed scheme retains a resolution comparable to that of spectral methods even for relatively large Courant numbers. The scheme is further applied to the Local Mesh Refinement (LMR) method, where the size of the time step is often restricted by the dimension of the smallest meshes. In the computation of the Karman vortex street problem, the implicit IDO scheme with LMR is shown to allow a conspicuous saving of computational resources.
Central Upwind Scheme for a Compressible Two-Phase Flow Model
Ahmed, Munshoor; Saleem, M. Rehan; Zia, Saqib; Qamar, Shamsul
2015-01-01
In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model a high-resolution central upwind scheme is implemented. This is a non-oscillatory, upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme. PMID:26039242
A semi-implicit gas-kinetic scheme for smooth flows
NASA Astrophysics Data System (ADS)
Wang, Peng; Guo, Zhaoli
2016-08-01
In this paper, a semi-implicit gas-kinetic scheme (SIGKS) is derived for smooth flows based on the Bhatnagar-Gross-Krook (BGK) equation. As a finite-volume scheme, the evolution of the average flow variables in a control volume is under the Eulerian framework, whereas the construction of the numerical flux across the cell interface comes from the Lagrangian perspective. The adoption of the Lagrangian aspect makes the collision and transport mechanisms intrinsically coupled together in the flux evaluation. As a result, the time step size is independent of the particle collision time and is solely determined by the Courant-Friedrichs-Lewy (CFL) condition. An analysis of the reconstructed distribution function at the cell interface shows that the SIGKS can be viewed as a modified Lax-Wendroff type scheme with an additional term. Furthermore, the additional term, coming from the implicitness in the reconstruction, is expected to enhance the numerical stability of the scheme. A number of numerical tests of smooth flows with low and moderate Mach numbers are performed to benchmark the SIGKS. The results show that the method has second-order spatial accuracy, and can give accurate numerical solutions in comparison with benchmark results. It is also demonstrated that the numerical stability of the proposed scheme is better than that of the original GKS for smooth flows.
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations occurred in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and for crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results
The fundamentals of adaptive grid movement
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.
1990-01-01
Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is discussed.
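Equidistribution in one dimension, mentioned above, places grid points so that each interval carries the same integral of a weight function; a minimal sketch follows (the trapezoidal cumulative weight and linear inversion are our simplifying choices, and the weight is assumed strictly positive):

```python
def equidistribute(x, w, n_new):
    """Return n_new grid points on [x[0], x[-1]] such that each interval
    holds an equal share of the integral of the weight w (given on grid x)."""
    # cumulative integral of w by the trapezoidal rule
    W = [0.0]
    for i in range(1, len(x)):
        W.append(W[-1] + 0.5 * (w[i] + w[i - 1]) * (x[i] - x[i - 1]))
    total = W[-1]
    # invert W(x) at equally spaced target values by linear interpolation
    new_x, j = [], 0
    for k in range(n_new):
        t = total * k / (n_new - 1)
        while j < len(W) - 2 and W[j + 1] < t:
            j += 1
        frac = (t - W[j]) / (W[j + 1] - W[j])
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    return new_x
```

With a weight built from solution gradients, the same inversion concentrates points where the solution varies rapidly, which is the adaptive attraction the abstract describes.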
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Pourtakdoust, Seid H.
2014-12-01
A novel algorithm is presented in this study for the estimation of spacecraft attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to incorporating the latest measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the undertaken satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
Adaptive clinical trial designs in oncology
Zang, Yong; Lee, J. Jack
2015-01-01
Adaptive designs have become popular in clinical trial and drug development. Unlike traditional trial designs, adaptive designs use accumulating data to modify the ongoing trial without undermining the integrity and validity of the trial. As a result, adaptive designs provide a flexible and effective way to conduct clinical trials. The designs have potential advantages of improving the study power, reducing sample size and total cost, treating more patients with more effective treatments, identifying efficacious drugs for specific subgroups of patients based on their biomarker profiles, and shortening the time for drug development. In this article, we review adaptive designs commonly used in clinical trials and investigate several aspects of the designs, including the dose-finding scheme, interim analysis, adaptive randomization, biomarker-guided randomization, and seamless designs. For illustration, we provide examples of real trials conducted with adaptive designs. We also discuss practical issues from the perspective of using adaptive designs in oncology trials. PMID:25811018
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
Convergence acceleration of implicit schemes in the presence of high aspect ratio grid cells
NASA Technical Reports Server (NTRS)
Buelow, B. E. O.; Venkateswaran, S.; Merkle, C. L.
1993-01-01
The performance of Navier-Stokes codes is influenced by several phenomena. For example, the robustness of the code may be compromised by the lack of grid resolution, by a need for more precise initial conditions, or because all or part of the flowfield lies outside the flow regime in which the algorithm converges efficiently. A primary example of the latter effect is the presence of extended low Mach number and/or low Reynolds number regions, which cause convergence deterioration of time marching algorithms. Recent research into this problem by several workers, including the present authors, has largely negated this difficulty through the introduction of time-derivative preconditioning. In the present paper, we employ the preconditioned algorithm to address convergence difficulties arising from sensitivity to grid stretching and high aspect ratio grid cells. Strong grid stretching is particularly characteristic of turbulent flow calculations, where the grid must be refined very tightly in the dimension normal to the wall, without a similar refinement in the tangential direction. High aspect ratio grid cells also arise in problems that involve high aspect ratio domains such as combustor coolant channels. In both situations, the high aspect ratio cells can lead to extreme deterioration in convergence. It is the purpose of the present paper to address the reasons for this adverse response to grid stretching and to suggest methods for enhancing convergence under such circumstances. Numerical algorithms typically possess a maximum allowable or optimum value for the time step size, expressed in non-dimensional terms as a CFL number or von Neumann number (VNN). In the presence of high aspect ratio cells, the smallest dimension of the grid cell controls the time step size, causing it to be extremely small, which in turn results in the deterioration of convergence behavior. For explicit schemes, this time step limitation cannot be exceeded without violating stability restrictions
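The time-step limitation described above can be made concrete with the convective CFL bound for a single 2-D cell; the diffusive (von Neumann) limit and preconditioning are omitted, and the cell sizes below are illustrative:

```python
def explicit_dt(dx, dy, u, v, cfl=1.0):
    """Convective CFL limit for one cell: dt <= cfl / (|u|/dx + |v|/dy),
    so the smallest cell dimension dominates the allowable time step."""
    return cfl / (abs(u) / dx + abs(v) / dy)

# a unit-area square cell vs. a unit-area cell with aspect ratio 1000:1
dt_square = explicit_dt(1.0, 1.0, u=1.0, v=1.0)
dt_stretched = explicit_dt(1000.0 ** 0.5, 1000.0 ** -0.5, u=1.0, v=1.0)
```

Here dt_stretched comes out roughly 16 times smaller than dt_square even though both cells have the same area, which illustrates the convergence penalty of high-aspect-ratio cells that the paper targets.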
Implementation of a Semi-Lagrangian scheme for water vapour and tracer advection in RegCM4
NASA Astrophysics Data System (ADS)
Tefera Diro, Gulilat; Tompkins, Adrian; Giorgi, Filippo; Bonaventura, Luca
2013-04-01
A semi-Lagrangian approach is introduced in the latest version of the ICTP regional climate model (RegCM4) for water vapor and tracer advection. A quasi-cubic interpolation and McGregor's third-order accurate trajectory calculation are used in the advection scheme. The modified scheme is evaluated on idealized as well as realistic case studies, and its results are compared against those of the Eulerian scheme originally employed in RegCM4. In the idealized test cases the semi-Lagrangian scheme appears to be superior to the Eulerian scheme in terms of dissipative and dispersive errors, especially when large gradients are present in the advected quantity. Two realistic cases of meso-scale phenomena over the European domain were also tested in a short-range mode for specific humidity transport. In both cases, the semi-Lagrangian scheme better captured the detailed structure and improved the overall pattern of the vertically integrated humidity field. In the present preliminary implementation, the scheme is more expensive than the Eulerian one, because the same time step is used for tracer advection as for the explicit time discretization employed by the dynamical core. However, greater computational gains are expected as the number of tracers increases, for instance when the gas phase chemistry is switched on.
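The departure-point mechanics of a semi-Lagrangian step can be sketched in one dimension; linear interpolation stands in here for the quasi-cubic interpolation and McGregor trajectory calculation used in RegCM4, and the uniform wind is an assumption:

```python
def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1-D grid: trace each
    arrival point back along the (uniform) wind u and interpolate q at the
    departure point."""
    n = len(q)
    out = []
    for j in range(n):
        xd = (j * dx - u * dt) % (n * dx)     # departure point
        k = int(xd // dx)                     # cell containing the departure point
        frac = (xd - k * dx) / dx
        out.append((1.0 - frac) * q[k % n] + frac * q[(k + 1) % n])
    return out
```

Because only the departure point is needed, the step remains stable for Courant numbers above 1, which is the key advantage over the explicit Eulerian scheme.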
NASA Astrophysics Data System (ADS)
Pan, Liang; Xu, Kun
2016-08-01
In this paper, for the first time a third-order compact gas-kinetic scheme is proposed on unstructured meshes for compressible viscous flow computations. The possibility of designing such a third-order compact scheme stems from the high-order gas evolution model, where a time-dependent gas distribution function at the cell interface not only provides the fluxes across the interface, but also presents a time-accurate solution for the flow variables there. As a result, both cell-averaged and cell-interface flow variables can be used for the initial data reconstruction at the beginning of the next time step. A weighted least-squares procedure has been used for the initial reconstruction. Therefore, a compact third-order gas-kinetic scheme involving only neighboring cells can be developed on unstructured meshes. In comparison with other conventional high-order schemes, the current method avoids Gaussian point integration for the numerical fluxes along a cell interface and the multi-stage Runge-Kutta method for temporal accuracy. The third-order compact scheme is numerically stable under the CFL condition CFL ≈ 0.5. Due to its multidimensional gas-kinetic formulation and the coupling of inviscid and viscous terms, even on unstructured meshes, the boundary layer solution and vortex structure can be accurately captured by the current scheme. At the same time, the compact scheme can capture strong shocks as well.
Uniformly high order accurate essentially non-oscillatory schemes 3
NASA Technical Reports Server (NTRS)
Harten, A.; Engquist, B.; Osher, S.; Chakravarthy, S. R.
1986-01-01
In this paper (the third in a series) the construction and analysis of essentially non-oscillatory shock capturing methods for the approximation of hyperbolic conservation laws are presented. Also presented is a hierarchy of high order accurate schemes which generalizes Godunov's scheme and its second order accurate MUSCL extension to arbitrary order of accuracy. The design involves an essentially non-oscillatory piecewise polynomial reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is derived from a new interpolation technique that, when applied to piecewise smooth data, gives high-order accuracy whenever the function is smooth but avoids a Gibbs phenomenon at discontinuities. Unlike standard finite difference methods this procedure uses an adaptive stencil of grid points, and consequently the resulting schemes are highly nonlinear.
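The adaptive-stencil selection at the heart of ENO can be sketched on a unit-spaced grid: grow the stencil one point at a time toward the side with the smaller undivided difference, so that it avoids crossing a discontinuity. The helper names are ours, and no boundary guards are included in this sketch:

```python
def undivided_difference(q, j, k):
    """k-th order undivided difference of q over points j .. j+k (unit spacing)."""
    vals = q[j:j + k + 1]
    for _ in range(k):
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return vals[0]

def eno_stencil(q, i, order=3):
    """Return the left index of the smoothest `order`-point stencil around cell i."""
    left = i
    for k in range(1, order):
        grow_left = undivided_difference(q, left - 1, k)   # candidate: extend left
        grow_right = undivided_difference(q, left, k)      # candidate: extend right
        if abs(grow_left) < abs(grow_right):
            left -= 1                                      # left extension is smoother
    return left
```

For data that are linear to the left of a cell but jump just to its right, the stencil is pushed left, away from the jump; this data dependence of the stencil is exactly why the resulting schemes are highly nonlinear.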
Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Khokhlov, A. M.
1997-01-01
A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on a tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high frequency noise during mesh refinement is described. A FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh which were otherwise required ahead of moving shocks. Test examples are presented, and the FTT performance is evaluated. A three-dimensional simulation of the interaction of a shock wave with a spherical bubble is carried out, showing the development of azimuthal perturbations on the bubble surface.
High resolution schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Harten, A.
1983-01-01
A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The so-derived second order accurate schemes achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme. Numerical experiments are presented to demonstrate the performance of these new schemes.
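A minmod-limited second-order upwind step for linear advection gives the flavor of such high resolution TVD constructions: a first-order upwind flux plus a slope correction that the limiter switches off near extrema. This is a textbook sketch in the spirit of the modified-flux idea, not the paper's exact scheme.

```python
def minmod(a, b):
    # Zero at sign changes (extrema); otherwise the smaller slope.
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_step(u, c):
    """One step for u_t + a u_x = 0 (a > 0), CFL number c, periodic BCs.

    First-order upwind flux plus a minmod-limited second-order
    correction: second order in smooth regions, non-oscillatory at
    discontinuities."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    f = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]  # interface fluxes
    return [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]
```

Advecting a step function, the total variation never increases and no new extrema appear, which is the defining TVD property.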
NASA Astrophysics Data System (ADS)
Peters, Andre; Nehls, Thomas; Wessolek, Gerd
2016-06-01
Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
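The threshold-plus-interpolation idea can be sketched on a toy mass series (function names are hypothetical; the real AWAT filter adapts both window and threshold, which is omitted here): insignificant mass changes are skipped, and the series is reconstructed between the accepted points by either step or linear interpolation.

```python
def significant_points(mass, threshold):
    """Indices where the cumulative mass change since the last accepted
    point exceeds the threshold (fixed-threshold sketch)."""
    pts = [0]
    for i in range(1, len(mass)):
        if abs(mass[i] - mass[pts[-1]]) >= threshold:
            pts.append(i)
    return pts

def interpolate(mass, pts, kind="linear"):
    """Reconstruct the series between significant points by step or
    linear interpolation (the spline variant is omitted)."""
    out = []
    for a, b in zip(pts, pts[1:]):
        for i in range(a, b):
            if kind == "step":
                out.append(mass[a])
            else:
                t = (i - a) / (b - a)
                out.append(mass[a] + t * (mass[b] - mass[a]))
    out.append(mass[pts[-1]])
    return out
```

The step variant produces flat segments (flux only at the significant points), while the linear variant distributes the same mass change over the interval, which is what makes high-resolution flux rates reasonable.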
Energy partitioning schemes: a dilemma.
Mayer, I
2007-01-01
Two closely related energy partitioning schemes, in which the total energy is presented as a sum of atomic and diatomic contributions by using the "atomic decomposition of identity", are compared on the example of N,N-dimethylformamide, a simple but chemically rich molecule. Both schemes account for different intramolecular interactions, for instance they identify the weak C-H...O intramolecular interactions, but give completely different numbers. (The energy decomposition scheme based on the virial theorem is also considered.) The comparison of the two schemes resulted in a dilemma which is especially striking when these schemes are applied for molecules distorted from their equilibrium structures: one either gets numbers which are "on the chemical scale" and have quite appealing values at the equilibrium molecular geometries, but exhibiting a counter-intuitive distance dependence (the two-center energy components increase in absolute value with the increase of the interatomic distances)--or numbers with too large absolute values but "correct" distance behaviour. The problem is connected with the quick decay of the diatomic kinetic energy components.
PMID:17328441
NASA Astrophysics Data System (ADS)
Nakano, Hidehiro; Utani, Akihide; Miyauchi, Arata; Yamamoto, Hisao
2011-04-01
This paper studies a chaos-based data-gathering scheme for multiple-sink wireless sensor networks. In the proposed scheme, each wireless sensor node has a simple chaotic oscillator. The oscillators generate spike signals with chaotic interspike intervals and are impulsively coupled by the signals via wireless communication. Each wireless sensor node transmits and receives sensor information only at the timing of the couplings. The proposed scheme can exhibit various chaos synchronous phenomena and their breakdown phenomena, and can effectively gather sensor information with a significantly smaller number of transmissions and receptions than the conventional scheme. The proposed scheme can also adapt flexibly to various wireless sensor networks, not only those with a single sink node but also those with multiple sink nodes. This paper introduces our previous works. Through simulation experiments, we show the effectiveness of the proposed scheme and discuss its development potential.
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
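Backward adaptation, the key to the low side information, can be shown on a 1-D toy stream: the coder switches between two candidate predictors based on which performed better on already coded data, so the decoder can mirror the choice exactly without any signaling. The two predictors here (previous sample, sample two back) are simplistic stand-ins for the paper's spatial/temporal/spectral predictors.

```python
def predict_stream(seq):
    """Encode: per sample, pick the predictor with the smaller total
    absolute error so far and emit the prediction residual."""
    err = [0, 0]
    residuals = []
    for i, x in enumerate(seq):
        p = [seq[i - 1] if i >= 1 else 0, seq[i - 2] if i >= 2 else 0]
        best = 0 if err[0] <= err[1] else 1
        residuals.append(x - p[best])
        for k in (0, 1):
            err[k] += abs(x - p[k])  # track both predictors' performance
    return residuals

def reconstruct(residuals):
    """Decode: replay the same adaptation rule on decoded samples."""
    err = [0, 0]
    seq = []
    for i, r in enumerate(residuals):
        p = [seq[i - 1] if i >= 1 else 0, seq[i - 2] if i >= 2 else 0]
        best = 0 if err[0] <= err[1] else 1
        seq.append(r + p[best])
        for k in (0, 1):
            err[k] += abs(seq[-1] - p[k])
    return seq
```

Because encoder and decoder update identical state from identical data, the round trip is exactly lossless.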
NASA Astrophysics Data System (ADS)
Yu, Rixin; Yu, Jiangfei; Bai, Xue-Song
2012-06-01
We present an improved numerical scheme for simulations of low Mach number turbulent reacting flows with detailed chemistry and transport. The method is based on a semi-implicit operator-splitting scheme with a stiff solver for integration of the chemical kinetic rates, developed by Knio et al. [O.M. Knio, H.N. Najm, P.S. Wyckoff, A semi-implicit numerical scheme for reacting flow II. Stiff, operator-split formulation, Journal of Computational Physics 154 (2) (1999) 428-467]. Using the material derivative form of the continuity equation, we enhance the scheme to allow for large density ratios in the flow field. The scheme is developed for direct numerical simulation of turbulent reacting flow by employing high-order discretization for the spatial terms. The accuracy of the scheme in space and time is verified by examining the grid/time-step dependency on one-dimensional benchmark cases: a freely propagating premixed flame in an open environment and in an enclosure related to spark-ignition engines. The scheme is then examined in simulations of a two-dimensional laminar flame/vortex-pair interaction. Furthermore, we apply the scheme to direct numerical simulation of a homogeneous charge compression ignition (HCCI) process in an enclosure studied previously in the literature. Satisfactory agreement is found in terms of the overall ignition behavior, local reaction zone structures and statistical quantities. Finally, the scheme is used to study the development of intrinsic flame instabilities in a lean H2/air premixed flame, where it is shown that the spatial and temporal accuracies of the numerical schemes can have a great impact on the prediction of the sensitive nonlinear evolution process of flame instability.
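The semi-implicit operator-splitting idea can be sketched on a scalar model problem u' = -k u + advection, with k large (stiff): the stiff term is advanced implicitly in two half-steps around an explicit step of the nonstiff term (Strang splitting). This is a sketch of the splitting structure only; the actual scheme couples a stiff ODE solver for detailed chemistry to a high-order spatial discretization.

```python
def strang_step(u, dt, stiff_rate, advect):
    """One Strang splitting step for u' = -stiff_rate*u + advect(u).

    The stiff (chemistry-like) term is integrated with backward Euler
    half-steps, which are unconditionally stable; the nonstiff
    (transport-like) term is advanced explicitly in between."""
    u = u / (1 + 0.5 * dt * stiff_rate)  # implicit half-step, stiff term
    u = u + dt * advect(u)               # explicit full step, transport
    u = u / (1 + 0.5 * dt * stiff_rate)  # implicit half-step, stiff term
    return u
```

Even with stiff_rate * dt = 100, far beyond any explicit stability limit, the step remains bounded, while for small dt it tracks the exact decay closely.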
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations
NASA Astrophysics Data System (ADS)
Perego, A.; Cabezón, R. M.; Käppeli, R.
2016-04-01
We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have demonstrated the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
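A common closure in leakage schemes interpolates between the production-limited (optically thin) and diffusion-limited (optically thick) regimes per energy bin; a harmonic-mean combination is one standard choice. The formula below is an assumption for illustration, not necessarily the ASL interpolation.

```python
def effective_loss_rate(prod, diff):
    """Per-energy-bin effective neutrino loss rate as the harmonic mean
    of the local production and diffusion rates: whichever process is
    slower dominates, reproducing both optical-depth limits."""
    return [p * d / (p + d) if p + d > 0 else 0.0
            for p, d in zip(prod, diff)]
```

In the optically thin limit (diffusion much faster than production) the effective rate reduces to the production rate, and symmetrically in the thick limit.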
Adaptive Sampling in Hierarchical Simulation
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
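The core economy of adaptive sampling, reusing stored fine-scale evaluations instead of recomputing them, can be sketched with a nearest-neighbor lookup standing in for the moving kriging interpolant (all names here are hypothetical; the real method also estimates interpolation error and uses a metric-tree database).

```python
def adaptive_eval(x, db, fine_model, tol):
    """Return a stored response when a cached query point lies within
    tol of x (a crude stand-in for kriging interpolation); otherwise
    run the expensive fine-scale model and cache the result."""
    best = min(db, key=lambda p: abs(p[0] - x), default=None)
    if best is not None and abs(best[0] - x) <= tol:
        return best[1]          # reuse: no fine-scale call
    y = fine_model(x)           # expensive evaluation
    db.append((x, y))
    return y
```

Queries close to previously evaluated points never touch the fine-scale model, which is where the reduction in the number of evaluations comes from.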
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
NASA Astrophysics Data System (ADS)
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty limits severely the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
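The implicit pairwise sweep can be sketched on a model problem where particle velocities feel a stiff pairwise drag -k (v_i - v_j): a backward Euler step is solved by Gauss-Seidel sweeps over the pair list until convergence. This is a sketch of the implicit half of the splitting only; the explicit SPH hydrodynamic forces are assumed to be handled elsewhere.

```python
def implicit_pair_drag(v_old, pairs, k, dt, tol=1e-12, max_sweeps=500):
    """Backward Euler step for dv_i/dt = -k * sum_j (v_i - v_j) over the
    given pairs, solved by sweeping over particles (Gauss-Seidel) until
    the update stalls. Stable for arbitrarily stiff k*dt."""
    nbrs = {i: [] for i in range(len(v_old))}
    for i, j in pairs:
        nbrs[i].append(j)
        nbrs[j].append(i)
    v = list(v_old)  # initial guess: old velocities
    for _ in range(max_sweeps):
        delta = 0.0
        for i in range(len(v)):
            new = (v_old[i] + k * dt * sum(v[j] for j in nbrs[i])) \
                  / (1.0 + k * dt * len(nbrs[i]))
            delta = max(delta, abs(new - v[i]))
            v[i] = new
        if delta < tol:
            break
    return v
```

For a single pair the implicit solution is known in closed form (relative velocity divided by 1 + 2 k dt, momentum conserved), which makes a convenient check.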
Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations.
Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul
2015-01-01
The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems.
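The role of local propagation speeds in setting the numerical diffusion can be shown with a local Lax-Friedrichs (Rusanov) flux, a first-order relative of central-upwind fluxes, applied to the inviscid Burgers equation as a stand-in for the SRHD system (this is an illustrative sketch, not the authors' solver).

```python
def rusanov_flux(ul, ur, f, speed):
    """Central flux plus diffusion scaled by the local maximal wave
    speed; larger local speeds add just enough dissipation at shocks."""
    a = max(speed(ul), speed(ur))
    return 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)

def burgers_step(u, dt_dx):
    """One conservative finite-volume step for u_t + (u^2/2)_x = 0
    with periodic boundaries; dt_dx is dt/dx (CFL-limited)."""
    f = lambda q: 0.5 * q * q
    speed = lambda q: abs(q)       # |f'(u)| for Burgers
    n = len(u)
    F = [rusanov_flux(u[i], u[(i + 1) % n], f, speed) for i in range(n)]
    return [u[i] - dt_dx * (F[i] - F[i - 1]) for i in range(n)]
```

Because the update is in flux form, the cell average is conserved exactly (up to roundoff), and under the CFL limit the scheme is monotone, so a shock profile stays within its initial bounds.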
PMID:26070067
NASA Astrophysics Data System (ADS)
Guo, Ruihan; Xu, Yan
2015-10-01
In this paper, we present an efficient and unconditionally energy stable fully-discrete local discontinuous Galerkin (LDG) method for approximating the Cahn-Hilliard-Brinkman (CHB) system, which is comprised of a Cahn-Hilliard type equation and a generalized Brinkman equation modeling fluid flow. The semi-discrete energy stability of the LDG method is proved first. Due to the strict time step restriction (Δt = O(Δx^4)) of explicit time discretization methods for stability, we introduce a semi-implicit scheme which consists of the implicit Euler method combined with a convex splitting of the discrete Cahn-Hilliard energy strategy for the temporal discretization. The unconditional energy stability of this fully-discrete convex splitting scheme is also proved. The fully-discrete equations at the implicit time level are nonlinear, and to enhance the efficiency of the proposed approach, the nonlinear Full Approximation Scheme (FAS) multigrid method has been employed to solve this system of algebraic equations. We also demonstrate the nearly optimal complexity numerically. Numerical experiments based on the overall solution method of combining the proposed LDG method, the convex splitting scheme and the nonlinear multigrid solver are given to validate the theoretical results and to show the effectiveness of the proposed approach for the CHB system.
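The convex splitting strategy can be sketched on the scalar gradient flow u' = -(u^3 - u) of the double-well energy E(u) = u^4/4 - u^2/2: the convex part of the energy (u^4/4) is treated implicitly and the concave part (-u^2/2) explicitly, which is unconditionally energy stable. This scalar toy stands in for the Cahn-Hilliard nonlinearity; the spatial LDG discretization is omitted.

```python
def convex_split_step(u, dt):
    """One convex-splitting step for u' = -(u^3 - u): solve
    x + dt*x^3 = u + dt*u by Newton iteration (implicit convex part,
    explicit concave part). Stable for any dt."""
    rhs = u + dt * u            # explicit concave contribution
    x = u                       # Newton iteration for x + dt*x^3 = rhs
    for _ in range(50):
        fval = x + dt * x ** 3 - rhs
        x -= fval / (1.0 + 3.0 * dt * x * x)
    return x
```

Even with a large time step the discrete energy decreases, and the equilibria u = ±1 of the double-well are preserved exactly.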
Adaptive encoding in the visual pathway.
Lesica, Nicholas A; Boloori, Alireza S; Stanley, Garrett B
2003-02-01
In a natural setting, the mean luminance and contrast of the light within a visual neuron's receptive field are constantly changing as the eyes saccade across complex scenes. Adaptive mechanisms modulate filtering properties of the early visual pathway in response to these variations, allowing the system to maintain differential sensitivity to nonstationary stimuli. An adaptive variant of the reverse correlation technique is used to characterize these changes during single trials. Properties of the adaptive reverse correlation algorithm were investigated via simulation. Analysis of data collected from the mammalian visual system demonstrates the ability to continuously track adaptive changes in the encoding scheme. The adaptive estimation approach provides a framework for characterizing the role of adaptation in natural scene viewing. PMID:12613554
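The notion of tracking a receptive field within a single trial can be illustrated with an LMS-style running update, a simple adaptive relative of reverse correlation (this is a sketch under stated assumptions: a linear response model, a hand-built deterministic stimulus, and a gradient update rather than the paper's exact recursive estimator).

```python
def adaptive_revcor(stim, resp, taps=3, mu=0.5):
    """Running (LMS) estimate of a linear receptive field: after each
    sample, nudge the filter weights down the instantaneous error
    gradient, so slow changes in the true filter can be tracked."""
    w = [0.0] * taps
    for t in range(taps, len(stim)):
        x = stim[t - taps:t]
        e = resp[t] - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w
```

Driven by a sufficiently rich stimulus and an exactly linear response, the running estimate converges to the true filter.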
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
Extension of Low Dissipative High Order Hydrodynamics Schemes for MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern; Mansour, Nagi (Technical Monitor)
2002-01-01
The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD (magnetohydrodynamic) equations. These schemes employed multiresolution wavelets as adaptive numerical dissipation controls to limit the amount and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative form of the MHD equations in curvilinear grids. The present MHD scheme has three features not found in existing schemes in the open literature. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion magnetized flows. Available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations, which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition to solve the conservative form of the MHD equations. This is due, in part, to the fact that the divergence of the magnetic field condition is a different type of constraint from its incompressible Navier-Stokes cousin. Third, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced.
Invisibly Sanitizable Digital Signature Scheme
NASA Astrophysics Data System (ADS)
Miyazaki, Kunihiko; Hanaoka, Goichiro; Imai, Hideki
A digital signature does not allow any alteration of the document to which it is attached. Appropriate alteration of some signed documents, however, should be allowed because there are security requirements other than the integrity of the document. In the disclosure of official information, for example, sensitive information such as personal information or national secrets is masked when an official document is sanitized so that its nonsensitive information can be disclosed when it is requested by a citizen. If this disclosure is done digitally by using the current digital signature schemes, the citizen cannot verify the disclosed information because it has been altered to prevent the leakage of sensitive information. The confidentiality of official information is thus incompatible with the integrity of that information, and this is called the digital document sanitizing problem. Conventional solutions such as content extraction signatures and digitally signed document sanitizing schemes with disclosure condition control can either let the sanitizer assign disclosure conditions or hide the number of sanitized portions. The digitally signed document sanitizing scheme we propose here is based on the aggregate signature derived from bilinear maps and can do both. Moreover, the proposed scheme can sanitize a signed document invisibly, that is, no one can distinguish whether the signed document has been sanitized or not.
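The basic sanitizing mechanism can be sketched with hash commitments (this is a simplified content-extraction-style construction, NOT the authors' bilinear-map aggregate scheme, and unlike theirs it does not hide the number of sanitized portions): each block is committed with a random nonce, the digest over all commitments stands in for a real signature, and sanitizing withholds a block and its nonce while verification still succeeds on the rest.

```python
import hashlib
import os

def commit(block, nonce):
    # Hiding/binding commitment to one document block.
    return hashlib.sha256(nonce + block).hexdigest()

def sign_document(blocks):
    """Commit each block with a fresh nonce; the digest over all
    commitments is what a real signature would be computed on."""
    nonces = [os.urandom(16) for _ in blocks]
    commits = [commit(b, n) for b, n in zip(blocks, nonces)]
    digest = hashlib.sha256("".join(commits).encode()).hexdigest()
    return nonces, commits, digest

def sanitize(blocks, nonces, hide):
    """Withhold content and nonce of hidden blocks; their commitments
    alone reveal nothing about the masked text."""
    return [(b, n) if i not in hide else (None, None)
            for i, (b, n) in enumerate(zip(blocks, nonces))]

def verify(disclosed, commits, digest):
    """Check the digest over all commitments, then each disclosed block
    against its commitment; hidden blocks are skipped."""
    if hashlib.sha256("".join(commits).encode()).hexdigest() != digest:
        return False
    return all(commit(b, n) == c
               for (b, n), c in zip(disclosed, commits) if b is not None)
```

A citizen can thus verify the disclosed portions of a sanitized document, while any tampering with a disclosed block breaks verification.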
Upwind Compact Finite Difference Schemes
NASA Astrophysics Data System (ADS)
Christie, I.
1985-07-01
It was shown by Ciment, Leventhal, and Weinberg (J. Comput. Phys. 28 (1978), 135) that the standard compact finite difference scheme may break down in convection dominated problems. An upwinding of the method, which maintains the fourth order accuracy, is suggested and favorable numerical results are found for a number of test problems.
On symmetric and upwind TVD schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.
1985-01-01
A class of explicit and implicit total variation diminishing (TVD) schemes for the compressible Euler and Navier-Stokes equations was developed. They do not generate spurious oscillations across shocks and contact discontinuities. In general, shocks can be captured within 1 to 2 grid points. For the inviscid case, these schemes are divided into upwind TVD schemes and symmetric (nonupwind) TVD schemes. The upwind TVD scheme is based on the second-order TVD scheme. The symmetric TVD scheme is a generalization of Roe's and Davis' TVD Lax-Wendroff scheme. The performance of these schemes on some viscous and inviscid airfoil steady-state calculations is investigated. The symmetric and upwind TVD schemes are compared.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately for wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
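The two-step estimation can be sketched directly: a logistic model first classifies each day as wet or dry, and an amount model (a linear regression here, standing in for the paper's regressions) is evaluated only on predicted wet days. All coefficients below are hypothetical, for illustration.

```python
import math

def two_step_estimate(features, occ_coef, amt_coef, threshold=0.5):
    """Two-step daily precipitation estimate.

    occ_coef: [intercept, slopes...] of a logistic occurrence model.
    amt_coef: [intercept, slopes...] of a linear amount model, applied
    only where the occurrence probability exceeds the threshold."""
    out = []
    for x in features:
        z = occ_coef[0] + sum(c * xi for c, xi in zip(occ_coef[1:], x))
        p_wet = 1.0 / (1.0 + math.exp(-z))      # logistic occurrence
        if p_wet < threshold:
            out.append(0.0)                     # dry day: no amount model
        else:
            amt = amt_coef[0] + sum(c * xi for c, xi in zip(amt_coef[1:], x))
            out.append(max(amt, 0.0))           # wet day: nonneg amount
    return out
```

Separating occurrence from amount is what keeps the many zero-precipitation days from biasing the amount regression.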
Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis
NASA Astrophysics Data System (ADS)
Yue, Zhihua
2005-11-01
The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems
Data Bus Adapts to Changing Traffic Level
NASA Technical Reports Server (NTRS)
Lew, Eugene; Deruiter, John; Varga, Mike
1987-01-01
Access becomes timed when collisions threaten. Two-mode scheme used to grant terminals access to data bus. Causes bus to alternate between random accessibility and controlled accessibility to optimize performance and adapt to changing data-traffic conditions. Bus is part of 100-Mb/s optical-fiber packet data system.
Clipping in neurocontrol by adaptive dynamic programming.
Fairbank, Michael; Prokhorov, Danil; Alonso, Eduardo
2014-10-01
In adaptive dynamic programming, neurocontrol, and reinforcement learning, the objective is for an agent to learn to choose actions so as to minimize a total cost function. In this paper, we show that when discretized time is used to model the motion of the agent, it can be very important to do clipping on the motion of the agent in the final time step of the trajectory. By clipping, we mean that the final time step of the trajectory is to be truncated such that the agent stops exactly at the first terminal state reached, and no distance further. We demonstrate that when clipping is omitted, learning performance can fail to reach the optimum, and when clipping is done properly, learning performance can improve significantly. The clipping problem we describe affects algorithms that use explicit derivatives of the model functions of the environment to calculate a learning gradient. These include backpropagation through time for control and methods based on dual heuristic programming. However, the clipping problem does not significantly affect methods based on heuristic dynamic programming, temporal differences learning, or policy-gradient learning algorithms.
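The effect of clipping is easy to see in a one-dimensional example: an agent moves toward a terminal boundary in fixed discrete time steps, and the final step is truncated so the agent stops exactly at the boundary, with the cost of that step scaled by the fraction actually traveled. This toy reproduces the exact continuous-time cost; without clipping, the cost would be quantized to whole steps.

```python
def trajectory_cost(x0, v, dt, cost_rate):
    """Total cost for an agent moving left at speed v from x0 > 0 until
    it reaches the terminal state x = 0, with the final time step
    clipped at the boundary."""
    x, cost = x0, 0.0
    while x > 0.0:
        step = v * dt
        if x - step <= 0.0:
            frac = x / step                 # fraction of the last step taken
            cost += cost_rate * dt * frac   # clipped final-step cost
            x = 0.0
        else:
            cost += cost_rate * dt
            x -= step
    return cost
```

With unit cost rate, the clipped cost equals the exact travel time x0 / v regardless of how the step size divides the distance.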
Adaptive path planning for flexible manufacturing
Chen, Pang C.
1994-08-01
Path planning needs to be fast to facilitate real-time robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours of computation. To overcome this difficulty, we present an adaptive algorithm that uses past experience to speed up future performance. It is a learning algorithm suitable for automating flexible manufacturing in incrementally-changing environments. The algorithm allows the robot to adapt to its environment by having two experience manipulation schemes: For minor environmental change, we use an object-attached experience abstraction scheme to increase the flexibility of the learned experience; for major environmental change, we use an on-demand experience repair scheme to retain those experiences that remain valid and useful. Using this algorithm, we can effectively reduce the overall robot planning time by re-using the computation result for one task to plan a path for another.
A Decentralized Adaptive Approach to Fault Tolerant Flight Control
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor
2000-01-01
This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is applicable, are stated in the paper.
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
NASA Astrophysics Data System (ADS)
Odriozola, Iñigo; Lazkano, Elena; Sierra, Basi
2011-10-01
This paper investigates the improvement of the Vector Field Histogram (VFH) local planning algorithm for mobile robot systems. The Adaptive Vector Field Histogram (AVFH) algorithm has been developed to improve the effectiveness of the traditional VFH path planning algorithm by overcoming the side effects of using static parameters. This new algorithm permits the adaptation of planning parameters for the different types of areas in an environment. Genetic Algorithms are used to fit the best VFH parameters to each type of sector and, afterwards, every section in the map is labelled with the sector type that best represents it. The Player/Stage simulation platform has been chosen for running all sorts of tests and proving the new algorithm's adequacy. Even though there is still much work to be carried out, the developed algorithm showed good navigation properties and turned out to be smoother and more effective than the traditional VFH algorithm.
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Watson, Bobby L.; Aeby, Ian
1982-01-01
An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
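The idea of clocking data into memory at a rate matched to its frequency content can be sketched in software. This is a loose software analogue, not the patented hardware: per-block "frequency content" is approximated by the mean absolute first difference, and quiet blocks are decimated 2:1, mimicking a slower memory clock.

```python
def adaptive_compress(signal, block=8, thresh=0.5):
    """Crude software analogue of frequency-adaptive storage: busy blocks
    (large mean absolute first difference) are stored at full rate,
    quiet blocks are decimated 2:1 (a slower 'memory clock')."""
    out = []
    for i in range(0, len(signal), block):
        blk = signal[i:i + block]
        activity = sum(abs(b - a) for a, b in zip(blk, blk[1:])) / max(len(blk) - 1, 1)
        out.append(blk if activity >= thresh else blk[::2])
    return out
```

A flat block thus stores half as many samples as a rapidly oscillating one, which is the essence of the variable-rate clock.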
Energy preservation and entropy in Lagrangian space- and time-staggered hydrodynamic schemes
NASA Astrophysics Data System (ADS)
Llor, Antoine; Claisse, Alexandra; Fochesato, Christophe
2016-03-01
Usual space- and time-staggered (STS) "leap-frog" Lagrangian hydrodynamic schemes-such as von Neumann-Richtmyer's (1950), Wilkins' (1964), and their variants-are widely used for their simplicity and robustness despite their known lack of exact energy conservation. Since the seminal work of Trulio and Trigger (1950) and despite the later corrections of Burton (1991), it is generally accepted that these schemes cannot be modified to exactly conserve energy while retaining all of the following properties: STS stencil with velocities half-time centered with respect to positions, explicit second-order algorithm (locally implicit for internal energy), and definite positive kinetic energy. It is shown here that it is actually possible to modify the usual STS hydrodynamic schemes in order to be exactly energy-preserving, regardless of the evenness of their time centering assumptions and retaining their simple algorithmic structure. Burton's conservative scheme (1991) is found as a special case of time centering which cancels the term here designated as "incompatible displacements residue." In contrast, von Neumann-Richtmyer's original centering can be preserved provided this residue is properly corrected. These two schemes are the only special cases able to capture isentropic flow with a third order entropy error, instead of second order in general. The momentum equation is presently obtained by application of a variational principle to an action integral discretized in both space and time. The internal energy equation follows from the discrete conservation of total energy. Entropy production by artificial dissipation is obtained to second order by a prediction-correction step on the momentum equation. The overall structure of the equations (explicit for momentum, locally implicit for internal energy) remains identical to that of usual STS "leap-frog" schemes, though complementary terms are required to correct the effects of time-step changes and artificial viscosity.
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
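The Wiener-Hopf solution referred to above has a standard closed form: the optimal weight vector is proportional to the inverse noise covariance matrix applied to the steering vector, w = R⁻¹s. A minimal two-element sketch (real-valued for simplicity; practical arrays use complex weights):

```python
def wiener_hopf_weights(R, s):
    """Optimal 2-element array weights w = R^{-1} s, the Wiener-Hopf
    solution that maximizes output SNR. R is the 2x2 noise covariance
    matrix, s the steering vector; the 2x2 inverse is done by hand."""
    (a, b), (c, d) = R
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [inv[0][0] * s[0] + inv[0][1] * s[1],
            inv[1][0] * s[0] + inv[1][1] * s[1]]
```

With white noise (R the identity) the weights reduce to the matched filter s itself; a noisier element receives a proportionally smaller weight, which is how the adaptive array de-emphasizes jammed channels.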
Subranging scheme for SQUID sensors
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor)
2008-01-01
A readout scheme for measuring the output from a SQUID-based sensor-array using an improved subranging architecture that includes multiple resolution channels (such as a coarse resolution channel and a fine resolution channel). The scheme employs a flux sensing circuit with a sensing coil connected in series to multiple input coils, each input coil being coupled to a corresponding SQUID detection circuit having a high-resolution SQUID device with independent linearizing feedback. A two-resolution configuration (coarse and fine) is illustrated with a primary SQUID detection circuit for generating a fine readout, and a secondary SQUID detection circuit for generating a coarse readout, both having feedback current coupled to the respective SQUID devices via feedback/modulation coils. The primary and secondary SQUID detection circuits function independently and derive independent feedback. Thus, the SQUID devices may be monitored independently of each other (and read simultaneously) to dramatically increase slew rates and dynamic range.
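The subranging principle itself is simple to state numerically. The sketch below is a hypothetical illustration (not the patented circuit): the coarse channel selects which range the flux lies in, the fine channel resolves the residual within that range, and the two are recombined into one wide-dynamic-range value.

```python
def split_flux(phi, fine_full_scale):
    """Split a flux value into a coarse range index and a fine residual,
    as a two-resolution subranging readout would."""
    coarse = int(phi // fine_full_scale)     # coarse channel: which range
    fine = phi - coarse * fine_full_scale    # fine channel: residual within range
    return coarse, fine

def recombine(coarse, fine, fine_full_scale):
    """Reconstruct the full-scale value from the two channels."""
    return coarse * fine_full_scale + fine
```

Because each channel only needs to track its own (much smaller) span, the combined readout achieves a dynamic range that neither channel could reach alone.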
NASA Astrophysics Data System (ADS)
Popa, Mihnea; Roth, Mike
2003-06-01
In this paper we study the relationship between two different compactifications of the space of vector bundle quotients of an arbitrary vector bundle on a curve. One is Grothendieck's Quot scheme, while the other is a moduli space of stable maps to the relative Grassmannian. We establish an essentially optimal upper bound on the dimension of the two compactifications. Based on that, we prove that for an arbitrary vector bundle, the Quot schemes of quotients of large degree are irreducible and generically smooth. We precisely describe all the vector bundles for which the same thing holds in the case of the moduli spaces of stable maps. We show that there are in general no natural morphisms between the two compactifications. Finally, as an application, we obtain new cases of a conjecture on effective base point freeness for pluritheta linear series on moduli spaces of vector bundles.
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
Experimental investigation of adaptive control of a parallel manipulator
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami S.
1992-01-01
The implementation of a joint-space adaptive control scheme used to control non-compliant motion of a Stewart Platform-based Manipulator (SPBM) is presented. The SPBM is used in a facility called the Hardware Real-Time Emulator (HRTE) developed at Goddard Space Flight Center to emulate space operations. The SPBM is comprised of two platforms and six linear actuators driven by DC motors, and possesses six degrees of freedom. The report briefly reviews the development of the adaptive control scheme which is composed of proportional-derivative (PD) controllers whose gains are adjusted by an adaptation law driven by the errors between the desired and actual trajectories of the SPBM actuator lengths. The derivation of the adaptation law is based on the concept of model reference adaptive control (MRAC) and Lyapunov's direct method under the assumption that SPBM motion is slow as compared to the controller adaptation rate. An experimental study is conducted to evaluate the performance of the adaptive control scheme implemented to control the SPBM to track vertical and circular paths under step changes in payload. Experimental results show that the adaptive control scheme provides superior tracking capability as compared to fixed-gain controllers.
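The structure described — PD controllers whose gains are driven up by tracking error — can be sketched schematically. This is a hypothetical, much-simplified adaptation law for illustration only, not the paper's MRAC derivation:

```python
def adaptive_pd_step(x, x_dot, ref, ref_dot, gains, gamma, dt):
    """One control step of a PD controller with a gradient-type
    adaptation law: each gain grows in proportion to the square of
    the error component it acts on (a simple MRAC-flavoured rule)."""
    e, e_dot = ref - x, ref_dot - x_dot
    kp, kd = gains
    u = kp * e + kd * e_dot          # PD control action
    # hypothetical adaptation law: error-driven gain adjustment
    kp += gamma * e * e * dt
    kd += gamma * e_dot * e_dot * dt
    return u, (kp, kd)
```

The key property mirrored from the paper is that a step change in payload enlarges the tracking error, which in turn raises the gains until tracking recovers, without any payload model being known.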
An adaptive morphological gradient lifting wavelet for detecting bearing defects
NASA Astrophysics Data System (ADS)
Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng
2012-05-01
This paper presents a novel wavelet decomposition scheme, named adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the scheme's ability to select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from bearings are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW for detecting bearing defects. The impulsive components can be enhanced and the noise suppressed simultaneously by the presented AMGLW scheme. Thus the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computation efficiency. It is quite suitable for online condition monitoring of bearings and other rotating machinery.
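The lifting framework underlying such a scheme splits the signal into even and odd samples, predicts the odd samples from the even ones, and then updates the approximation. The sketch below is schematic only — the filter switch on the local gradient illustrates the adaptivity idea, but the specific filters are placeholders, not the published AMGLW filter pair:

```python
def lifting_level(x, grad_thresh=1.0):
    """One level of a lifting wavelet with a gradient-switched update.

    Predict: odd samples predicted by the average of their even
    neighbours; the residual is the detail signal.
    Update: smooth regions use an average-style update, high-gradient
    regions (near impulses) use an extremum-following placeholder for
    a morphological-gradient-style update.
    """
    even, odd = x[0::2], x[1::2]
    detail = []
    for i, o in enumerate(odd):
        left = even[i]
        right = even[i + 1] if i + 1 < len(even) else even[i]
        detail.append(o - (left + right) / 2.0)      # predict step
    approx = []
    for i, e in enumerate(even):
        d = detail[i] if i < len(detail) else 0.0
        if abs(d) > grad_thresh:
            approx.append(max(e, e + d))             # impulse-preserving update
        else:
            approx.append(e + d / 2.0)               # average-filter update
    return approx, detail
```

On a smooth signal the details vanish and the approximation tracks the even samples, while an isolated impulse produces a large detail coefficient that the switched update does not smear away — the behaviour that makes bearing impacts visible.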
Adaptive control of nonlinear systems with actuator failures and uncertainties
NASA Astrophysics Data System (ADS)
Tang, Xidong
2005-11-01
Actuator failures have a damaging effect on the performance of control systems, leading to undesired system behavior or even instability. Actuator failures are unknown in terms of failure time instants, failure patterns, and failure parameters. For system safety and reliability, the compensation of actuator failures is of both theoretical and practical significance. This dissertation furthers the study of adaptive designs for actuator failure compensation in nonlinear systems. In this dissertation a theoretical framework for adaptive control of nonlinear systems with actuator failures and system uncertainties is established. The contributions are the development of new adaptive nonlinear control schemes to handle unknown actuator failures for convergent tracking performance, the specification of conditions as a guideline for applications and system designs, and the extension of the adaptive nonlinear control theory. In the dissertation, adaptive actuator failure compensation is studied for several classes of nonlinear systems. In particular, adaptive state feedback schemes are developed for feedback linearizable systems and parametric strict-feedback systems. Adaptive output feedback schemes are designed for output-feedback systems and a class of systems with unknown state-dependent nonlinearities. Furthermore, adaptive designs are addressed for MIMO systems with actuator failures, based on two grouping techniques: fixed grouping and virtual grouping. Theoretical issues such as controller structures, actuation schemes, zero dynamics, observation, grouping conditions, closed-loop stability, and tracking performance are extensively investigated. For each scheme, design conditions are clarified, and detailed stability and performance analysis is presented. A variety of applications including a wing-rock model, twin otter aircraft, hypersonic aircraft, and cooperative multiple manipulators are addressed with simulation results showing the effectiveness of the proposed schemes.
Quadtree-adaptive tsunami modelling
NASA Astrophysics Data System (ADS)
Popinet, Stéphane
2011-09-01
The well-balanced, positivity-preserving scheme of Audusse et al. (SIAM J Sci Comput 25(6):2050-2065, 2004), for the solution of the Saint-Venant equations with wetting and drying, is generalised to an adaptive quadtree spatial discretisation. The scheme is validated using an analytical solution for the oscillation of a fluid in a parabolic container, as well as the classic Monai tsunami laboratory benchmark. An efficient database system able to dynamically reconstruct a multiscale bathymetry based on extremely large datasets is also described. This combination of methods is successfully applied to the adaptive modelling of the 2004 Indian Ocean tsunami. Adaptivity is shown to significantly decrease the exponent of the power law describing computational cost as a function of spatial resolution. The new exponent is directly related to the fractal dimension of the geometrical structures characterising tsunami propagation. The implementation of the method as well as the data and scripts necessary to reproduce the results presented are freely available as part of the open-source Gerris Flow Solver framework.
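The quadtree discretisation at the heart of the cost reduction can be sketched generically. This is a minimal illustration of quadtree refinement, not Gerris code: a cell splits into four children wherever a refinement criterion (e.g. a steep free-surface gradient) flags it, down to a minimum cell size.

```python
class Quad:
    """A square cell of a quadtree mesh, refined on demand."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self, criterion, min_size):
        """Split into four children wherever `criterion(x, y, size)` flags
        the cell, recursing down to cells of `min_size`."""
        if self.size > min_size and criterion(self.x, self.y, self.size):
            h = self.size / 2
            self.children = [Quad(self.x + dx, self.y + dy, h)
                             for dx in (0, h) for dy in (0, h)]
            for c in self.children:
                c.refine(criterion, min_size)

    def leaves(self):
        """Leaf cells are where the solution is actually stored."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

When the criterion only fires along a thin front (as a tsunami wave is), the leaf count grows with the front's length rather than the domain's area, which is the power-law exponent reduction the abstract describes.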
A biometric signcryption scheme without bilinear pairing
NASA Astrophysics Data System (ADS)
Wang, Mingwen; Ren, Zhiyuan; Cai, Jun; Zheng, Wentao
2013-03-01
How to apply the entropy in biometrics to encryption and remote authentication schemes, so as to simplify key management, is a hot research area. Utilizing Dodis's fuzzy extractor method and Liu's original signcryption scheme, a biometric identity based signcryption scheme is proposed in this paper. The proposed scheme is more efficient than most previously proposed biometric signcryption schemes because it needs neither bilinear pairing computation nor modular exponentiation computation, both of which are largely time-consuming. The analysis results show that under the CDH and DL hard problem assumptions, the proposed scheme has the features of confidentiality and unforgeability simultaneously.
AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics
NASA Astrophysics Data System (ADS)
Plewa, T.; Müller, E.
2001-08-01
Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.
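The operator splitting approach mentioned above has a simple generic structure, sketched below as a hypothetical illustration (not amra code): each physics operator advances the same state over the same time step in sequence, so new processes can be bolted on without touching the hydrodynamics.

```python
def advance(state, dt, operators):
    """First-order operator splitting: apply each physics operator
    (hydro step, gravity, nuclear burning, radiative losses, ...)
    sequentially over the same time step dt."""
    for op in operators:
        state = op(state, dt)
    return state
```

For instance, with a source term followed by an amplification operator, the state is transformed by each in turn; the splitting error is O(dt) here, and symmetrized (Strang) orderings recover O(dt²).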
NASA Astrophysics Data System (ADS)
Stipancic, Tomislav; Jerbic, Bojan
Light conditions are an important part of every vision application. This paper describes an active behavioral scheme for a particular active vision system. This behavioral scheme enables the system to adapt to current environmental conditions by constantly validating the amount of reflected light using a luminance meter and dynamically changing significant vision parameters. The purpose of the experiment was to determine the connections between light conditions and inner vision parameters. As part of the experiment, Response Surface Methodology (RSM) was used to predict values of vision parameters with respect to luminance input values. RSM was used to approximate an unknown function for which only a few values were computed. The main output validation system parameter is called Match Score. Match Score indicates how well the found object matches the learned model. All obtained data are stored in a local database. By applying the new parameters predicted by the RSM in a timely manner, the vision application works in a stable and robust manner.
The evolving role of stroke prediction schemes for patients with atrial fibrillation.
Ha, Andrew; Healey, Jeff S
2013-10-01
Our approach to managing patients with atrial fibrillation has changed substantially over the past 10 years, as a result of numerous high-quality observational studies and randomized trials. In this article, we will provide practical guidance for the use of oral anticoagulation therapy in patients with atrial fibrillation. We will review the evolution of stroke and bleeding risk prediction schemes and discuss their role in patient care. Initially, stroke prediction schemes were used to identify patients with atrial fibrillation at the highest risk of stroke, in whom the use of oral anticoagulant therapy was believed to be the most important. However, with the advent of new, safer, and more convenient oral anticoagulant drugs, the role of these schemes has shifted to the identification of the lowest risk patients, representing the minority of patients with atrial fibrillation, in whom oral anticoagulant therapy is not recommended. At the same time, schemes were developed to predict bleeding, the major risk of oral anticoagulant therapy. However, use of these schemes has been limited by their complexity and significant correlation with stroke schemes. In general, it is advisable to base the decision to use oral anticoagulation on the patient's stroke risk and use bleeding schemes to identify absolute contraindications or modifiable risk factors for bleeding. Prediction schemes have been useful clinical tools, invaluable in the design of clinical trials, and have assisted greatly in economic analyses. However, the nature and role of such schemes is now adapting to the current era of novel oral anticoagulant agents.
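One widely used stroke prediction scheme of the kind discussed is the CHA₂DS₂-VASc score, which is simple enough to state as code (a sketch of the standard published point assignments, not from this article; for illustration only, not clinical use):

```python
def cha2ds2_vasc(age, female, chf, hypertension, stroke_tia, vascular, diabetes):
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation.

    Points: congestive heart failure 1, hypertension 1, age >= 75 gets 2
    (65-74 gets 1), diabetes 1, prior stroke/TIA 2, vascular disease 1,
    female sex category 1. Boolean risk factors passed as 0/1.
    """
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += chf + hypertension + vascular + diabetes
    score += 2 * stroke_tia
    return score
```

The score's clinical use matches the shift the article describes: rather than flagging the highest-risk patients for treatment, a score of 0 now identifies the low-risk minority in whom anticoagulation is not recommended.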
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Robust adaptive control of MEMS triaxial gyroscope using fuzzy compensator.
Fei, Juntao; Zhou, Jian
2012-12-01
In this paper, a robust adaptive control strategy using a fuzzy compensator for a MEMS triaxial gyroscope, which has system nonlinearities including model uncertainties and external disturbances, is proposed. A fuzzy logic controller that can compensate for the model uncertainties and external disturbances is incorporated into the adaptive control scheme in the Lyapunov framework. The proposed adaptive fuzzy controller can guarantee the convergence and asymptotic stability of the closed-loop system. The proposed adaptive fuzzy control strategy does not depend on accurate mathematical models, which simplifies the design procedure. The innovative development of intelligent control methods incorporated with conventional control for the MEMS gyroscope is derived with a strict theoretical proof of Lyapunov stability. Numerical simulations are carried out to verify the effectiveness of the proposed adaptive fuzzy control scheme and demonstrate its satisfactory tracking performance and robustness against model uncertainties and external disturbances compared with the conventional adaptive control method.